I'm using GlusterFS on AWS, specifically on a large instance with four ephemeral drives that I want to merge into a single file system.
Here is what I usually do:
umount /dev/xvdb
mkfs.xfs /dev/xvdb -f
mount /dev/xvdb /brick0
umount /dev/xvda
mkfs.xfs /dev/xvda -f
mount /dev/xvda /brick1
gluster volume create gv0 master:/brick0 master:/brick1 force
gluster volume start gv0
mount.glusterfs master:/gv0 /mount/point
The volume starts and works correctly for a few hours (under fairly intensive read/write operations on the volume). However, after about two hours (usually, though not always), the gluster volume is no longer mounted at the mount point. Has anyone experienced this issue before? Can you help me solve it? Thanks in advance.
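As a stopgap while debugging, I've been considering a small watchdog that checks whether the volume is still mounted and remounts it when it drops. This is only a sketch: the mount point, volume name, and `master` hostname are the ones from the commands above, and the remount loop is shown commented out.

```shell
#!/bin/sh
# Hypothetical watchdog sketch (not a fix): detect when the gluster volume
# drops off its mount point and remount it. Assumes the mount point,
# volume name, and "master" host used in the setup commands above.

MOUNT_POINT=/mount/point   # assumed mount point from the question

is_mounted() {
    # The mount point is the second field of each line in /proc/mounts;
    # exit 0 if the given path is currently mounted, nonzero otherwise.
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

# Example loop (left disabled): remount whenever the volume disappears.
# while sleep 60; do
#     is_mounted "$MOUNT_POINT" || mount.glusterfs master:/gv0 "$MOUNT_POINT"
# done
```

This only papers over the symptom, of course; the logs in /var/log/glusterfs on both the client and the bricks would be the place to look for the real cause.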