
I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit minimal server, to test out glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and the other five are clients. I am still new to this file system and would like to gain some hands-on experience.
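For context, a replica 2 volume like this is typically created on one of the servers with commands along these lines (just a sketch to show the shape of the setup; the hostnames and brick paths here are placeholders, not my real ones):

     # on the first server, once glusterd is running on both:
     gluster peer probe server2
     gluster volume create testvol replica 2 server1:/export/brick1 server2:/export/brick1
     gluster volume start testvol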

The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

     192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

where 192.168.122.120 is the IP address of the first "glusterfs server".

If I issue either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, mount shows that the volume is successfully mounted. But once a client is rebooted, the volume is not mounted after it comes back up!
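For reference, this is exactly what I run by hand after boot, plus a quick check that it worked:

     # manual mount from the CLI works every time
     mount.glusterfs 192.168.122.120:/testvol /var/local/testvol
     mount | grep testvol    # the volume shows up as mounted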

I searched the Internet and found this article, but since I am not running the client and server on the same node, IMHO it is not strictly applicable.

So, as a kludgy workaround, I put sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol into each client node's /etc/rc.local (see the snippet below). It seems to get the volume mounted on each node, as far as I can tell.
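Concretely, the tail end of each client's /etc/rc.local now looks like this (the 3-second sleep is just a guess at how long the network needs to come up):

     # kludge: give the network a moment, then mount the gluster volume by hand
     sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol
     exit 0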

But this is quite ugly, and I would appreciate a hint on how to resolve this boot-time glusterfs mounting issue correctly.

Note that I used the IP address of the first "glusterfs server", even though /etc/hosts on all nodes has been populated with the hostnames. I figured that using an IP address is more robust.

--Zack

user183394

2 Answers


A potential solution is to add the nobootwait and direct-io-mode options to your fstab entry; try something like this:

     serverip:/vol  mountpoint  glusterfs  defaults,nobootwait,_netdev,direct-io-mode=disable  0       0

Also, check your /etc/init/mounting-glusterfs.conf and add:

     exec start wait-for-state WAIT_FOR=networking WAITER=mounting-glusterfs-$MOUNTPOINT
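For what it's worth, the complete job file would then look roughly like this (a sketch; the start on and task stanzas are my assumption of how such a job is usually wired into Ubuntu 12.04's upstart/mountall, only the exec line comes from my actual config):

     # /etc/init/mounting-glusterfs.conf
     # delay glusterfs fstab mounts until networking is up
     start on mounting TYPE=glusterfs
     task
     exec start wait-for-state WAIT_FOR=networking WAITER=mounting-glusterfs-$MOUNTPOINT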

I hope this helps; I experienced a similar issue in the past and solved it with the combination of settings above.

ostendali

I wonder if adding some logging to your fstab might give you a bit more information. See the fstab configuration options in the admin guide: http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
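For instance, an entry along these lines should capture more detail in a client-side log (a sketch built from the log-level and log-file options quoted below; the log path is an arbitrary choice):

     192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev,log-level=DEBUG,log-file=/var/log/gluster-testvol.log 0 0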

6.1.2.2. Automatically Mounting Volumes

To automatically mount a Gluster volume, edit the /etc/fstab file and add the following line:

     HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0

For example:

     server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0

Mounting Options

You can specify the following options when using the mount -t glusterfs command. Note that you need to separate all options with commas.

  • backupvolfile-server=server-name
  • fetch-attempts=N (where N is the number of attempts)
  • log-level=loglevel
  • log-file=logfile
  • direct-io-mode=[enable|disable]
  • ro (for read-only mounts)
  • acl (for enabling POSIX ACLs)
  • worm (making the mount WORM, i.e. Write Once, Read Many)
  • selinux (enable SELinux on the GlusterFS mount)

For example:

     mount -t glusterfs -o backupvolfile-server=volfile_server2,fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs

I tend to think that going with IPs instead of names is simpler and more reliable.

Neal Magee