I want to use GlusterFS as distributed file storage on FreeBSD 11.1. The documentation is poor, so I followed some howtos on the net. I could create the GlusterFS volume, but I have trouble mounting it on another client machine. Here is what I did so far:

I have three hosts, all in the same subnet.

10.0.0.21 Webserver
10.0.0.31 gluster1
10.0.0.32 gluster2

I added the above entries to the /etc/hosts file on all three hosts.

I modified /etc/rc.conf on gluster1 and gluster2 with:

glusterd_enable="YES"
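
For reference, the same change can be made with sysrc(8) and the daemon then started without a reboot; a minimal sketch, assuming the port installs its rc script under the name glusterd:

sysrc glusterd_enable=YES
service glusterd start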

On gluster1 I did:

gluster peer probe gluster2

(succeeded)
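
To verify the peering, gluster peer status can be run on gluster1; on a healthy two-node setup it should report something like:

gluster peer status

Number of Peers: 1

Hostname: gluster2
State: Peer in Cluster (Connected)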

gluster1 and gluster2 each have one additional hard drive: /dev/da1

It is partitioned (BSD label) and mounted on both hosts as /datastore.

"cat /etc/fstab" gives on both gluster1 and gluster2:

# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/da0a       /               ufs     rw      1       1
/dev/da1a       /datastore      ufs     rw      2       2
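
For completeness, one way this layout could have been created (a sketch, assuming an empty da1 and the fstab entry above already in place):

gpart create -s BSD da1
gpart add -t freebsd-ufs da1
newfs -U /dev/da1a
mkdir -p /datastore
mount /datastore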

I created the Gluster volume volume1:

gluster volume create volume1 replica 2 transport tcp gluster1:/datastore gluster2:/datastore force

(I'm aware of the split-brain risk; this is a simple test scenario.)
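
(The force flag is needed because the bricks sit directly on the mount points; Gluster normally expects each brick to be a subdirectory below the mount point, e.g. a hypothetical layout like:

mkdir -p /datastore/brick1    # on both hosts
gluster volume create volume1 replica 2 transport tcp gluster1:/datastore/brick1 gluster2:/datastore/brick1

which would not need force.)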

I started volume1 with:

gluster volume start volume1

A check of volume1 with:

gluster volume info

gives me back:

Volume Name: volume1
Type: Replicate
Volume ID: a760c545-1cc9-47a4-bc9e-51f6180e4d7a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/datastore
Brick2: gluster2:/datastore
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

So far everything worked and seemed to be fine.

Now my trouble starts when I try to mount and use this volume on the client / consumer machine (Webserver).

I read in several places that the GlusterFS volume1 should be mountable with:

mount -t glusterfs gluster1:/volume1 /mnt

This simply gives me back the following error:

mount: gluster1:/volume1: Operation not supported by device

As I normally do before asking "silly" questions, I googled a lot for this. I also played around with installing glusterfs on the client (pkg install glusterfs), enabling it in the client's /etc/rc.conf, and adding FUSE-related settings, but I could not get it to work. I feel quite annoyed, because I know it must be a very small thing I'm missing!

Can anyone shed some light into my issue?

EDIT: A check with gluster volume status volume1 gave:

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/datastore                   N/A       N/A        N       N/A
Brick gluster2:/datastore                   N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        N       55181
Self-heal Daemon on gluster2                N/A       N/A        N       30318

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

So, I enabled NFS with this:

gluster volume set volume1 nfs.disable off

There was a warning that Gluster NFS is deprecated and NFS-Ganesha should be used instead; I ignored it for this test.

Now I restarted the volume:

gluster volume stop volume1 
gluster volume start volume1 

To check I did:

gluster volume info

which now showed:

Volume Name: volume1
Type: Replicate
Volume ID: a760c545-1cc9-47a4-bc9e-51f6180e4d7a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/datastore
Brick2: gluster2:/datastore
Options Reconfigured:
nfs.disable: off
transport.address-family: inet

So nfs.disable was set to off. NFS should be on now, right?
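
(Newer Gluster releases can also query the option directly:

gluster volume get volume1 nfs.disable

which should report off at this point.)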

But

gluster volume status volume1

still shows no NFS running:

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/datastore                   N/A       N/A        N       N/A
Brick gluster2:/datastore                   N/A       N/A        N       N/A
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        N       99115
NFS Server on gluster2                      N/A       N/A        N       N/A
Self-heal Daemon on gluster2                N/A       N/A        N       37075

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

What also disturbs me here (besides the NFS server showing Online: N) is that both bricks seem to be offline as well (Online: N)?!
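
When bricks show Online: N, the brick logs are usually the first place to look; a sketch, assuming Gluster's default log location and the brick path /datastore:

tail -n 50 /var/log/glusterfs/bricks/datastore.log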

So I'm really stuck and could use some help.

    It might be better to post on the SuperUser Stack Exchange, although somebody here might be able to help. – ProgrammingLlama May 09 '18 at 17:00
  • Another Google hour indicated that it could be related to NFS being disabled on the gluster volume... I will follow up on this lead tomorrow – stoney May 09 '18 at 20:54
  • Enabling glusterfs NFS did not help, read my edit section above – stoney May 10 '18 at 07:59
  • 1
    I wish I had experience with glusterfs and that I could help you. Have you tried asking on their [IRC channel](https://www.gluster.org/community/)? – ProgrammingLlama May 10 '18 at 08:04
  • 1
    Ahh good old IRC... did not use it for more then 10 years! But it seems to be definitely worth trying! Thank you John. – stoney May 10 '18 at 08:08

2 Answers

Finally it is working:

/usr/local/sbin/mount_glusterfs gluster1:/volume1 /mnt

did the trick. (Presumably mount -t glusterfs fails because mount(8) only searches /sbin and /usr/sbin for external mount_<fstype> helpers, while the port installs mount_glusterfs under /usr/local/sbin.)

The client also needs to have the net/glusterfs package installed, and the following statement in /boot/loader.conf:

fuse_load="YES"
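
Putting it all together, the client-side setup looks roughly like this (a sketch; kldload fuse loads the kernel module immediately, so no reboot is needed):

pkg install glusterfs
echo 'fuse_load="YES"' >> /boot/loader.conf
kldload fuse
/usr/local/sbin/mount_glusterfs gluster1:/volume1 /mnt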

Cheers

– stoney

I think the issue may be with the UFS file system. Does it fully support extended attributes?

GlusterFS requires a file system with extended attribute support (XFS is one).

From https://access.redhat.com/articles/1273933:

As the Red Hat Storage makes extensive use of extended attributes, an XFS inode size of 512 bytes works better with Red Hat Storage than the default XFS inode size of 256 bytes. So, inode size for XFS must be set to 512 bytes, while formatting the Red Hat Storage bricks. To set the inode size, you need to use -i size option with the mkfs.xfs command.
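
On a Linux brick, that formatting step would look something like this (hypothetical device /dev/sdb1):

mkfs.xfs -i size=512 /dev/sdb1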

– kumar