Problem: A Distributed-Replicate Gluster volume only has half of its expected capacity.
I set up two AWS EC2 instances as Gluster servers backing a Gluster volume, and a third EC2 instance that mounts the volume.
Both Gluster servers have two bricks of 2G each. The volume is set up with replica 2, with the intention that each server holds an identical 4G copy of the data, so the volume's usable capacity should be 4G.
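For reference, the volume was created roughly like this (reconstructed from the brick list below, so the exact commands may have differed slightly):

# on ip-172-31-10-167: probe the second server, then create a 2 x 2 distributed-replicate volume
sudo gluster peer probe ip-172-31-28-55.eu-west-1.compute.internal
sudo gluster volume create swarm replica 2 \
    ip-172-31-10-167.eu-west-1.compute.internal:/data/gluster/swarm/brick0 \
    ip-172-31-28-55.eu-west-1.compute.internal:/data/gluster/swarm/brick0 \
    ip-172-31-10-167.eu-west-1.compute.internal:/data/gluster/swarm/brick1 \
    ip-172-31-28-55.eu-west-1.compute.internal:/data/gluster/swarm/brick1
sudo gluster volume start swarm

Here is the output from querying on one of the Gluster servers: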
ubuntu@ip-172-31-10-167:~$ sudo gluster volume info
Volume Name: swarm
Type: Distributed-Replicate
Volume ID: 142a9406-f3c9-49c8-a38f-f55e85185d1a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ip-172-31-10-167.eu-west-1.compute.internal:/data/gluster/swarm/brick0
Brick2: ip-172-31-28-55.eu-west-1.compute.internal:/data/gluster/swarm/brick0
Brick3: ip-172-31-10-167.eu-west-1.compute.internal:/data/gluster/swarm/brick1
Brick4: ip-172-31-28-55.eu-west-1.compute.internal:/data/gluster/swarm/brick1
Options Reconfigured:
auth.allow: *
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
ubuntu@ip-172-31-10-167:~$ sudo gluster volume status
Status of volume: swarm
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ip-172-31-10-167.eu-west-1.compute.in
ternal:/data/gluster/swarm/brick0 49152 0 Y 15345
Brick ip-172-31-28-55.eu-west-1.compute.int
ernal:/data/gluster/swarm/brick0 49152 0 Y 14176
Brick ip-172-31-10-167.eu-west-1.compute.in
ternal:/data/gluster/swarm/brick1 49153 0 Y 15366
Brick ip-172-31-28-55.eu-west-1.compute.int
ernal:/data/gluster/swarm/brick1 49153 0 Y 14197
Self-heal Daemon on localhost N/A N/A Y 15388
Self-heal Daemon on ip-172-31-28-55.eu-west
-1.compute.internal N/A N/A Y 14219
Task Status of Volume swarm
------------------------------------------------------------------------------
There are no active volume tasks
ubuntu@ip-172-31-10-167:~$ sudo gluster volume status swarm detail
Status of volume: swarm
------------------------------------------------------------------------------
Brick : Brick ip-172-31-10-167.eu-west-1.compute.internal:/data/gluster/swarm/brick0
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 15345
File System : xfs
Device : /dev/xvdb
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 2.0GB
Total Disk Space : 2.0GB
Inode Count : 1048576
Free Inodes : 1048533
------------------------------------------------------------------------------
Brick : Brick ip-172-31-28-55.eu-west-1.compute.internal:/data/gluster/swarm/brick0
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 14176
File System : xfs
Device : /dev/xvdb
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 2.0GB
Total Disk Space : 2.0GB
Inode Count : 1048576
Free Inodes : 1048533
------------------------------------------------------------------------------
Brick : Brick ip-172-31-10-167.eu-west-1.compute.internal:/data/gluster/swarm/brick1
TCP Port : 49153
RDMA Port : 0
Online : Y
Pid : 15366
File System : xfs
Device : /dev/xvdb
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 2.0GB
Total Disk Space : 2.0GB
Inode Count : 1048576
Free Inodes : 1048533
------------------------------------------------------------------------------
Brick : Brick ip-172-31-28-55.eu-west-1.compute.internal:/data/gluster/swarm/brick1
TCP Port : 49153
RDMA Port : 0
Online : Y
Pid : 14197
File System : xfs
Device : /dev/xvdb
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 2.0GB
Total Disk Space : 2.0GB
Inode Count : 1048576
Free Inodes : 1048533
So everything seems fine from the above. But when I mount the volume on the third server, it shows a capacity of only 2G instead of 4G:
ubuntu@ip-172-31-13-169:~$ mount
ip-172-31-10-167.eu-west-1.compute.internal:/swarm on /swarm/volumes/mytest type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev)
ubuntu@ip-172-31-13-169:~$ df -h
Filesystem Size Used Avail Use% Mounted on
ip-172-31-10-167.eu-west-1.compute.internal:/swarm 2.0G 53M 2.0G 3% /swarm/volumes/mytest
ubuntu@ip-172-31-13-169:~$
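For completeness, the volume is mounted on the client via fstab, roughly with an entry like this (the _netdev option in the mount output above comes from fstab; the exact entry may differ):

ip-172-31-10-167.eu-west-1.compute.internal:/swarm /swarm/volumes/mytest glusterfs defaults,_netdev 0 0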
And sure enough, the volume fills up if I write 2G to it.
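For example, something along these lines fills the mount (the file name is just a placeholder):

# writing 2G to the FUSE mount exhausts the reported capacity
dd if=/dev/zero of=/swarm/volumes/mytest/bigfile bs=1M count=2048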
If I create a bunch of small files at once, I can see that they are distributed between brick0 and brick1; roughly how I checked is sketched below.
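(The exact file names do not matter:)

# on the client: create a batch of small files
touch /swarm/volumes/mytest/testfile{1..20}
# on either server: some of the files end up under brick0, the rest under brick1
ls /data/gluster/swarm/brick0 /data/gluster/swarm/brick1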
All AWS EC2 instances are running Ubuntu 16.04 LTS (AMD64, HVM, EBS). I have tried Gluster versions 3.12.7 and 4.0.1.
What am I missing?