I've just created a btrfs RAID10 array for the first time on my Ubuntu 14.04 box, using the command:
mkfs.btrfs -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
My four hard drives are 2 TB each. Under RAID10 I expected roughly 3.6 TB of usable space in total, yet for some reason df reports 7.3 TB usable (see the /dev/sda line below, and the quick calculation after the output):
Filesystem      Size  Used Avail Use% Mounted on
/dev/sde1        42G  1.7G   38G   5% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            7.8G   12K  7.8G   1% /dev
tmpfs           1.6G  1.2M  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            7.9G     0  7.9G   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/sda        7.3T   13G  7.3T   1% /mnt/tmp
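For what it's worth, this is the arithmetic behind my ~3.6 TB expectation (my assumption being that RAID10 mirrors everything, so only half of the raw capacity is usable):

# 4 drives x 1.82 TiB raw, divided by 2 for mirroring
$ echo '4 * 1.82 / 2' | bc -l
3.64000000000000000000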
Is this expected behavior?
Other information:
bofh@stronghold:~$ sudo btrfs fi show
Label: none  uuid: a3a65325-0184-46d8-bef7-7ed066c9e320
        Total devices 4 FS bytes used 29.14GiB
        devid    1 size 1.82TiB used 16.03GiB path /dev/sda
        devid    2 size 1.82TiB used 16.01GiB path /dev/sdb
        devid    3 size 1.82TiB used 16.01GiB path /dev/sdc
        devid    4 size 1.82TiB used 16.01GiB path /dev/sdd
When I list the RAID profiles in use, I see both "RAID10" and "single":
bofh@stronghold:~$ sudo btrfs fi df /mnt/tmp
Data, RAID10: total=52.00GiB, used=48.77GiB
Data, single: total=8.00MiB, used=0.00
System, RAID10: total=16.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID10: total=2.00GiB, used=64.12MiB
Metadata, single: total=8.00MiB, used=0.00
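I haven't run a balance yet. If it's relevant, my (possibly incorrect) understanding is that the leftover "single" chunks could be converted to RAID10 with the balance convert filters, roughly like this:

# Not run yet -- just my understanding of how the stray "single" chunks
# would be converted, using the btrfs balance convert filters
sudo btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/tmp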
Another slightly odd thing is that btrfs device scan doesn't print any results:
bofh@stronghold:~$ sudo btrfs device scan
Scanning for Btrfs filesystems
bofh@stronghold:~$
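From what I've read, device scan may simply print nothing on success, since it only registers devices with the kernel; if it helps, I believe it can also be pointed at specific devices instead of probing everything:

# Assumption on my part: scan accepts explicit devices as arguments
sudo btrfs device scan /dev/sda /dev/sdb /dev/sdc /dev/sdd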
I'm running a fresh install of Ubuntu 14.04 server, on which I've done a dist-upgrade.
Any advice appreciated!