I recently upgraded a desktop server from two drives (1x1TB + 1x3TB) to one 1TB drive and three 3TB drives. In the process, I installed btrfs and put the three 3TB drives into a RAID1 array, leaving the 1TB drive as a boot volume.

I converted the original 3TB drive into a btrfs filesystem with:

sudo btrfs-convert /dev/sdb

then added the two new drives and re-balanced the whole filesystem:

btrfs balance start -dconvert=raid1 -mconvert=raid1 /media/HD2
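
(I don't have the exact invocation to hand, but the two new drives were attached with btrfs device add before the balance, along these lines; the device names here are placeholders:)

sudo btrfs device add /dev/sdX /dev/sdY /media/HD2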

After re-balancing, the total amount of space used by each drive differs:

$ sudo btrfs fi show
Label: 'hd2'  uuid: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Total devices 3 FS bytes used 2.51TiB
    devid    1 size 2.73TiB used 1.89TiB path /dev/sdc1
    devid    2 size 2.73TiB used 2.47TiB path /dev/sdb
    devid    3 size 2.73TiB used 2.47TiB path /dev/sdd

and:

$ sudo btrfs fi df /media/HD2
Data, RAID1: total=2.51TiB, used=2.51TiB
System, RAID1: total=32.00MiB, used=500.00KiB
Metadata, RAID1: total=931.00GiB, used=4.44GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
$

Is this just the result of accumulation of bad sectors on device #1?

Ron Gejman
    Accumulation of bad sectors? That is plain nonsense. As long as the disk has "reserve" sectors, it will use them (and report on request). If it runs out of those, the disk will fail. The reserve sectors will never be available as free space and using them will not reduce your available space on the disk. – Sven Apr 29 '15 at 12:29
  • What does `btrfs fi df` say? – Fox Apr 29 '15 at 14:15
  • @Fox: updated with the results of `btrfs fi df /media/HD2`. – Ron Gejman Apr 30 '15 at 23:18
  • @Sven: can you help me understand why the space used for /dev/sdc1 is different than the other two drives? – Ron Gejman Apr 30 '15 at 23:20

1 Answer


Looking at the numbers, it seems to make sense. I am no expert in btrfs, though, so please take what I offer as an explanation with a grain of salt.

The btrfs wiki has an explanation of how free space works in btrfs. What you see in btrfs fi show is space that has been allocated for some purpose; it does not necessarily mean there is less data on /dev/sdc1.
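
If your btrfs-progs is recent enough to have it, btrfs fi usage gives a clearer per-device breakdown of allocated versus actually used space than fi show and fi df do; something like:

sudo btrfs filesystem usage /media/HD2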

If you add the per-device usage (1.89+2.47+2.47) and halve it, because RAID1 stores every chunk twice, you get about 3.4TiB, which roughly matches the allocated regions from btrfs fi df (2.51+0.93=3.44TiB). Currently used space is about 2.51TiB. Multiply that by 2 and split it across 3 drives (2.51*2/3) and you get about 1.67TiB per drive. That is the amount of data each drive would hold if the filesystem were perfectly balanced, though you probably won't be able to achieve that.
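
Spelled out (all figures in TiB, rounded):

1.89 + 2.47 + 2.47 = 6.83    raw allocation, counting both RAID1 copies
6.83 / 2 ≈ 3.42              logical allocation, roughly matching 2.51 data + 0.93 metadata
2.51 * 2 / 3 ≈ 1.67          ideal mirrored-data share per drive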

Now, because btrfs allocates space in fairly large chunks, you have some allocated space (in metadata) that is not actually used. If the metadata space is fragmented (which could be a result of the balance moving only some of the metadata to the new drives), those blocks cannot be reclaimed as unallocated space (shown as free in fi show).

If I wanted to clear that up, I would try running a defrag on the filesystem and then balancing again.
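
Something along these lines (note that defragmenting unshares extents with any snapshots, including the conversion backup subvolume if it is still around, so space usage can temporarily grow):

sudo btrfs filesystem defragment -r /media/HD2
sudo btrfs balance start /media/HD2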

By the way, have you removed the ext4 backup subvolume after converting? To be honest, I have no idea where that gets accounted.
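
If it is still there, it normally sits in the top level of the converted filesystem as a subvolume called ext2_saved; something like the following should list and remove it (double-check the name on your system first):

sudo btrfs subvolume list /media/HD2
sudo btrfs subvolume delete /media/HD2/ext2_saved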

Fox
  • I haven't removed the backup volume yet because I want to make sure the system is stable. I haven't had any issues yet so I think I will remove it. Do you think I should remove it, see if the space issue goes away and then defrag? – Ron Gejman May 01 '15 at 14:08
  • It is very hard to suggest a safe approach (btrfs is still not widely trusted, especially in corner cases like this). Do you have a backup from before the conversion? You should. But I'd say make a backup of the filesystem; it should not be broken, thanks to checksums. Then I'd defragment and balance and see what happens... – Fox May 01 '15 at 21:40