I recently upgraded a desktop server from 2 drives (1TB + 3TB) to one with 1x1TB and 3x3TB drives. In the process, I switched to btrfs and RAIDed the three 3TB drives together, leaving the 1TB drive as the boot volume.
I converted the original 3TB drive into a btrfs filesystem using:
sudo btrfs-convert /dev/sdb
then added the two new drives and re-balanced the whole filesystem:
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /media/HD2
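(The add step itself isn't shown above; I no longer have the exact invocation, but it was along these lines, with the device names taken from the btrfs fi show output below and the mount point assumed to be the same one used for the balance:

sudo btrfs device add /dev/sdc1 /dev/sdd /media/HD2

)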
After re-balancing, the total amount of space used by each drive differs:
$ sudo btrfs fi show
Label: 'hd2'  uuid: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        Total devices 3 FS bytes used 2.51TiB
        devid    1 size 2.73TiB used 1.89TiB path /dev/sdc1
        devid    2 size 2.73TiB used 2.47TiB path /dev/sdb
        devid    3 size 2.73TiB used 2.47TiB path /dev/sdd
and:
$ sudo btrfs fi df /media/HD2
Data, RAID1: total=2.51TiB, used=2.51TiB
System, RAID1: total=32.00MiB, used=500.00KiB
Metadata, RAID1: total=931.00GiB, used=4.44GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
$
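If a more detailed per-device breakdown would help, I can also post the output of the following (a standard btrfs-progs command; output omitted here):

sudo btrfs filesystem usage /media/HD2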
Is this just the result of an accumulation of bad sectors on device #1?
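My assumption is that bad sectors would show up in the drive's SMART attributes, so I plan to check with something like this (smartctl is from the smartmontools package; /dev/sdc is the disk reported as devid 1 above):

sudo smartctl -a /dev/sdc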