In a performance-critical system, we run Btrfs RAID0. We have three devices relevant to this question:

- /dev/sda2, an SSD that is too small and too slow, and limits the size and overall performance of the system.
- /dev/sdb, actually multiple drives in a hardware RAID0 configuration, presented to software as a single device.
- /dev/sdc, identical to /dev/sdb, but not currently part of the Btrfs filesystem.
To get the performance and size we want, we are replacing /dev/sda2 with /dev/sdc:
btrfs replace start /dev/sda2 /dev/sdc /
btrfs replace status /
Checking the status, we see the counters ticking up, e.g. 2.3% done, 0 write errs, 0 uncorr. read errs, but the operation cancels without any error at random points; 34.7% is as far as we have gotten. After the cancellation, the status reverts to 0.0%:

Started on 31.Aug 22:31:17, canceled on 31.Aug 22:43:28 at 0.0%, 0 write errs, 0 uncorr. read errs
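One way to pin down exactly when the cancellation happens is to log the status periodically with timestamps, so it can be correlated with other system logs. A minimal sketch only; the 10-second interval and the log path are arbitrary choices:

# Append a timestamped status line every 10 seconds until interrupted.
while true; do
    echo "$(date '+%F %T')  $(btrfs replace status -1 /)"
    sleep 10
done >> /root/btrfs-replace-status.log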
There is also absolutely nothing added to dmesg while this is happening.
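To make sure no kernel message is missed while the replace runs, the kernel log can also be followed live; something along these lines (the journalctl variant assumes systemd):

# Follow kernel messages as they arrive, filtering for btrfs.
dmesg -w | grep -i btrfs
# Or, on a systemd machine:
journalctl -k -f | grep -i btrfs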
$ uname -sr
Linux 5.5.11-1.el7.elrepo.x86_64
Addendum:
It looks like we get further with more available space on the current volumes. We're at 20% free space now, and got all the way to 75.0% done before it failed. It could be a red herring, though.
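If free space really is the variable, one thing to try before the next attempt would be a filtered balance, which rewrites mostly-empty chunks and returns their space to the unallocated pool. A sketch only; the 50% usage thresholds are arbitrary:

# Compact data and metadata chunks that are less than 50% used.
btrfs balance start -dusage=50 -musage=50 /
# Check the effect on unallocated space afterwards.
btrfs filesystem usage /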
$ btrfs filesystem usage /
Overall:
    Device size:                   4.33TiB
    Device allocated:              1.06TiB
    Device unallocated:            3.27TiB
    Device missing:                  0.00B
    Used:                        841.52GiB
    Free (estimated):              3.50TiB      (min: 1.86TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID0: Size:1.01TiB, Used:802.82GiB
   /dev/sda2     517.00GiB
   /dev/sdb      517.00GiB

Metadata,RAID1: Size:24.19GiB, Used:19.35GiB
   /dev/sda2      24.19GiB
   /dev/sdb       24.19GiB

System,RAID1: Size:32.00MiB, Used:112.00KiB
   /dev/sda2      32.00MiB
   /dev/sdb       32.00MiB

Unallocated:
   /dev/sda2       1.00MiB
   /dev/sdb        2.75TiB