
I had a btrfs filesystem spanning a 6-disk array with no redundancy for the data (metadata in RAID10, but data in single), and one of the disks just died.

So I have lost some of my data; OK, I knew that was the risk.

But I have two questions:

  • Is it possible to know (using the metadata, I suppose) which data I have lost?

  • Is it possible to do some kind of "btrfs device delete missing" on this kind of setup, in order to regain read-write access to the rest of my data, or must I copy all my data to a new filesystem?

Thank you for any help.

(Sorry for my poor English.)

Edit: just to be clear, I can mount it read-only with mount -o recovery,ro,degraded.

And btrfs fi df /Data reports:

Data, single: total=6.65TiB, used=6.65TiB
System, RAID1: total=32.00MiB, used=768.00KiB
Metadata, RAID1: total=13.00GiB, used=10.99GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
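
For reference, a rough sketch of that degraded, read-only mount and the inspection commands; /dev/sda6 is the surviving device mentioned later in the thread and /Data the mount point, so adjust both to your own setup (on recent kernels the recovery option is called usebackuproot):

# Mount degraded and read-only to get at the surviving data
mount -o recovery,ro,degraded /dev/sda6 /Data

# Check which block-group profiles are in use and which devices are present
btrfs filesystem df /Data
btrfs filesystem show /Data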
  • After a short exchange on the btrfs mailing list: mounting my drive degraded and read-only allowed me to run a scrub. It seems I am a very lucky guy and lost no data at all :) Otherwise I would have seen a lot of errors in dmesg or journalctl, including the complete path of each affected file (see the sketch after these comments). – pums974 Sep 19 '16 at 22:08
  • Now I would like to regain read-write access to my data, but btrfs forbids that because my data was "single", even though no data was affected. It seems I have only two choices: buy at least 7 TB of new hard drives, move my data off, destroy and recreate the filesystem, and move the data back; or take a greater risk of losing data with some hacking. The latter will be my next move. I tried to patch the kernel to remove the check that forbids rw access and then tried a simple "btrfs device remove missing", but it doesn't work, and I have no idea what to do. I'm open to suggestions. – pums974 Sep 19 '16 at 22:09
  • The test I did before (related to the previous message) was in a virtual machine, with a manufactured test case that I thought was representative of my real problem. It was not. In my real situation, "btrfs-debug-tree -t 3 /dev/sda6" does not mention the missing disk anywhere (data or metadata), so there was nothing at all on the missing device. In the test case there was metadata (in RAID10) stored on the missing device; that is why it was complaining. With my real array, the patch was sufficient. – pums974 Sep 29 '16 at 13:14
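
A minimal sketch of the check described in the first comment, using the /Data mount point from the question; whether scrub accepts a read-only, degraded mount depends on the kernel version:

# Start a read-only scrub (-r makes no repairs), then poll its status
btrfs scrub start -r /Data
btrfs scrub status /Data

# Checksum and read errors are also logged by the kernel,
# including the full path of each affected file
dmesg | grep -i btrfs
journalctl -k | grep -i btrfs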

1 Answer


I'm a very, very lucky guy, and I think I have fixed my problem (thanks to the help of the btrfs mailing list).

In my situation, "btrfs-debug-tree -t 3 /dev/sda6" does not mention the missing disk anywhere (data or metadata), so there was nothing at all on the missing device.
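
A sketch of that check, assuming the devid of the dead disk is known (here hypothetically devid 6, e.g. taken from btrfs filesystem show or the kernel log); tree 3 is the chunk tree, which records on which devices every data and metadata chunk has a stripe:

# Dump the chunk tree from a surviving device and count stripes
# that reference the missing device (devid 6 is a placeholder)
btrfs-debug-tree -t 3 /dev/sda6 | grep -c 'devid 6'
# zero matches means no chunk (data or metadata) had a stripe on the dead disk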

Thus, patching the kernel with this patch allowed me to mount the array read-write in degraded mode, and a simple btrfs device remove missing did the trick.
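
Roughly, the repair sequence; this assumes the patched kernel is booted so that the read-write degraded mount is allowed, and it is only safe because the chunk tree showed nothing on the missing device:

# Mount read-write in degraded mode (only allowed here with the patched kernel)
mount -o degraded /dev/sda6 /Data

# Drop the dead disk from the array ("btrfs device delete missing" is equivalent)
btrfs device remove missing /Data

# Verify the result and check the data
btrfs filesystem show /Data
btrfs scrub start /Data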

So my array is fixed and my data seems fine (a scrub is in progress).

One thing I learned, though, is that the single data profile should never, ever be used.
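
For what it's worth, with enough devices and free space the data profile can be converted away from single with a balance; a sketch, assuming the filesystem is mounted at /Data and can hold a second copy of the data:

# Convert existing data chunks to raid1 (metadata already has redundancy here)
btrfs balance start -dconvert=raid1 /Data

# Check progress and the resulting profiles
btrfs balance status /Data
btrfs filesystem df /Data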
