
I have four drives as RAID-10. I've been having I/O errors on my system for unknown reasons and have had to redo the volume a couple of times. Each time a different drive has shown the I/O errors, so I think it's either bad cables or the power supply.
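
For context, the volume was created roughly like this (a sketch; device names are placeholders, and I'm assuming metadata is RAID-10 as well; the real drives sit behind device-mapper, hence the dm-5 in dmesg):

$ sudo mkfs.btrfs -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd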

However, when I created the current volume, one of the drives kept showing a lot of errors. Still, I was so happy to have the RAID "working" that I didn't want to do anything about it.

Now I think I've gotten the I/O errors down after replacing the cables. Still, when I mount my RAID volume I get this in dmesg:

BTRFS: error (device dm-5) in btrfs_drop_snapshot:9496: errno=-5 IO failure

I also get errors such as:

bad tree block start
parent transid verify failed on

I've tried mounting with the recovery option with all drives present, and as degraded without the drive that's been having the I/O errors (and there are a lot of them).
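
The mount attempts looked roughly like this (mount point and device names are placeholders; I used usebackuproot, the newer name for the recovery option):

$ sudo mount -o usebackuproot /dev/dm-5 /mnt
# and, with the failing drive disconnected, via one of the remaining devices:
$ sudo mount -o degraded,usebackuproot /dev/dm-4 /mnt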

I haven't been successful in getting any more info about the btrfs_drop_snapshot error, but I guess it's about a snapshot that was failing from before I replaced the cables.

Also, I've zeroed the journal and done a successful chunk recovery. Still no luck with the error message that remounts the volume as read-only. Scrub fails after a few gigabytes.
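
Concretely, the rescue steps were along these lines (run while unmounted; device name is a placeholder):

$ sudo btrfs rescue zero-log /dev/dm-5
$ sudo btrfs rescue chunk-recover /dev/dm-5
# then, after mounting:
$ sudo btrfs scrub start /mnt
$ sudo btrfs scrub status /mnt   # aborts after a few gigabytes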

I'm using a cronjob for snapshots.
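
The cron job just takes read-only snapshots on a schedule, roughly like this (paths are placeholders; note the escaped % signs cron requires):

0 * * * * /usr/bin/btrfs subvolume snapshot -r /mnt/data /mnt/snapshots/data-$(date +\%F-\%H\%M)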

Is there any way forward here, or do I need to redo the volume from scratch again?

What if I delete the previously failing drive and add it again?
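
I assume that would look something like the following (drive names and devid are placeholders). As I understand it, btrfs device remove refuses to go below four devices on RAID-10, so an in-place btrfs replace with a spare disk may be the only option:

$ sudo btrfs filesystem show /mnt                # to find the devid of the failing drive
$ sudo btrfs device remove /dev/dm-5 /mnt        # expected to refuse on a four-device RAID-10
$ sudo btrfs replace start -r 3 /dev/sde /mnt    # -r avoids reading from the failing drive
$ sudo btrfs replace status /mnt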

Can I throw away the snapshot that's failing (if that's actually the problem)?
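
If that's possible, I'd expect it to be something like this, assuming I can tell which snapshot is the bad one (paths are placeholders):

$ sudo btrfs subvolume list /mnt
$ sudo btrfs subvolume delete /mnt/snapshots/data-2019-12-01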

Thanks in advance, Daniel

Running Ubuntu 18.04 LTS

$ uname -r
5.0.0-37-generic

Btrfs-progs version: 4.15.1-1build1
  • Marc Merlins wrote some stuff about how to recover such a file system: http://marc.merlins.org/perso/btrfs/post_2014-03-19_Btrfs-Tips_-Btrfs-Scrub-and-Btrfs-Filesystem-Repair.html – Marc Stürmer Dec 23 '19 at 10:58
