I had 3 drives fail within a week on a RAID6. Luckily, one of them seemed to be mostly fine: I was able to clone it with ddrescue, which copied everything except one small area (it couldn't read about 14 MB out of 3 TB).
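For reference, the clone was made roughly like this (the exact options are from memory, and sdX here stands for the failing original drive; first a quick pass that skips unreadable areas, then retries with direct access):
# ddrescue -f -n /dev/sdX /dev/sdm ddrescue.log
# ddrescue -f -d -r3 /dev/sdX /dev/sdm ddrescue.log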
However, when I try to assemble the array using the cloned drive (with the original removed), mdadm fails (sdm is the cloned drive):
# mdadm --assemble --scan --force /dev/md127
mdadm: failed to add /dev/sdm1 to /dev/md127: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md127: Input/output error
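I assume the underlying reason for the "Invalid argument" is logged by the kernel rather than by mdadm itself, so checking the ring buffer right after the failed assemble should show more detail (I haven't captured that output here):
# dmesg | grep -iE 'md127|sdm'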
Examining the cloned drive looks fine:
# mdadm -E /dev/sdm1
/dev/sdm1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 9112a098:66dde535:f258911c:3af7e312
Name : cstor2.localdomain:127 (local to host cstor2.localdomain)
Creation Time : Wed Aug 27 01:34:29 2014
Raid Level : raid6
Raid Devices : 12
Avail Dev Size : 5859110912 (2793.84 GiB 2999.86 GB)
Array Size : 29295549440 (27938.41 GiB 29998.64 GB)
Used Dev Size : 5859109888 (2793.84 GiB 2999.86 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=18446744073701229568 sectors
State : active
Device UUID : 4d8e7a74:f9dca0be:0d899e70:cc798c51
Update Time : Sat Jan 2 21:15:23 2016
Checksum : dc798583 - correct
Events : 9341937
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAA.A.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
This matches what the other drive was reporting. However, mdadm throws the invalid-argument error any time I try to assemble the original RAID6. Does anyone have any ideas on what causes the invalid argument error, or how I can work around it?
I was pondering whether I should recreate the array with --assume-clean, but I'm not sure whether that would work correctly with only 10 of the 12 drives.
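If I do go that route, I assume the command would look roughly like the sketch below, with "missing" in the two failed slots (device roles 3 and 5, per the Array State above) and devN standing for whichever partition reports "Active device N" in mdadm -E; the metadata version, chunk size, layout, and data offset are copied from the output above (262144 sectors = 128M):
# mdadm --create /dev/md127 --assume-clean --metadata=1.2 --level=6 \
      --raid-devices=12 --chunk=512 --layout=left-symmetric --data-offset=128M \
      dev0 dev1 /dev/sdm1 missing dev4 missing \
      dev6 dev7 dev8 dev9 dev10 dev11
My understanding is that getting the device order, chunk size, or data offset wrong here would scramble the data, which is why I'm hesitant to try it.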