I had an mdadm RAID6 array with 13x1TB drives. Within 10 minutes, three of these drives fell out of the array. We assume a bad cable to the controller card and have replaced it; however, we now need to get the drives back into a working array.
Because md0 was marked as failed, we removed the mdadm array and created a new md0 with the original 13 drives. One drive failed again during the rebuild, so we now have a degraded md0. The problem is that LVM does not see the array that exists within mdadm. Is there anything we can do to get our data back?
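For context, here is the sort of non-destructive inspection we have been running on the members before touching anything else; a rough sketch, assuming the member partitions are sdb1 through sdn1 as on our box:
$ # Print each member's superblock: array UUID, device role, event count
$ mdadm --examine /dev/sd[b-n]1
$ # Detail of the currently assembled (degraded) array
$ mdadm --detail /dev/md0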
$pvscan
PV /dev/sda5 VG nasbox lvm2 [29.57 GiB / 0 free]
Total: 1 [29.57 GiB] / in use: 1 [29.57 GiB] / in no VG: 0 [0 ]
$cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdc1[1] sdg1[5] sdb1[0] sdf1[4] sde1[3] sdd1[2] sdi1[7] sdl1[10] sdm1[11] sdh1[6] sdj1[8] sdn1[12]
10744336064 blocks super 1.2 level 6, 64k chunk, algorithm 2 [13/12] [UUUUUUUUU_UUU]
unused devices: <none>
What I think we need to do is get LVM to detect the mdadm array so that we can mount it, but if I create a new volume group in LVM, it will wipe all the data from the array.
So, to put it simply: how do we get our data back from md0?
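What I would like to try first is the non-destructive route of making LVM rescan and activate whatever is already on md0, rather than creating anything new; a minimal sketch, assuming the PV metadata on md0 survived (our data VG is named zeus):
$ # Does md0 still carry LVM physical volume metadata at all?
$ pvck /dev/md0
$ # Rescan block devices for physical volumes and volume groups
$ pvscan
$ vgscan
$ # Activate the logical volumes in the data VG, then list them
$ vgchange -ay zeus
$ lvscan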
UPDATE: One of our sysadmins was able to restore an LVM config backup, so the volume group now shows up in LVM; however, we are still unable to mount the volume to view the data. Maybe a bad partition table?
$pvscan
PV /dev/sda5 VG nasbox lvm2 [29.57 GiB / 0 free]
PV /dev/md0 VG zeus lvm2 [10.01 TiB / 4.28 TiB free]
Total: 2 [10.04 TiB] / in use: 2 [10.04 TiB] / in no VG: 0 [0 ]
$mount /dev/md0
mount: /dev/mapper/zeus-data already mounted or /mnt/zeus busy
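That error suggests the device-mapper volume is already active or something is holding /mnt/zeus, so this is roughly what we plan to check next; a sketch, with the zeus-data LV and /mnt/zeus mount point taken from our config, and the fsck line assuming an ext3/ext4 filesystem:
$ # Is the LV already mounted somewhere?
$ grep zeus /proc/mounts
$ # State of the logical volumes and which devices back them
$ lvs -a -o +devices zeus
$ # What filesystem signature (if any) is on the LV?
$ blkid /dev/mapper/zeus-data
$ # Read-only filesystem check, no changes written
$ fsck -n /dev/mapper/zeus-data
$ # Try mounting the LV itself rather than the raw md device
$ mount /dev/mapper/zeus-data /mnt/zeus
$ dmesg | tail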