I've been trying to understand how disks are distributed in an mdadm RAID10, but I'm not completely sure. AFAIK, a native RAID10 is similar to RAID0 over RAID1, i.e. data is split into chunks and each chunk is written to a different RAID1 pair.
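For context, the array-layout behaviour (not the boot behaviour) is easy to poke at with a throwaway RAID10 on loop devices; a rough sketch, where the image paths and the `md100` device name are arbitrary examples:

    # Four small backing files attached to free loop devices
    for i in 0 1 2 3; do truncate -s 512M /tmp/disk$i.img; done
    devs=$(for f in /tmp/disk[0-3].img; do losetup -f --show "$f"; done)
    # A 4-disk RAID10 built from them
    mdadm --create /dev/md100 --level=10 --raid-devices=4 $devs
    mdadm --detail /dev/md100   # shows set-A / set-B next to each member
    cat /proc/mdstat            # shows the [N] role numbers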
My questions are:
- Using mdadm, how can I tell which drive belongs to which RAID1, so I know which combinations of two lost drives I can afford? In a typical RAID10, `mdadm --detail /dev/mdX` shows `set-A` and `set-B`. Is a set equivalent to a mirror? In that case, I can't lose a complete set, right?
- What does the number inside brackets mean (`sda[0]`)? How is it related to the "sets" (see the sysfs sketch after this list)? Example output:
      md0: active raid10 sdb[3] sda[2] sdc[0] sdd[1]
- Making some tests, I noticed that if I remove any combination of two drives and reboot, the machine won't boot and ends up at a `grub>` prompt because the RAID10 couldn't be assembled. The test was done on a virtual machine with UEFI enabled and a 4-disk RAID10 with everything (`/`) in a single partition, with the ESP replicated on every disk.
- Testing layouts with multiple partitions for different mount points, say `/`, `/boot`, `/boot/efi`, and swap, I noticed that breaking and reassembling the RAID10 can end up with a healthy but scrambled layout. Given the following case, I think I can't lose more than one drive, because the stripes are mixed up across drives (see the `mdadm --examine` sketch after this list):
      md0 : active raid10 sdb3[0] sdc3[2] sdd3[4] sda3[5]
      md1 : active raid10 sdc4[0] sdd4[1] sdb4[2] sda4[3]
      md2 : active raid10 sdb2[1] sdc2[3] sdd2[4] sda2[5]
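In case it helps anyone reproduce this: the `[N]` role numbers can also be read back from sysfs without parsing `/proc/mdstat`; a minimal sketch, assuming the array is `md0` (adjust the name to your setup):

    # Print the role number (the [N] in /proc/mdstat) of every md0 member
    for d in /sys/block/md0/md/dev-*; do
        printf '%s -> slot %s\n' "${d##*dev-}" "$(cat "$d/slot")"
    done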
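Likewise, to check which role each physical partition ended up holding after a reassembly, something along these lines should work (the device names match my multi-partition layout above):

    # Show the array UUID and current role recorded on each member
    for p in /dev/sd[abcd]3; do
        echo "== $p"
        mdadm --examine "$p" | grep -E 'Array UUID|Device Role'
    done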
Thank you