
I have Ubuntu Server 10.04 installed on a RAID 10 array (MD) using 4 hard drives.

As is well known, RAID 10 is RAID 1 + RAID 0. So, two drives are striped and then mirrored (or the other way around).

Is there an easy way to figure out which two of these four drives are striped together and which ones mirror each other?

Here is the output of `cat /proc/mdstat`:

Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid10 sda1[0] sdb1[1] sdd1[3] sdc1[2]
      388992 blocks 64K chunks 2 near-copies [4/4] [UUUU]

md2 : active raid10 sda7[0] sdb7[1] sdd7[3] sdc7[2]
      19529600 blocks 64K chunks 2 near-copies [4/4] [UUUU]

md4 : active raid10 sda9[0] sdb9[1] sdd9[3] sdc9[2]
      9762688 blocks 64K chunks 2 near-copies [4/4] [UUUU]

md1 : active raid10 sda6[0] sdb6[1] sdd6[3] sdc6[2]
      19529600 blocks 64K chunks 2 near-copies [4/4] [UUUU]

md5 : active raid10 sda10[0] sdb10[1] sdd10[3] sdc10[2]
      195309440 blocks 64K chunks 2 near-copies [4/4] [UUUU]

md6 : active raid10 sda11[0] sdb11[1] sdd11[3] sdc11[2]
      1558599552 blocks 64K chunks 2 near-copies [4/4] [UUUU]

md3 : active raid10 sda8[0] sdb8[1] sdd8[3] sdc8[2]
      146483072 blocks 64K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
Khaled
  • What does "cat /proc/mdstat" say? – MadHatter Nov 11 '10 at 10:57
  • I pasted the output of /proc/mdstat – Khaled Nov 11 '10 at 11:16
  • Actually, on Linux, raid10 is *not always* raid1+raid0, though in your specific case with `2 near-copies` and 4 drives, it is basically the same. As for an answer to your question, I've got no idea how to get md to tell you how it's deciding which chunks go on which drives. You can see more about Linux's version of raid10 here: http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10 – DerfK Nov 12 '10 at 05:45

5 Answers


It probably depends on which options were used to create the array.

Read man 4 md. The default layout is n2 (near). Here is a portion of the manual:

When configuring a RAID10 array, it is necessary to specify ... ... whether the replicas should be 'near', 'offset' or 'far'.
When 'near' replicas are chosen, the multiple copies of a given chunk are laid out consecutively across the stripes of the array, so the two copies of a datablock will likely be at the same offset on two adjacent devices.

When 'far' replicas are chosen, the multiple copies of a given chunk are laid out quite distant from each other. The first copy of all data blocks will be striped across the early part of all drives in RAID0 fashion, and then the next copy of all blocks will be striped across a later section of all drives, always ensuring that all copies of any given block are on different drives.
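
A rough illustration of how the layout is chosen and how to read it back (a sketch; /dev/md9 and the sdw–sdz partitions below are placeholders, not your real devices):

# reading the layout of an existing array; the "Layout" line should
# report something like "near=2" for the default n2 layout
mdadm --detail /dev/md0 | grep -E 'Layout|Chunk Size'

# for comparison, the layout is chosen at creation time, e.g. a far-2
# layout on a scratch array (do not run this against live disks):
mdadm --create /dev/md9 --level=10 --layout=f2 --chunk=64 \
      --raid-devices=4 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1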

Hans

Erm... odd question: they're all striped and all mirrored.

Basically you've got two sets of two disks; each set is striped and the two sets are mirrored. They're active-active; it's not as if one set just sits there.
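
To see which physical device occupies which slot (with 2 near-copies, slots 0 and 1 hold one set of copies and slots 2 and 3 the other), something like this should work; the sysfs paths are an assumption and may vary slightly by kernel version:

# rd0, rd1, ... are symlinks pointing at the member in each raid slot
ls -l /sys/block/md0/md/rd*

# the device table at the bottom of --detail lists RaidDevice numbers
# against the /dev names, which gives the same slot-to-disk mapping
mdadm --detail /dev/md0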

Chopper3
  • The question can be asked in another way. Can I just swap the SATA connections of one HD with another? Will the system work after that? – Khaled Nov 11 '10 at 11:04
  • @Khaled - that depends on how your RAID is built, whether it is built using the path to the disk or auto-detection. Of course, I'd have to ask *why* you're wanting to swap connectors around. – Cry Havok Nov 11 '10 at 12:09
  • @Cry Havok: I tried to boot my system in degraded mode (with 3 drives). I was able to boot it unless I unplugged the first drive. GRUB is installed on /dev/md0. So, I think I can boot it by unplugging the first drive and plugging the second drive into the first drive's port. Does that make sense? – Khaled Nov 11 '10 at 12:12

If the problem is actually booting from different drives, as one can perhaps infer from your comments on Chopper3's answer, then the answer has nothing to do with MD but rather with which drive(s) the master boot record is found on, no?

To answer the literal question, IIRC mdadm recognizes array members by UUID so it should be safe to switch them around.
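
A quick way to check (a sketch, assuming the superblocks are readable from the member partitions directly):

# every member of the same array should report the same array UUID,
# no matter which SATA port it happens to be plugged into
mdadm --examine /dev/sda1 | grep -i uuid
mdadm --examine /dev/sdb1 | grep -i uuid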

janneb

I'm not sure myself, but I do know that if you boot differently (e.g. after removing a drive) the letters assigned to the drives can change. So what was sdb might become sda.

If you have a problem with booting, install GRUB onto all drives. It won't hurt the RAID array, as GRUB sits outside the RAID configuration. GRUB also doesn't understand RAID, which is why you might not be able to boot if drives change. It might appear to be installed on /dev/md0, but that's just the GRUB files; the MBR isn't mirrored.
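
For example, something along these lines should put GRUB into the MBR of every member disk on Ubuntu 10.04 (a sketch; double-check the device names against your own system first):

# install the boot loader on each disk so the box can still boot if the
# BIOS falls back to a different drive
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$d"
done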

gbjbaanb

There are two different issues here.

-- Swapping disks:

This will always work, since md uses internal UUIDs to identify which disks are part of a given array, not the /dev paths or physical paths. So, moving disks from one port to another has no effect, provided md can see all the disks it needs. Check the output of mdadm --misc --detail for the UUID field. This is considered a feature of md.
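
For instance (a sketch; /dev/md0 is just one of the arrays from the question):

# the array UUID as recorded in the superblock of every member
mdadm --misc --detail /dev/md0 | grep UUID

# the same UUIDs enumerated per array -- handy for rebuilding mdadm.conf
mdadm --examine --scan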

-- Learning which disks are mirrored and which pairs are striped together.

Let's say a 1+0 topology with disks sdb, sdc, sdd and sde, all the same size. For instance:

[root@of ~]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      181760 blocks super 1.2 64K chunks 2 near-copies [4/4] [UUUU]

The problem we face is how to tell which disks can be removed without failing the volume.
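
One way to convince yourself, rather than guessing, is to rehearse the removal on a throwaway array built from loop devices (a sketch, assuming the same 2-near-copies layout; file and device names are placeholders):

# build a scratch 4-member RAID10 out of small image files
for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/d$i.img bs=1M count=64
    losetup /dev/loop$i /tmp/d$i.img
done
mdadm --create /dev/md9 --level=10 --layout=n2 --raid-devices=4 \
      /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

# with 2 near-copies, members 0+1 mirror each other and members 2+3
# mirror each other, so losing one member from each pair should leave
# the array degraded but still running
mdadm /dev/md9 --fail /dev/loop0
mdadm /dev/md9 --fail /dev/loop2
cat /proc/mdstat          # expect something like [4/2] [_U_U]

# clean up the experiment
mdadm --stop /dev/md9
for i in 0 1 2 3; do losetup -d /dev/loop$i; done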

user9517