
I built a RAID setup with three drives: two 1.5 TB (sdb and sdd) and one 3 TB (sdc). My approach is to combine the two 1.5 TB drives into a RAID0 array (md3), and create a RAID1 mirror (md2) from the 3 TB drive (sdc) and the RAID0 array (md3). This all works.

The problem: whenever I reboot the computer, the RAID1 array (md2) only sees one active drive (sdc), even though the RAID0 array (md3) correctly starts up. I have to manually re-add md3 to the md2 array each time. What's going on? Is there some way to make the system assemble md3 before it assembles md2?
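For context, the manual fix after each boot is something like the following (a sketch; the exact command isn't shown in the post, and --re-add is my assumption):

mdadm /dev/md2 --re-add /dev/md3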

I already had md2 set up with sdc in it. Generally speaking, I ran (approximately) these commands:

mdadm --create --verbose /dev/md3 --level=stripe --raid-devices=2 /dev/sdb /dev/sdd
mdadm /dev/md2 --add /dev/md3
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

I also added "DEVICE partitions containers /dev/md3" to the mdadm.conf file. This is all on Debian 6.0.8.
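For illustration, the relevant part of mdadm.conf might end up looking roughly like this (the UUIDs are placeholders, not the real values from the scan, and the ARRAY line order is assumed here):

# scan real partitions, containers, and the nested array for members
DEVICE partitions containers /dev/md3
# ARRAY lines as appended by 'mdadm --detail --scan'
ARRAY /dev/md2 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md3 metadata=1.2 UUID=eeeeeeee:ffffffff:11111111:22222222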

More information: After rebooting, /proc/mdstat reads (edited out the md0 and md1 info):

Personalities : [raid0] [raid1]

md3 : active raid0 sdb[0] sdd[1]
      2930274304 blocks super 1.2 512k chunks

md2 : active raid1 sdc[4]
      1415577600 blocks super 1.2 [2/1] [U_]

unused devices: <none>

It seems that md2 (the RAID1 array) has forgotten about md3.
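One way to check this (my suggestion, not from the original post) is to compare md2's view of its members with the RAID superblock that md2 wrote onto md3:

mdadm --detail /dev/md2     # md2's current member list
mdadm --examine /dev/md3    # the RAID1 member superblock stored on md3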

There is also something fishy during startup.

dmesg | grep -i 'md2\|md3\|raid'
[    2.537001] md: raid0 personality registered for level 0
[    2.539298] md: raid1 personality registered for level 1
[    2.620402] md: md2 stopped.
[    2.623636] raid1: raid set md2 active with 1 out of 2 mirrors
[    2.623655] md2: detected capacity change from 0 to 1449551462400
[    2.625028]  md2: unknown partition table
[    2.914801] md: md3 stopped.
[    2.919365] raid0: looking at sdb
[    2.919368] raid0:   comparing sdb(2930274304)
[    2.919370] raid0:   END
[    2.919371] raid0:   ==> UNIQUE
[    2.919372] raid0: 1 zones
[    2.919373] raid0: looking at sdd
[    2.919374] raid0:   comparing sdd(2930274304)
[    2.919376] raid0:   EQUAL
[    2.919377] raid0: FINAL 1 zones
[    2.919380] raid0: done.
[    2.919381] raid0 : md_size is 5860548608 sectors.
[    2.919382] ******* md3 configuration *********
[    2.919397] md3: detected capacity change from 0 to 3000600887296
[    2.921296]  md3: unknown partition table
[    3.244104] raid1: raid set md1 active with 2 out of 2 mirrors
[    3.468709] raid1: raid set md0 active with 2 out of 2 mirrors
emarti

2 Answers


My approach is to combine the two 1.5 TB drives into a RAID0 drive (md3), and create a RAID1 mirror (md2) with the 3TB drive (sdc) and the RAID0 array (md3). This all works.

Your approach gives you more ways to lose data than RAID-10 would: if either of the disks in the stripe dies, the other one is useless. That's why people usually build stripes of mirrors, not mirrors of stripes.

Moreover, you're better off not using nested RAIDs; they bring in overhead which is rather needless. Linux software RAID supports RAID-10 on an odd number of disks. So you can have a RAID-1 for the boot partition on two or all three disks, and then combine the three disks into a RAID-10, as sketched below. Yes, you'd have some space left over beyond the RAID-10, but at least you'd have one solid RAID-10. The leftover space can be used for unimportant data.
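A minimal sketch of that layout, assuming equally sized partitions sdb1, sdc1, and sdd1 (device and array names here are placeholders):

# RAID-10 across an odd number of disks; the default near-2 layout spreads
# two copies of each block over the three members
mdadm --create /dev/md4 --level=10 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1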

UPD.: The easiest way to achieve a similar set-up would be to use LVM2's ability to either stripe or mirror logical volumes across physical disks, as in the sketch below.
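A rough LVM2 illustration under the same assumptions (the volume group and LV names are invented):

pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1          # make the disks physical volumes
vgcreate vg_data /dev/sdb1 /dev/sdc1 /dev/sdd1  # pool them into one volume group
lvcreate -m 1 -L 1T -n lv_mirror vg_data        # a mirrored logical volume (2 copies)
lvcreate -i 3 -I 64 -L 2T -n lv_stripe vg_data  # or one striped across the 3 PVs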

poige
  • Do you have a reference for the statement "*you'd better not use nested RAIDs, it brings in overhead which is rather needless*"? I'm not carping, I'm genuinely interested in this question. – MadHatter Dec 14 '13 at 09:49
  • Surely; at least it would need to update the disks' superblocks twice. – poige Dec 14 '13 at 10:00
  • I wasn't so curious about whether you thought it, since you'd not have written it if you didn't think it made sense; I was more hoping for some evidence on the subject. – MadHatter Dec 14 '13 at 10:13
  • Will it be able to create 3 TB of space with a 3-disk RAID-10? Can I 'upgrade' the existing RAID1 to RAID10? Part of the reason I've set it up this way is, hopefully, so it would be easy to add a 3 TB spare later. (I already have other disks for the /boot and / stuff; that's what md0 and md1 are for. These drives are exclusively for data storage.) – emarti Dec 14 '13 at 11:59
  • RAID-10 always loses half of the space to redundancy. But it's the fastest mode for reads and writes, and thus a must-have RAID level for typical DB servers. In case you'd rather not waste disk space, you can set up RAID-5 on the 3 disks. – poige Dec 14 '13 at 12:17
  • @emarti, in case you have more q-ns, you can contact me on Skype: poige.ru – poige Dec 14 '13 at 12:17
  • Well, this discussion is interesting but it still doesn't answer my question: why does this setup fail on boot? – emarti Dec 15 '13 at 09:36
  • @emarti, the answer is simple and boring: nothing bothers to scan already-assembled MD arrays to see whether they contain components of other arrays. If you want something similar that works out of the box, see the UPD. – poige Dec 15 '13 at 12:04

It turns out the solution was quite simple: make sure md3 is assembled before md2. These instructions are specific to Debian 6.

  1. In /etc/mdadm/mdadm.conf, place the arrays in the order you want them assembled. In this example:

     ARRAY /dev/md3 metadata=1.2
     ARRAY /dev/md2 metadata=1.2

  2. Run 'update-initramfs -u'. This was what I was missing before!

Now, when the computer boots, it first assembles md3, and then assembles md2. Previously, it assembled md2 first, and failed because it could not find md3.
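For completeness, a condensed version of the fix plus a post-reboot check (the verification commands are my addition):

update-initramfs -u        # rebuild the initramfs after reordering the ARRAY lines
# after the next reboot:
cat /proc/mdstat           # md2 should now show [2/2] [UU]
mdadm --detail /dev/md2    # both sdc and md3 should be listed as active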

emarti