
If using entire drives for an mdadm RAID which will be the boot device as well, is it more correct / standard to:

1) Configure the RAID out of partitions that encompass the whole drive (like /dev/sda1 + /dev/sdb1) and then partition the resulting single md device into the various partitions.

OR

2) Create all the partitions on each drive in their desired sizes and then create RAIDs of those (e.g. sda1 + sdb1, sda2 + sdb2, sda3 + sdb3, etc.)

I'm thinking the benefit to #1 would be ease of drive replacement, and also I was told that #1 allows mdadm to parallelize reads across the various member drives more effectively.

Is there some authoritative link which talks about one as being the preferred way to go?

sa289
  • Create partitions of type linux raid autodetect (of desired size, be it entire drive or not), then create a raid over those partitions – Dan Aug 12 '15 at 15:17

2 Answers


There is a distinct difference between a whole-disk MD (sda + sdb) and a partition-based MD (sda1 + sdb1), which you seem to lump together. Booting from a whole-disk MD is not possible. Therefore, I tend to build the MD RAID on the disks I boot from out of partitions.
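As a rough sketch of the partition-based approach for the boot disks (device names and sizes here are examples, not a prescription):

```shell
# Assumes /dev/sda1 and /dev/sdb1 already exist as equally sized
# partitions of type "Linux raid autodetect" (fd00 / 0xFD).
# Destructive and root-only -- adapt device names to your system.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0                 # file system for /
# Persist the array definition (path is /etc/mdadm/mdadm.conf on Debian):
mdadm --detail --scan >> /etc/mdadm.conf
```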

If I have secondary disks that form arrays (like sdc + sdd), I tend to make a whole-device MD, turn it into an LVM volume group, and add logical volumes to it. This makes replacing disks a bit easier, because you can just hot-remove the failed disk and hot-add the new one and you're done, as opposed to doing that for each partition. Additionally, if your replacement disk is bigger, it's easier to add that space to the array (although not impossible when using partitions).
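A minimal sketch of that setup and a disk replacement (the device and volume names `sdc`, `sdd`, `sde`, `vg_data`, `lv_data` are assumptions):

```shell
# Whole-device mirror on secondary disks, with LVM on top.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
pvcreate /dev/md1
vgcreate vg_data /dev/md1
lvcreate -L 100G -n lv_data vg_data

# Replacing a dead member is one fail/remove/add cycle for the
# whole disk, instead of one cycle per partition:
mdadm /dev/md1 --fail /dev/sdd --remove /dev/sdd
mdadm /dev/md1 --add /dev/sde
# With a larger replacement, the extra space can later be claimed
# via "mdadm --grow" followed by "pvresize /dev/md1".
```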

Halfgaar
  • Thanks. I've edited my question to remove the sda and sdb references - I think that was a misunderstanding on my part as to what's possible for the boot. – sa289 Aug 12 '15 at 18:22

There isn't a set standard (or best practice) that I'm aware of. Different distributions and vendors will have different recommendations for the layout.

For an OS installation I'll typically create two MD devices: one for swap (md0) and one for / (md1). If I had to separate OS data from application data I would assign md1 to LVM and create logical volumes to separate them, rather than create an additional MD device.
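That layout might look roughly like this (partition names, sizes, and the volume group name `vg_os` are illustrative):

```shell
# md0 = swap, md1 = LVM physical volume carrying / and any
# further file systems as logical volumes.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md0
pvcreate /dev/md1
vgcreate vg_os /dev/md1
lvcreate -L 20G -n root vg_os      # then mkfs and mount as /
```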

It all depends on your needs, what your application or OS vendors will support (if you have any), and your personal preferences.

Gene
  • Gotcha - is there any way to create a device like /dev/md1 and then partition it in setup without using LVM? It seems like CentOS setup is maybe not allowing that. – sa289 Aug 12 '15 at 16:35
  • @sa289 why not use LVM? – Halfgaar Aug 12 '15 at 16:42
  • No, you cannot apply a partition table to an MD device. If you don't want to use LVM but still want separate file systems you'll need to create multiple MD devices. Personally I see very little merit in separating OS and application/user data and so I just go with a single file system, unless I'm using physically separate media for the two. – Gene Aug 12 '15 at 16:42
  • You can create [partitionable MD devices](https://raid.wiki.kernel.org/index.php/Partitionable). – nh2 May 11 '18 at 02:20
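For completeness, a sketch of the partitionable-MD approach mentioned in the last comment. On modern kernels every md device is partitionable and partitions show up as `/dev/md0pN`; on old setups this required creating the array with `--auto=mdp`. Device names below are examples:

```shell
# Build the array from whole disks, then partition the md device itself.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart primary ext4 1MiB 100%
# The partition then appears as /dev/md0p1 and can be formatted
# and mounted like any other block device.
```

Note that installers (e.g. the CentOS setup discussed above) may still not support this layout even where the kernel does.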