
Which is better suited for a normal server:

  • several partitions, each bundled into its own RAID1 device (/dev/md0, /dev/md1, ...), with no unmirrored partitions
  • one big /dev/md0, with the partitions created on top of that device

What are the biggest pros and cons of both approaches? Is there a big difference? Which one is the better choice for a normal server without frequent changes to disks and partition setup?

I haven't found any sites giving actual advice on this decision. The only thing I frequently read was: do NOT bundle the raw /dev/hda and /dev/hdb devices (without at least one partition) into a RAID, because this causes the kernel to detect the RAID's partitions on the raw /dev/hdX devices, too.
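For illustration, here is a minimal sketch of the two layouts with mdadm, assuming two identical disks /dev/sda and /dev/sdb (all device names and the partition split are hypothetical):

    # Option 1: one RAID1 array per partition
    # (assumes /dev/sda and /dev/sdb carry matching partitions)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # swap

    # Option 2: one big RAID1 array, partitioned on top
    # (one large partition per disk, so the raw /dev/sdX devices are not used directly)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # on a reasonably recent kernel the array itself can then be partitioned;
    # the partitions show up as /dev/md0p1, /dev/md0p2, ...
    parted -s /dev/md0 mklabel gpt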

  • I ran into a similar issue yesterday during an OS upgrade of an HPC cluster. The two of us had never used a GUI for installing a server (we always used an automated provisioning system like ROCKS or custom kickstart files), but alas I created this document: http://bduc.nacs.uci.edu/guides/CentOS.Software.RAID-Installation.html I think it's better to create two RAID devices, one for /boot and another holding one large LVM partition. The LVM partition allows you to create any number of mount points (/home, /tmp, etc.); see the sketch after these comments. – Adam Dec 04 '12 at 19:42
  • In the "one big raid" scenario, there will be the need for a boot partition anyway, because the bootloader needs to see a "normal" partition on at least one device. I'm not sure if i want LVM, because of the added complexity in recovery situations. But then it would be clear: /boot (like about 500MB) and the rest for a LVM (which can be changed as needed later on) – allo Dec 04 '12 at 19:51
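A minimal sketch of the /boot-plus-LVM layout the comments converge on, assuming each disk has a small first partition and a large second partition (all names and sizes here are hypothetical):

    # small mirrored /boot, large mirrored LVM physical volume
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    mkfs.ext4 /dev/md0                  # /boot stays a plain filesystem
    pvcreate /dev/md1                   # the big array becomes an LVM PV
    vgcreate vg0 /dev/md1
    lvcreate -L 20G -n root vg0         # carve out mount points as needed
    lvcreate -L 4G  -n swap vg0
    lvcreate -l 100%FREE -n data vg0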

2 Answers


"what is the better choice for a normal server without frequent changes to disks and partition setup?"

To answer the question you posed: there is a reason there are so many disk array setups to choose from. Each scenario has its own requirements, performance considerations, and so on. Posting what the server is going to be used for might help.

See also: Linux LVM: Single or Multiple Harddisk Partitions?

TheCleaner
  • No special requirements. The server should get a root partition, swap, and a big data partition (/boot only if needed). It should use two identical hard drives, just to provide protection against disk failure. So the question is only: is there a big difference? My first thought was that one RAID should be one device, but some example setups in various howtos create one RAID per partition. – allo Dec 04 '12 at 19:32

Having the RAID1 span the entire disk allows you to replace a defective disk without downtime (if the disk controller supports hot swapping).
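A minimal sketch of such a replacement with mdadm, assuming the failed disk is /dev/sdb and /dev/md0 is the whole-disk mirror built from one big partition per disk (names are hypothetical):

    # mark the failing disk's member as failed and pull it from the array
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

    # hot-swap the physical disk, then copy the partition table from the
    # surviving disk (sfdisk shown here for MBR tables; GPT needs sgdisk)
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # add the new member; the mirror resynchronizes in the background
    mdadm /dev/md0 --add /dev/sdb1
    cat /proc/mdstat    # watch the rebuild progress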

If you have separate partitions you can do more creative things. You can put /, /boot, and /srv on different partitions. This allows you to split the RAID for /, do an OS upgrade in a VM that has access to the now-detached copy of /, reboot from that disk, and then replicate the upgraded / partition back over the old one. This is similar to Solaris Live Upgrade.
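A rough sketch of the splitting step with mdadm, assuming / lives on /dev/md1 built from /dev/sda2 and /dev/sdb2 (all names hypothetical; the VM and replication steps are left out, and the full procedure needs care over which half the final resync copies from):

    # detach one half of the / mirror; the array keeps running degraded
    mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
    cat /proc/mdstat                     # md1 now shows one missing member

    # /dev/sdb2 now holds a frozen copy of / that the upgrade VM can work on.
    # Re-adding a member later resyncs it from the running array, so make
    # sure the array you boot is the one holding the upgraded copy before:
    mdadm /dev/md1 --add /dev/sdb2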

If you have separate partitions you can have different RAID levels for those partitions.
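For example (a hypothetical two-disk sketch): mirror what must survive a disk failure and stripe what is disposable:

    # RAID1 for the system, RAID0 for disposable scratch space
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2   # scratch /tmp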

It depends on what is more important: high availability or flexibility.

Mircea Vutcovici
  • I do not want to do anything fancy, just normal operation. Why can't I hot-swap a drive when I have one RAID per partition? Can't I just repartition the new drive and add each partition to the correct RAID? – allo Dec 04 '12 at 19:48
  • If you happen to have a partition that is not part of the RAID, then you will need to unmount it before taking out the disk. You must be sure that all partitions are mirrored, even the swap one, and use the /dev/md* device as swap, not /dev/sd* (see the sketch after these comments). – Mircea Vutcovici Dec 04 '12 at 19:57
  • Added this to the summary. I do not need or want to have any partitions that are not mirrored. – allo Dec 04 '12 at 20:00
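A minimal sketch of the mirrored-swap advice above, assuming /dev/md2 is a RAID1 array built from one swap-sized partition on each disk (device names hypothetical):

    # put swap on the mirror itself, never on the underlying /dev/sd* partitions
    mkswap /dev/md2
    swapon /dev/md2

    # /etc/fstab entry so it survives a reboot
    echo '/dev/md2 none swap sw 0 0' >> /etc/fstab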