
I recorded the complete installation up to the point of failure, in case you have a question about a step I don't explain below: http://www.youtube.com/watch?v=BVe5vja3keo

During partitioning I created a software RAID 5 volume spanning three identical disks. On top of that RAID volume I created an encrypted volume, and inside the encrypted volume an LVM volume group containing two logical volumes: one for /boot and one for / (the remaining space). (screenshot: partition layout)

When it is time to install GRUB to the MBR I get the error `Executing 'grub-install /dev/sda' failed. This is a fatal error`. (screenshot: installer error)

After that I completed the installation without installing a boot loader.

I would greatly appreciate it if someone could help me out!

  • I do want redundancy for /boot, so placing it outside the RAID 5 volume is not an option.
  • I have tried placing a /boot partition directly inside the RAID 5 volume, and that doesn't work out of the box either.
  • If possible I'd like /boot inside the LVM, but if not, having it inside the RAID volume would be sufficient.
  • I know that software RAID is suboptimal for performance and that hardware RAID is preferred. However, my budget does not allow for a hardware controller, and redundancy and encryption are my primary concerns.
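For reference, the layout described above could be built from a shell roughly like this. This is a sketch, not the installer's exact steps: the partition names `/dev/sd[abc]1`, the volume-group name `vg0`, and the logical-volume sizes are assumptions.

```shell
# Assemble the three identical partitions into a software RAID 5 array
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# Encrypt the whole array with LUKS, then open it
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 md0_crypt

# Build the LVM stack inside the encrypted volume:
# one logical volume for /boot and one for / (the rest)
pvcreate /dev/mapper/md0_crypt
vgcreate vg0 /dev/mapper/md0_crypt
lvcreate -L 512M -n boot vg0
lvcreate -l 100%FREE -n root vg0
```

The catch is the last step: for this layout to boot, GRUB itself must be able to read /boot through the encryption layer, which is exactly where the installer fails.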
Deleted
  • I'm not sure why no one has mentioned this, but RAID5 on a system volume is a terrible idea and a false economy. – Joel E Salas Jul 25 '12 at 18:24
  • @Joel: Why? Which RAID level would you recommend and why? – Deleted Jul 25 '12 at 18:38
  • RAID5 without an NVRAM cache is incredibly slow for random writes. A RAID10 setup is far more appropriate for the workload an operating system generates. If at all possible, treat your OS and your mass storage as different volumes. – Joel E Salas Jul 25 '12 at 18:49
  • @Joel: Thanks! As long as I use software redundancy it's a possible solution. I got a tip about ZFS too; it seems pretty nice. I'll look into that as well (it has mirroring built in). https://wiki.ubuntu.com/ZFS/ – Deleted Jul 25 '12 at 18:56

2 Answers


If you absolutely have to do Software RAID, I'd suggest keeping /boot out of your encrypted/LVM partitions.

Magellan
  • -1 as I know it should be possible to have `/boot` in a RAID volume with GRUB 2. And if I want redundancy, having `/boot` outside the RAID volume isn't a solution. But thanks for trying to help. – Deleted Jul 25 '12 at 18:08
  • 3
    Possible and Realistic aren't always the same thing. Frankly, if you're trying to set up a server in a professional environment using Software RAID, you're daft. – Magellan Jul 25 '12 at 18:10
  • @Kent While it's *possible* I've yet to see a server handle this arrangement well in the event of a hard drive failure and subsequent reboot - ESPECIALLY for RAID 5. I'm a huge proponent of software RAID on light-weight servers, but even using FreeBSD's `geom_mirror` (RAID 1) for full-disk mirroring (which is MUCH friendlier to server boot sequences than Linux LVM RAID 5) there are numerous caveats... – voretaq7 Jul 25 '12 at 18:16
  • I highly doubt it's possible to have that inside an *encrypted* LVM partition – Lucas Kauffman Jul 25 '12 at 18:17
  • @Adrian: Well, sorry for hurting your feelings. It's clearly stated in the question what I am looking to do, and your answer did not help. – Deleted Jul 25 '12 at 18:34
  • @voretaq7: Of course I'll try it once I have it set up. I'll unplug the power for one of the disks and go from there. And I'm not married to having RAID 5 for the `/boot` partition. Mirroring is certainly a viable alternative for it. – Deleted Jul 25 '12 at 18:36
  • @Lucas: As long as I have redundancy for `/boot` it could just be a normal partition inside the RAID set. But I didn't get that to work either. It could certainly be a solution. – Deleted Jul 25 '12 at 18:39

Create a separate RAID partition on each of your disks for /boot, then RAID1 it (RAID1, not RAID10).

From my similar server:

$ cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md1 : active raid6 sdc2[3] sdd2[1] sdb2[0] sda2[2]
      143090816 blocks level 6, 64k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sdc1[2] sda1[0] sdd1[3] sdb1[1]
      136448 blocks [4/4] [UUUU]
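An array like md0 above could be created with something like the following. This is a sketch: the four member partitions `/dev/sd[a-d]1` match the mdstat output, but the metadata choice and filesystem are assumptions on my part.

```shell
# Mirror the small first partition of each disk for /boot.
# Metadata 0.90 keeps the superblock at the end of the device,
# so older GRUB versions can read the filesystem as if it were plain.
mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=0.90 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# A simple non-journaling filesystem is enough for a small /boot
mkfs.ext2 /dev/md0
```

Because every member of a RAID 1 array holds a complete, byte-identical copy of the filesystem, the bootloader can read /boot from any single surviving disk.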

And of course, don't forget about the MBR!
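Concretely, "don't forget about the MBR" means installing the GRUB boot code on every member disk, not just the first one. A sketch, using the device names from the mdstat output above:

```shell
# Install GRUB's boot code into the MBR of every array member,
# so the machine can still boot after any single disk fails
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$disk"
done
```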

MikeyB
  • I'll try this approach! I'll be back with results. I didn't get it to work with `/boot` directly inside the RAID 5 volume, but maybe I'll have better luck with RAID 1. I take it you have such a setup? Did you copy the MBR to the other disks in md0? – Deleted Jul 25 '12 at 18:42
  • 2
    @Kent If you want a prayer of being able to boot your system with the first drive failed you'll need to install a working MBR on each drive (it should go without saying, but you should test this thoroughly before deploying it - Test booting with the 1st drive failed, a rebuilt first drive, and with an array consisting entirely of rebuilt drives that were zero'd / had their MBR blanked before being added to the array. The last one is a major failure mode most people don't account for...) – voretaq7 Jul 25 '12 at 18:51
  • I get `Missing operating system` from Grub2 after installation when I reboot. That is with `/boot` on a RAID1 volume and `/` on an encrypted LVM volume on top of a RAID10 volume. – Deleted Jul 25 '12 at 21:16
  • Installed from scratch? Clear the MBR first. – MikeyB Jul 26 '12 at 01:45