I'm running a Linux server with Arch Linux, mostly for my own development needs.

Everything sits on SW-RAID1: there are two disk drives, sda and sdb, each with three partitions (sda1-sda3 and sdb1-sdb3).

sda1+sdb1 and sda2+sdb2 are SW-RAID1 pairs, managed with dmraid/mdadm.

The system properly detects /dev/md0 and /dev/md1 and boots from /dev/md0.
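
For reference, once the system is up, mdadm --detail --scan reports both arrays, roughly along these lines (metadata version, hostname and UUIDs are placeholders here, so treat this as a sketch rather than my exact output):

ARRAY /dev/md0 metadata=1.2 name=<hostname>:0 UUID=<uuid-of-md0>
ARRAY /dev/md1 metadata=1.2 name=<hostname>:1 UUID=<uuid-of-md1>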

On /dev/md1 there are 4 Logical Volumes created with LVM:

  • /dev/mapper/vg0-root is mapped to /
  • /dev/mapper/vg0-var is mapped to /var
  • /dev/mapper/vg0-home is mapped to /home
  • /dev/mapper/vg0-swap is used as swap

And of course the boot device, which is not on LVM:

  • /dev/md0 is mapped to /boot
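
The corresponding /etc/fstab entries look roughly like this (the filesystem types and mount options are assumptions in this sketch; the point is the device paths):

/dev/mapper/vg0-root  /      ext4  defaults  0 1
/dev/mapper/vg0-var   /var   ext4  defaults  0 2
/dev/mapper/vg0-home  /home  ext4  defaults  0 2
/dev/mapper/vg0-swap  none   swap  defaults  0 0
/dev/md0              /boot  ext4  defaults  0 2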

Or rather, that is how they are supposed to be mapped, because at boot my system isn't able to find the vg0-root device, nor any of the others.

/dev/mapper/control is the only entry in /dev/mapper.

When booting I get the following messages:

starting device 238
ERROR: device '/dev/mapper/vg0-root' not found. Skipping fsck.
mount: /new_root: no filesystem type specified.
You are now being dropped into an emergency shell.
sh: can't access tty: job control turned off
[rootfs ]#

I can repair this by hand: from the emergency shell I run the lvm tool and activate the volume group, which apparently fails to happen by default.
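
Concretely, the recovery from the emergency shell looks roughly like this (vg0 being my volume group; take this as a sketch of what I type, not an exact transcript):

lvm vgscan            # rescan for volume groups
lvm vgchange -ay vg0  # activate the volume group
exit                  # boot then continues normally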

When I boot into the rescue system (a PXE-booted minimal Debian system), that one also fails to auto-activate the volume group (i.e. the LVM volumes aren't available in /dev/mapper).
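
There, the equivalent manual activation would be something along these lines (assuming the lvm2 tools are present and the md arrays are already assembled; otherwise an mdadm --assemble --scan would come first):

vgscan
vgchange -ay
ls /dev/mapper   # the vg0-* volumes only show up after this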

I've got the following HOOKS line in my mkinitcpio.conf:

HOOKS=(base udev autodetect modconf block mdadm_udev lvm2 filesystems keyboard fsck)
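
After editing that file I regenerate the initramfs, i.e. something like:

mkinitcpio -P   # rebuild the images for all installed presets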

And this is the preload line in my /etc/default/grub:

GRUB_PRELOAD_MODULES="part_gpt part_msdos lvm mdraid09 mdraid1x"
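
For completeness, after changing /etc/default/grub the configuration gets regenerated with the usual command:

grub-mkconfig -o /boot/grub/grub.cfg   # regenerate grub.cfg so the preload modules are picked up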

Does anyone have an idea what I need to configure so the volume group gets activated by default?

Mastacheata
  • I have a similar problem - did you ever find a solution? Using CentOS 8, and my local filesystems are all on straight mdraid volumes - they all work fine. However, I have other mounts that are on LVMs on top of mdraid mounts, and they fail, causing boot to fail and drop to emergency shell. Common denominator seems to be having LVM over mdraid. From the shell, if I type "udevadm trigger", the LVMs are instantly found, /dev/md/* and /dev/mapper are updated, and the drives are mounted. Then I can "exit" and boot continues fine. Sounds like a udev ruleset bug. – Paul Nov 24 '20 at 19:07
  • Sadly: no. I copied my stuff to a new system and ditched the LVM part. – Mastacheata Nov 25 '20 at 12:48
  • I ended up finding a solution that worked for me, but it would be different for non-CentOS/RHEL systems. In my case, the lvm dracut module was not being loaded when creating the initrd. After adding that module, the LVMs were found and mounted. – Paul Nov 25 '20 at 17:03

0 Answers