Recently, after a reboot, the system landed in Emergency Mode. The cause appears to be that the system fails to mount some of the filesystems defined in `/etc/fstab`. These filesystems live on LVM Logical Volumes.
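For context, the relevant `/etc/fstab` entries are ordinary mounts pointing at the LV device nodes. The lines below are a reconstruction: the mount points, filesystem type, and options are guesses based on the LV names, not copied from my actual file.

```
# /etc/fstab (hypothetical entries; actual mount points/options may differ)
/dev/mapper/cloudlinux-var   /var   ext4   defaults   0 2
/dev/mapper/cloudlinux-tmp   /tmp   ext4   defaults   0 2
```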
Of the six Logical Volumes on the server, three come up fine and three fail to activate. All six are on the same Physical Volume (PV) and in the same Volume Group.
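For reference, the layout can be confirmed with the standard LVM2 reporting commands (the VG name `cloudlinux` is taken from the `vgchange` output further down):

```
# Confirm the layout: one PV, one VG, six LVs
pvs
vgs
lvs cloudlinux
```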
Some relevant errors from `journalctl -xb` (repeated for each of the three failed LVs) are:

```
Job dev-mapper-cloudlinux\x2dvar.device/start timed out.
Timed out waiting for device dev-mapper-cloudlinux\x2dtmp.device.
```
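These are the systemd device units generated from the fstab entries; the unit names are just the device paths escaped per `systemd-escape`. Their status can be checked directly (a sketch, using the var LV as the example):

```
# Inspect the device unit that timed out (quote the escaped name for the shell)
systemctl status 'dev-mapper-cloudlinux\x2dvar.device'
# List every unit that failed during this boot
systemctl --failed
```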
The `lvscan` and `lvdisplay` commands show these LVs as "NOT available".
View pv, vg, lv status - screenshot
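Beyond the screenshot, the activation state can also be read from the `lv_attr` field (its fifth character is the state: `a` for active, `-` for inactive). A sketch, using the var LV as the example:

```
# Fifth character of lv_attr is the activation state ('a' active, '-' inactive)
lvs -o lv_name,lv_attr cloudlinux
# Per-LV detail; the failed LVs report "LV Status  NOT available"
lvdisplay /dev/cloudlinux/var
```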
Running `lvchange -ay` on an affected LV produces no error (and no output), but the LV remains "NOT available". Similarly, running `vgchange -ay cloudlinux` outputs: `2 logical volume(s) in volume group "cloudlinux" now active`.
View output from lvchange and vgchange - screenshot
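To get more detail than the silent `lvchange`, the activation can be retried with verbosity turned up, and device-mapper can be queried to see whether the nodes were ever created. A sketch (the LV path is an example):

```
# Retry activation with maximum verbosity to see where it stops
lvchange -ay -vvvv cloudlinux/var
# Check whether device-mapper created the corresponding nodes
dmsetup ls
ls -l /dev/mapper/
```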
Booting from a CentOS Live Disk into recovery mode mounts the volumes with no issue, and all files are present. `fsck` reports no errors on the volumes.
View lvscan output from recovery mode - screenshot
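For completeness, the recovery-mode steps were roughly the following (the exact device paths and mount point shown are illustrative):

```
# From the CentOS live environment: scan, activate, verify
vgscan
vgchange -ay cloudlinux         # here all LVs activate without complaint
fsck -n /dev/cloudlinux/var     # read-only check; reports clean
mount /dev/cloudlinux/var /mnt  # files are all present
umount /mnt
```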
I also tried booting into an older kernel from the boot menu. This did not help (the LVs still could not be activated).