Recently I had to replace a faulty disk in a CentOS 7.5 server running software RAID1 (2 x Samsung NVMe drives).
After the disk replacement the server booted from the surviving drive. I copied the partition layout to the new disk with sfdisk, added the new partitions to the RAID arrays, and once the arrays had resynced I installed GRUB on the new disk with:
grub2-install /dev/nvme1n1
to make it bootable (so that if the other disk fails, the server can still boot).
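For completeness, the full sequence was roughly the following (a sketch, assuming the surviving disk is /dev/nvme0n1 and the replacement is /dev/nvme1n1, as in the lsblk output below):

# Copy the partition table from the surviving disk to the replacement
sfdisk -d /dev/nvme0n1 | sfdisk /dev/nvme1n1

# Add the new partitions back into their RAID1 arrays
mdadm --manage /dev/md0 --add /dev/nvme1n1p1
mdadm --manage /dev/md1 --add /dev/nvme1n1p2
mdadm --manage /dev/md2 --add /dev/nvme1n1p3

# Once /proc/mdstat showed all arrays as [UU], install the bootloader
grub2-install /dev/nvme1n1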
After I rebooted the server, the GRUB menu appeared, but after selecting any kernel the boot stops with the error:
symbol 'grub_efi_secure_boot' not found
I managed to bring the server back up by changing the boot sequence in the BIOS to put the old drive first.
How can I make the new disk bootable? Note that the server boots in legacy BIOS mode, not UEFI, and it came with a pre-installed image.
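As far as I can tell the server really does boot in legacy BIOS mode; the EFI variables directory, which only exists when the kernel was booted via UEFI, is absent:

# /sys/firmware/efi is only present on UEFI-booted systems
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"

On this server it prints "legacy BIOS boot".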
lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1       259:0    0   477G  0 disk
├─nvme0n1p1   259:2    0    32G  0 part
│ └─md0         9:0    0    32G  0 raid1 [SWAP]
├─nvme0n1p2   259:3    0   512M  0 part
│ └─md1         9:1    0 511.4M  0 raid1 /boot
└─nvme0n1p3   259:4    0 444.4G  0 part
  └─md2         9:2    0 444.3G  0 raid1 /
nvme1n1       259:1    0   477G  0 disk
├─nvme1n1p1   259:5    0    32G  0 part
│ └─md0         9:0    0    32G  0 raid1 [SWAP]
├─nvme1n1p2   259:6    0   512M  0 part
│ └─md1         9:1    0 511.4M  0 raid1 /boot
└─nvme1n1p3   259:7    0 444.4G  0 part
  └─md2         9:2    0 444.3G  0 raid1 /
cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 nvme1n1p2[2] nvme0n1p2[0]
      523712 blocks super 1.2 [2/2] [UU]

md2 : active raid1 nvme1n1p3[2] nvme0n1p3[0]
      465895744 blocks super 1.2 [2/2] [UU]
      bitmap: 2/4 pages [8KB], 65536KB chunk

md0 : active raid1 nvme0n1p1[0] nvme1n1p1[2]
      33521664 blocks super 1.2 [2/2] [UU]

unused devices: <none>
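If it helps with diagnosis, I can also compare the boot code the two disks actually carry in their MBRs; something along these lines (a sketch):

# Compare the first 446 bytes (the boot code area of the MBR) of both disks
cmp <(dd if=/dev/nvme0n1 bs=446 count=1 2>/dev/null) \
    <(dd if=/dev/nvme1n1 bs=446 count=1 2>/dev/null) \
  && echo "boot code identical" || echo "boot code differs"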