
/dev/sdc1 and /dev/sdd1 are mirrored (RAID1) into /dev/md3. /dev/md3 is encrypted with LUKS (via cryptsetup).
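For context, a RAID1 + LUKS stack like this is usually built along these lines (a sketch, not necessarily the exact commands used here; md3_crypt is a placeholder mapper name):

sudo mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
sudo cryptsetup luksFormat /dev/md3
sudo cryptsetup luksOpen /dev/md3 md3_crypt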

Last night I got DegradedArray events for /dev/sdc1. I shut down the system, physically removed the disk, and booted back up again. Now there is no md3 according to /proc/mdstat.

$ mdadm --detail /dev/md3

Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

Consistency Policy : resync

          Name : xxx:3
          UUID : 651393c9:1718e07a:6545908f:17100fe6
        Events : 11103

Number   Major   Minor   RaidDevice State
   -       0        0        0      removed
   1       8       49        1      active sync   /dev/sdd1

The physically remaining disk (the working one) is now called /dev/sdc. The partition type of /dev/sdc1 is "Linux RAID autodetect". I cannot open it with luksOpen; it doesn't even ask for a password.
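My guess is that the array is simply not assembled after the reboot, so there is no /dev/md3 for luksOpen to operate on. Something like the following might start it in degraded mode (a sketch, untested; md3_crypt is a placeholder mapper name):

sudo mdadm --examine /dev/sdc1                  # inspect the RAID superblock on the surviving partition
sudo mdadm --assemble /dev/md3 /dev/sdc1 --run  # start the array even though one member is missing
sudo cryptsetup luksOpen /dev/md3 md3_crypt     # should now prompt for the passphrase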

How can I physically remove the broken disk and keep using the working one? Right now I have to keep the broken disk physically connected to the server in order to keep using the working one.
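I assume the stale slot also has to be cleared from the array metadata so the array stops expecting the missing disk; roughly (an untested sketch, not something I have run):

sudo mdadm /dev/md3 --fail detached --remove detached  # drop any member that is no longer physically present
sudo mdadm --grow /dev/md3 --raid-devices=1 --force    # optional: reshape the mirror to a single device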

1 Answer


Make sure that GRUB is installed on the second drive and switch your BIOS settings to boot from the active disk.

You can install GRUB with:

sudo grub-install /dev/sdd  # install the boot loader onto the remaining disk
sudo update-grub2           # regenerate the GRUB configuration
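update-grub2 suggests a Debian/Ubuntu system; there it may also help to record the (now degraded) array in mdadm.conf and rebuild the initramfs so it assembles at boot. A sketch, assuming the stock Debian paths, and not part of the original answer:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf  # append the current array definition
sudo update-initramfs -u                                        # rebuild the initramfs with the new config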