
I have a server where two disks were in RAID-1. Disk 1 went faulty and, for various reasons, I had to reboot the server; below is the current status after that. Now I am unable to remove the failed disk from the MD array, and when I plug a new disk into the server it won't boot at all, not even from the old disk. I need help cleaning the removed disk out of the MD arrays.

root@compute1:/dev# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Aug 19 21:36:30 2021
        Raid Level : raid1
        Array Size : 409280 (399.75 MiB 419.10 MB)
     Used Dev Size : 409280 (399.75 MiB 419.10 MB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Wed Jun 22 02:58:21 2022
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

              Name : bootstrap:0
              UUID : c6e17655:3fbf8f0b:5e4d3285:69fcd05e
            Events : 128

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        3        1      active sync   /dev/sda3

root@compute1:/dev# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md5 : active raid1 sda8[1]
      790043648 blocks super 1.2 [2/1] [_U]
      bitmap: 4/6 pages [16KB], 65536KB chunk

md2 : active raid1 sda5[1]
      52396032 blocks super 1.2 [2/1] [_U]

md4 : active raid1 sda7[1]
      10477568 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sda4[1]
      52396032 blocks super 1.2 [2/1] [_U]

md3 : active raid1 sda6[1]
      31440896 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sda3[1]
      409280 blocks super 1.2 [2/1] [_U]

unused devices: <none>

I have tried `mdadm /dev/md1 --remove failed` and `mdadm /dev/md1 --remove detached`, but neither of them worked.
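For reference, this is the sequence I understand mdadm expects when kicking a failed member out of an array; `/dev/sdb3` below is only an example name for the failed disk's partition, since in my case the slot already just says "removed":

# mark the member as faulty first, if it is still listed as active
mdadm /dev/md0 --fail /dev/sdb3
# then remove it; the keywords "failed" and "detached" (instead of a device
# name) remove all faulty / physically absent members in one go
mdadm /dev/md0 --remove /dev/sdb3
mdadm /dev/md0 --remove detached

Since my arrays already show the slot as "removed", these commands seem to have nothing left to act on.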
  • Does this answer your question? [How to delete removed devices from a mdadm RAID1?](https://serverfault.com/questions/554553/how-to-delete-removed-devices-from-a-mdadm-raid1) – djdomi Jun 22 '22 at 16:44
  • I have tried `mdadm /dev/md1 --remove failed` and `mdadm /dev/md1 --remove detached` as mentioned above, but they didn't do anything. Can I do the following in my scenario? Will it remove the "removed" entry? `mdadm --grow /dev/md1 --force --raid-devices=1` – Prince220888 Jun 22 '22 at 17:30
  • Your command outputs show that all your arrays already have that disk removed (notice "removed" in the `mdadm --detail` output). Now you may e.g. physically replace the failed disk, partition it and add it to the arrays, and they will begin replicating to restore redundancy. – Nikita Kipriyanov Jun 24 '22 at 10:28
  • @djdomi actually this question is **not** a duplicate of that one. That one is about actually removing a disk from the array, while this one is about what to do next, e.g. physically replacing the disk, booting from the one that is left and restoring redundancy. It is not answered in any way in the answers to the referenced question. – Nikita Kipriyanov Jun 24 '22 at 11:14
  • @Prince220888 what problems did you have booting from the disk that is left? Are you sure both disks were made bootable (this is *not* automatic)? Are you sure you are removing the correct (failed) disk? Do you have the correct boot order? – Nikita Kipriyanov Jun 24 '22 at 11:16
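Based on Nikita Kipriyanov's comments above, this is a rough sketch of the replacement procedure I plan to try. The new disk appearing as `/dev/sdb`, a GPT partition table and GRUB as the BIOS bootloader are all assumptions about this setup, not something shown in the outputs above:

# copy the partition layout of the surviving disk (/dev/sda) onto the new disk (/dev/sdb)
# (for an MBR disk, "sfdisk -d /dev/sda | sfdisk /dev/sdb" would do the same)
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb            # give the copied table new random GUIDs

# add the matching partitions back into each degraded array
mdadm /dev/md0 --add /dev/sdb3
mdadm /dev/md1 --add /dev/sdb4
mdadm /dev/md2 --add /dev/sdb5
mdadm /dev/md3 --add /dev/sdb6
mdadm /dev/md4 --add /dev/sdb7
mdadm /dev/md5 --add /dev/sdb8

# watch the resync progress
cat /proc/mdstat

# make the new disk bootable as well -- as noted above, this is not automatic
grub-install /dev/sdb

Once the resync finishes, `mdadm --detail` should show both slots as "active sync" again.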

0 Answers