7

I had to replace two hard drives in my RAID1. After adding the two new partitions, the old ones still show up as removed, while the new ones are only added as spares. I've had no luck removing the partitions marked as removed.
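(For context, the new partitions were presumably added with something along these lines; the device names match the --detail output below, but the exact commands used are an assumption:)

# presumed add commands -- the exact invocation is an assumption
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md1 --add /dev/sdb2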

Here's the RAID in question. Note the two devices (0 and 1) with state removed.

$ mdadm --detail /dev/md1

mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
/dev/md1:
        Version : 00.90
  Creation Time : Thu May 20 12:32:25 2010
     Raid Level : raid1
     Array Size : 1454645504 (1387.26 GiB 1489.56 GB)
  Used Dev Size : 1454645504 (1387.26 GiB 1489.56 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Nov 12 21:30:39 2013
          State : clean, degraded
 Active Devices : 1
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 2

           UUID : 10d7d9be:a8a50b8e:788182fa:2238f1e4
         Events : 0.8717546

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       0        0        1      removed
       2       8       34        2      active sync   /dev/sdc2

       3       8       18        -      spare   /dev/sdb2
       4       8        2        -      spare   /dev/sda2

How do I get rid of these devices and add the new partitions as active RAID devices?

Update 1

I seem to have gotten rid of them. My RAID is resyncing, but the two drives are still marked as spares and are numbered 3 and 4, which looks wrong. I'll have to wait for the resync to finish.
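(Resync progress can be watched from /proc/mdstat while waiting, for example:)

# re-run cat /proc/mdstat every few seconds to follow the rebuild
watch -n 5 cat /proc/mdstat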

All I did was fix the metadata error by editing my mdadm.conf and rebooting. I had tried rebooting before, but this time it worked, for whatever reason.
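(The fix is the one Henrik mentions in a comment on his answer below: the ARRAY line in mdadm.conf carried metadata=00.90, which this mdadm version rejects. The exact line varies per setup, but the change is roughly:)

# before -- rejected with "metadata format 00.90 unknown, ignored"
ARRAY /dev/md1 metadata=00.90 UUID=10d7d9be:a8a50b8e:788182fa:2238f1e4
# after -- the 0.90 superblock format spelled the way mdadm expects
ARRAY /dev/md1 metadata=0.90 UUID=10d7d9be:a8a50b8e:788182fa:2238f1e4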

Number   Major   Minor   RaidDevice State
   3       8        2        0      spare rebuilding   /dev/sda2
   4       8       18        1      spare rebuilding   /dev/sdb2
   2       8       34        2      active sync   /dev/sdc2

Update 2

After resyncing, the problem is exactly the same as before: the two new partitions are listed as spares, while the old ones marked as removed are still there.

Is stopping and re-creating the array the only option for me?
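(For reference, stopping and re-assembling, which is less drastic than re-creating, would look roughly like this; either way, have verified backups first:)

# sketch only -- any filesystem on md1 must be unmounted first
mdadm --stop /dev/md1
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2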

Update 3

# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear] [multipath] 
md1 : active raid1 sdb2[3](S) sdc2[0] sda2[4](S)
      1454645504 blocks [3/1] [U__]

md0 : active raid1 sdc1[0] sdb1[2] sda1[1]
      10488384 blocks [3/3] [UUU]

unused devices: <none>
Kabuto
    Have you tried `mdadm /dev/md1 --remove failed` and `mdadm /dev/md1 --remove detached`? – Zoredache Nov 12 '13 at 20:54
  • Yes, I've tried that before. It doesn't do anything. I only get this output on both commands: mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. – Kabuto Nov 12 '13 at 21:04
  • Sounds to me like you are booted to a kernel that doesn't have the correct support for your old RAID type. Or the mdadm binary you are using doesn't support the old 0.90 format. Reboot to something that supports the 0.90 metadata format? – Zoredache Nov 12 '13 at 21:08
  • I got the metadata error fixed. See my post update. – Kabuto Nov 12 '13 at 21:18
  • Can you post the output of `cat /proc/mdstat`? – Halfgaar Jan 10 '15 at 18:14
  • @Halfgaar: I added the output at the end of my original post. – Kabuto Jan 11 '15 at 18:48
  • Perhaps redundant, but did you try `mdadm --fail` and then `mdadm --remove` on sdb2 and sda2? – Halfgaar Jan 11 '15 at 19:17
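(The fail-then-remove sequence Halfgaar suggests would be along these lines, using the spare partitions from the --detail output above:)

# mark each spare as failed, then hot-remove it (sketch of the suggestion above)
mdadm /dev/md1 --fail /dev/sda2
mdadm /dev/md1 --remove /dev/sda2
mdadm /dev/md1 --fail /dev/sdb2
mdadm /dev/md1 --remove /dev/sdb2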

3 Answers

9

In your specific case:

mdadm --grow /dev/md1 --raid-devices=3

For everyone else, set --raid-devices to the number of devices currently functioning in the array.
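(A quick way to check that count before growing is, for example:)

# the count comes straight from the array's own summary
mdadm --detail /dev/md1 | grep -E 'Raid Devices|Active Devices|Working Devices'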

James
  • Unfortunately this command doesn't do anything at all. I've tried that already. When I remove the two spares from the array, I still have the two devices with state removed and without a device name, and I can't address them with mdadm to remove them either. @James – Kabuto Feb 09 '15 at 21:04
  • Hmm, not sure what to say Kabuto. I just went through and validated it by creating a raid 1 array, failing and removing a drive, and growing the array down to 3 drives. http://pastebin.com/2J9zk8gN This is on a fully updated Ubuntu 14.04 system, so given the date your array was created, perhaps the md/kernel version you're using has a bug that has since been fixed? All I can do is speculate. – James Feb 10 '15 at 09:22
  • It might be a kernel/mdadm bug. I couldn't find anything about it though. I guess I'll have to take the server down and reassemble the whole array. Thanks for your help anyway! @James – Kabuto Feb 11 '15 at 13:18
  • For me, mdadm --grow /dev/md1 --force --raid-devices=1 did the trick. – StanTastic Dec 27 '21 at 22:18
0

For me, `mdadm /dev/md127 --remove failed` worked:

# cat /proc/mdstat
md127 : active raid1 sdd3[3] sdc3[0](F) nvme0n1p8[2]
      245732672 blocks super 1.2 [2/2] [UU]
      bitmap: 2/2 pages [8KB], 65536KB chunk
#

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Sat Mar  4 19:33:57 2017
        Raid Level : raid1
        Array Size : 245732672 (234.35 GiB 251.63 GB)
     Used Dev Size : 245732672 (234.35 GiB 251.63 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri May 26 22:32:53 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 4a5400f0:b5fb63b5:c7561cef:0604ef91
            Events : 15727631

    Number   Major   Minor   RaidDevice State
       3       8       51        0      active sync   /dev/sdd3
       2     259        8        1      active sync   /dev/nvme0n1p8

       0       8       35        -      faulty   missing
#
# mdadm /dev/md127 --remove failed
mdadm: hot removed 8:35 from /dev/md127
md127 : active raid1 sdd3[3] nvme0n1p8[2]
      245732672 blocks super 1.2 [2/2] [UU]
      bitmap: 2/2 pages [8KB], 65536KB chunk
#
Martin
0

I think this should do the job:

mdadm /dev/md1 -r detached
Henrik
  • Unfortunately it doesn't do anything. I've tried it before, and all I get is this output: mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. – Kabuto Nov 12 '13 at 21:03
  • Did you try to stop and reassemble the raid? – Henrik Nov 12 '13 at 21:12
  • Not yet. I did see this as a last resort. – Kabuto Nov 12 '13 at 21:15
  • At least you should have sufficient backups before doing this (as usual) - but it's worth a try - btw. you can fix this metadata error by changing metadata=00.90 to metadata=0.90 in mdadm.conf – Henrik Nov 12 '13 at 21:18