
I am using Linux mdadm to recover from a failed disk / partition. Here are the titles of the steps I took so far - I can post a detailed copy if it helps. (I am not sure how to post a file link here...)

The last response to "run" is:

nov25-1@nov251-desktop:~$ sudo mdadm --run /dev/md0
mdadm: failed to start array /dev/md/0: Input/output error
nov25-1@nov251-desktop:~$ 

What would be my next step to "activate" the array?

nov25-1@nov251-desktop:~$ cat /proc/mdstat
nov25-1@nov251-desktop:~$ cat /proc/mdstat
nov25-1@nov251-desktop:~$ sudo mdadm --assemble /dev/md0
nov25-1@nov251-desktop:~$ sudo mdadm --assemble --force /dev/md0
nov25-1@nov251-desktop:~$ sudo mdadm --detail /dev/md0
nov25-1@nov251-desktop:~$ sudo mdadm --run /dev/md0
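
Before forcing anything, it is worth checking what state each member of the array is in. A minimal diagnostic sketch (the /dev/sdb4 and /dev/sdb17 names are assumptions taken from the dmesg excerpt quoted in the comments below):

# Arrays the kernel currently knows about and their state
cat /proc/mdstat

# Block devices and RAID/filesystem signatures currently visible
lsblk -f
cat /proc/partitions

# RAID superblock on each suspected member (event counts, array state)
sudo mdadm --examine /dev/sdb4
sudo mdadm --examine /dev/sdb17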
  • Have you checked the status of the members of the array? What is the output of `dmesg` after the `mdadm --run` command? It could be that two of your devices are broken, which means all your data can be lost. – Tero Kilkanen Dec 05 '22 at 20:07
  • Please read last comment - on run. Here it is again nov25-1@nov251-desktop:~$ sudo mdadm --run /dev/md0 mdadm: failed to start array /dev/md/0: Input/output error nov25-1@nov251-desktop:~$ – Jan Hus Dec 06 '22 at 00:53
  • Here is a part of dmesg [ 3193.003771] md/raid:md0: not clean -- starting background reconstruction [ 3193.003843] md/raid:md0: device sdb17 operational as raid disk 1 [ 3193.003849] md/raid:md0: device sdb4 operational as raid disk 0 [ 3193.006900] md/raid:md0: cannot start dirty degraded array. [ 3193.007721] md/raid:md0: failed to run raid set. [ 3193.007733] md: pers->run() failed ... nov25-1@nov251-desktop:~$ nov25-1@nov251-desktop:~$ sudo mdadm --run /dev/md0 mdadm: failed to start array /dev/md/0: Input/output erro – Jan Hus Dec 06 '22 at 00:54
  • It says "starting background reconstruction" - what does that mean? – Jan Hus Dec 06 '22 at 00:55
    How many devices are supposed to be in your RAID array? I see only two in your dmesg output (`sdb17`, and `sdb4` .. both of which seem to be on the same physical device - which is probably a Very Bad Idea). If your array consisted of three devices - it should start up in a degraded (at risk) state. If it consisted of more than three - then it won't be able to start up until the missing devices are made available again. Review `/proc/partitions` (or `lsblk`, or `blkid`) to find out which devices are currently "visible" – JMusgrove Dec 06 '22 at 09:53
  • Please edit the original question and add additional information there with proper formatting. The information is very hard to read when it is in comments. – Tero Kilkanen Dec 06 '22 at 17:04
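
The dmesg excerpt quoted in the comments above is the key clue: "cannot start dirty degraded array" means the kernel refuses to auto-start an array that is both degraded (members missing) and dirty (not shut down cleanly), which is why --run returns an I/O error. The usual way forward is a forced assembly with the surviving members named explicitly. A hedged sketch, assuming the surviving members really are /dev/sdb4 and /dev/sdb17 and that mdadm --examine shows their event counts are close:

# Stop the half-assembled, inactive array first
sudo mdadm --stop /dev/md0

# Force assembly from the surviving members and start it even though
# it is degraded (device names are assumptions from the dmesg output)
sudo mdadm --assemble --force --run /dev/md0 /dev/sdb4 /dev/sdb17

# Verify the result
cat /proc/mdstat
sudo mdadm --detail /dev/md0

Forcing assembly on a failing disk can make things worse, so only do this after the health of the underlying drives has been checked.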

1 Answer


Wouldn't the next step be to mount it?

I think I've used this guide before; maybe it will help you out:

https://kb.synology.com/en-uk/DSM/tutorial/How_can_I_recover_data_from_my_DiskStation_using_a_PC
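
That guide essentially assembles the array read-only on a rescue PC and mounts it read-only. A rough sketch of that approach, assuming an LVM-on-mdadm layout like Synology uses (the volume group, logical volume and mount point names below are placeholders, not taken from the question):

# Assemble whatever arrays can be found, read-only, so the members are not written to
sudo mdadm --assemble --readonly --scan

# If LVM sits on top of the array, activate the volume group(s)
sudo vgscan
sudo vgchange -ay

# Mount the logical volume (or the md device directly) read-only
sudo mkdir -p /mnt/recovery
sudo mount -o ro /dev/vg1000/lv /mnt/recovery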

You should definitely consider whether any disk has failed, because trying to recover with a faulty drive still in place may be worse than removing it. I'd at least check the SMART status of each disk. The read-only nature of the setup in the article means that if you get it wrong it won't damage your array; you can just start over with more or different disks.
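
For the SMART check, smartmontools is the usual tool. A minimal sketch (the device names are examples; substitute the actual disks behind the array):

# Install smartmontools if it is not already present (Debian/Ubuntu)
sudo apt install smartmontools

# Quick overall health verdict per disk
sudo smartctl -H /dev/sdb
sudo smartctl -H /dev/sde

# Full attribute dump, including reallocated and pending sector counts
sudo smartctl -a /dev/sdb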

  • I have installed lvm2 and run sudo cat /proc/mdstat lvs getting this nov25-1@nov251-desktop:~$ sudo cat /proc/mdstat lvs Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 26 : inactive sde24[0](S) sde25[1](S) 204666880 blocks super 1.2 md0 : inactive sdb17[1] sdb4[4] 409335808 blocks super 1.2 unused devices: cat: lvs: No such file or directory nov25-1@nov251-desktop:~$ I had to delete some mdx output to fit here – Jan Hus Dec 06 '22 at 01:10
  • Using lvs gives no output: nov25-1@nov251-desktop:~$ sudo lvs [sudo] password for nov25-1: nov25-1@nov251-desktop:~$ – Jan Hus Dec 06 '22 at 01:13
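
A note on the commands in the comments above: sudo cat /proc/mdstat lvs asks cat to read a file literally named lvs, which is why it ends with "cat: lvs: No such file or directory"; the two commands have to be run separately. And sudo lvs printing nothing usually means no volume group is active yet (or there is no LVM at all); the array has to be assembled and running before LVM can see anything on it.

# RAID status (plain file read)
cat /proc/mdstat

# Activate any volume groups, then list logical volumes
sudo vgscan
sudo vgchange -ay
sudo lvs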