PROBLEM: I had a degraded disk in my RAID 10 array, which I created with mdadm. I removed the failed disk and replaced it with a new one, but when I tried to rebuild, I got an error saying the file system of the disk could not be detected. With fdisk I saw that the disk label was dos (/dev/sdc) while the inner partition was GPT (/dev/sdc1); I had used /dev/sdc. The computer was still starting up fine, so I decided to remove the disk again and retry after wiping it and putting a GPT table on it. I did that, re-added it to the RAID array, and it worked: the array spent two days recovering onto the disk. After that, whenever I restart, I get the error below.
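For reference, the wipe-and-re-add sequence I ran was roughly the following (reconstructed from memory, so the exact invocations may have differed slightly):

# remove the disk that failed to rebuild
sudo mdadm /dev/md0 --remove /dev/sdc
# wipe old signatures and write a fresh GPT label (exact commands from memory)
sudo wipefs -a /dev/sdc
sudo parted -s /dev/sdc mklabel gpt
# add the whole disk back; the array then rebuilt for ~2 days
sudo mdadm /dev/md0 --add /dev/sdc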
I don't have a backup, so I'm hoping I can at least salvage some data from this. From what I've read, re-adding a disk twice can cause the metadata to corrupt the filesystem. I would like to salvage the data, but if that isn't possible, perhaps someone can suggest a better way to save it.
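In case it helps with diagnosis, I can also post the per-member md metadata. I'd collect it with something like this (read-only, it only reads the member superblocks):

sudo mdadm --examine /dev/sda /dev/sdb /dev/sdd
# and /dev/sdc too, once it is plugged back in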
Error on startup:
systemd-fsck[687]: fsck.ext4: Bad magic number in super-block while trying to open /dev/md0
systemd-fsck[687]: /dev/md0:
systemd-fsck[687]: The superblock could not be read or does not describe a valid ext2/ext3/ext4
systemd-fsck[687]: filesystem. If the device is valid and it really contains an ext2/ext3/ext4
systemd-fsck[687]: filesystem (and not swap or ufs or something else), then the superblock
systemd-fsck[687]: is corrupt, and you might try running e2fsck with an alternate superblock:
systemd-fsck[687]: e2fsck -b 8193 <device>
systemd-fsck[687]: or
systemd-fsck[687]: e2fsck -b 32768 <device>
systemd-fsck[687]: fsck failed with exit status 8.
Uh oh. Further down:
kernel: EXT4-fs (md0): VFS: Can't find ext4 filesystem
mount[693]: mount: /media/raid10: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
systemd[1]: media-raid10.mount: Mount process exited, code=exited status=32
systemd[1]: media-raid10.mount: Failed with result 'exit-code'
systemd[1]: Failed to mount /media/raid10.
I tried a few Stack Overflow solutions for this, but a lot of them seemed too destructive. Every attempt to mount has failed. I haven't tried forcing anything, but the mdadm output looks OK. In an effort to restore things I removed the disk that had been re-added, but Ubuntu still refuses to start up.
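For completeness, the mount attempts were all variants along these lines (no force options, nothing that writes to the array):

sudo mount /dev/md0 /media/raid10
sudo mount -t ext4 /dev/md0 /media/raid10
# read-only attempt
sudo mount -o ro /dev/md0 /media/raid10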
Output of mdadm -D /dev/md0:
/dev/md0:
Version : 1.2
Creation Time : Mon Mar 23 01:06:26
Raid Level : raid10
Array Size : 19532609536 (18627.75 GiB 20001.39 GB)
Used Dev Size : 9766304768 (9313.87 GiB 10000.70 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Sep 18 15:56:34 2020
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : bitmap
Name : ubuntu-server:0
UUID : d4c7dc04:6db4b430:66269a3b:44ee5c02
Events : 3501
Number Major Minor RaidDevice State
0 8 0 0 active sync set-A /dev/sda
1 8 16 1 active sync set-B /dev/sdb
2 8 48 2 active sync set-A /dev/sdd
- 0 0 3 removed
Output of cat /proc/mdstat:
Personalities: [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid10 sdd[2] sda[0] sdb[1]
19532609536 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
bitmap: 0/146 pages [0KB], 65536KB chunk
unused devices: <none>
EDIT: This is what I get when I run dumpe2fs -h -o superblock=8193 /dev/md0; dumpe2fs -h -o superblock=32768 /dev/md0
dumpe2fs 1.44.1 (24-Mar-2018)
dumpe2fs: Bad magic number in super-block while trying to open /dev/md0
Couldn't find valid filesystem superblock
dumpe2fs 1.44.1 (24-Mar-2018)
dumpe2fs: Bad magic number in super-block while trying to open /dev/md0
Couldn't find valid filesystem superblock
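Based on my reading, the next thing I'm considering (please tell me if this is a bad idea) is listing where the backup superblocks should be with a dry run of mke2fs, then probing those locations read-only:

# -n = dry run: only prints what it would do, including the backup
# superblock locations; it does not write anything to /dev/md0.
# The reported locations are only right if the original mkfs parameters
# (block size etc.) are matched; I'm assuming I used the defaults.
sudo mke2fs -n /dev/md0
# then probe a reported backup location read-only, e.g. (4K block size assumed):
sudo dumpe2fs -h -o superblock=32768 -o blocksize=4096 /dev/md0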