
A power plug got pulled from 2 of the 4 disks of my RAID 5 array. Since then the array cannot start, even with --run --force, --readwrite, etc. I caught it quickly, so the data should not be (too badly) corrupted.

Here are the details:

/dev/md2:
           Version : 1.2
     Creation Time : Mon Oct 28 14:46:16 2019
        Raid Level : raid5
     Used Dev Size : 243138560 (231.88 GiB 248.97 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 20:37:49 2020
             State : active, FAILED, Not Started
    Active Devices : 0
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 4

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : unknown

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed

       -       8       50        3      spare rebuilding   /dev/sdd2
       -       8       34        1      spare rebuilding   /dev/sdc2
       -       8       18        2      spare rebuilding   /dev/sdb2
       -       8        2        0      spare rebuilding   /dev/sda2

Can I now simply re-create the array with sudo mdadm --create /dev/md2 --level=5 --raid-devices=4 --chunk=512 /dev/sdd2 /dev/sdb2 /dev/sda2 /dev/sdc2? I should pay attention to the order of the drives, right?
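
For reference, each member's saved position can be read back from the superblocks before doing anything destructive. A minimal check, assuming only the standard mdadm and grep tools:

    # Print each member's saved position in the array (Device Role)
    sudo mdadm --examine /dev/sd[abcd]2 | grep -E '^/dev/|Device Role'

In the output below, the roles come out as sda2=0, sdc2=1, sdb2=2, sdd2=3.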

As requested, here is the output of --examine:

/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x8b
     Array UUID : 0956b1a2:4ed1052a:016155a1:940db446
           Name : raspberrypi3:2
  Creation Time : Mon Oct 28 14:46:16 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 486277120 (231.88 GiB 248.97 GB)
     Array Size : 729415680 (695.63 GiB 746.92 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 36416 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : da9f05d3:64d47a14:78eedb5e:dd69151c

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Apr  9 20:37:49 2020
  Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
       Checksum : ccf72dc4 - correct
         Events : 8802

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x8b
     Array UUID : 0956b1a2:4ed1052a:016155a1:940db446
           Name : raspberrypi3:2
  Creation Time : Mon Oct 28 14:46:16 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 486277120 (231.88 GiB 248.97 GB)
     Array Size : 729415680 (695.63 GiB 746.92 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 36416 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : active
    Device UUID : e709a92a:c10f1949:c9868538:f5cc2acc

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Apr  9 20:37:49 2020
  Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
       Checksum : e2f7ce99 - correct
         Events : 8802

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x8b
     Array UUID : 0956b1a2:4ed1052a:016155a1:940db446
           Name : raspberrypi3:2
  Creation Time : Mon Oct 28 14:46:16 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 486277120 (231.88 GiB 248.97 GB)
     Array Size : 729415680 (695.63 GiB 746.92 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 36416 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : active
    Device UUID : e1c4f361:af07c81c:1e6b8f72:962869a5

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Apr  9 20:37:49 2020
  Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
       Checksum : 139c176 - correct
          Events : 8801

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x8b
     Array UUID : 0956b1a2:4ed1052a:016155a1:940db446
           Name : raspberrypi3:2
  Creation Time : Mon Oct 28 14:46:16 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 486277120 (231.88 GiB 248.97 GB)
     Array Size : 729415680 (695.63 GiB 746.92 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 36416 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : active
    Device UUID : ce27edb8:365853c7:e0434b32:5ae01e33

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Apr  9 20:37:49 2020
  Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
       Checksum : 50300573 - correct
         Events : 8801

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

  • If that create command is wrong in the slightest way, you'll lose all the data permanently. Can you post the output of `mdadm -Evv /dev/sd[bcde]2`? That shows the RAID superblock in detail, and should help figure out what's the right way to proceed. – Mike Andrews Apr 10 '20 at 12:34
  • Honestly, that looks fairly OK. None of the superblocks mark any of the drives failed. Those last two drives are each one event behind... maybe that's what is throwing it off. If you don't do `--run`, and just do `mdadm --assemble /dev/md2 /dev/sd[bcde]2`, what error do you get? – Mike Andrews Apr 10 '20 at 15:08
  • Same state... :( I'm copying the disk contents so I can try a --create on a copy of the array. – MappaM Apr 10 '20 at 17:30
  • I'm not getting anywhere with --create either. If anyone has a clue... pvscan does not find my newly re-created /dev/md2. I tried multiple orders without success... – MappaM Apr 12 '20 at 21:42

1 Answer


After one week of various attempts, the answer turned out to be the bad block list. Before the hard failure, the RAID array had started recording every access to the underlying disks as bad blocks. So even after assembling the array correctly by forcing the members back into their slots with "echo [X,Y,Z,W] | sudo tee /sys/block/md2/md/dev-sd[a,b,c,d]2/slot", any access to /dev/mdX resulted in an I/O error.
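
For concreteness, expanding the [X,Y,Z,W] placeholder with the Device Role values from the --examine output above, the slot assignments would presumably look like this (a sketch of the technique, not a transcript of the exact commands):

    # Write each member's saved role back into its md slot (roles from --examine)
    echo 0 | sudo tee /sys/block/md2/md/dev-sda2/slot
    echo 1 | sudo tee /sys/block/md2/md/dev-sdc2/slot
    echo 2 | sudo tee /sys/block/md2/md/dev-sdb2/slot
    echo 3 | sudo tee /sys/block/md2/md/dev-sdd2/slot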

So I forced the array to start while ignoring the bad block list with "sudo mdadm --assemble --update=force-no-bbl /dev/md2 /dev/sda2 /dev/sdd2 /dev/sdc2 /dev/sdb2". What is crazy is that there was no clue as to the cause: the only symptom was "Buffer I/O error on dev md2, logical block 0, async page read" in the kernel log, and that leads to nothing on the web...
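
To check whether the bad block log is the culprit before resorting to force-no-bbl, each member's recorded bad blocks can be inspected through sysfs. A sketch, assuming the bad_blocks files documented for md (non-empty output means entries are present; mdadm --examine also hints at this with "bad blocks present", as seen above):

    # Dump each member's recorded bad block list; empty output means none
    for d in sda2 sdb2 sdc2 sdd2; do
        echo "== $d =="
        cat /sys/block/md2/md/dev-$d/bad_blocks
    done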
