Just to get some experience with mdadm, I've put some HDDs together to play around a bit. I have two 250 GB drives and one 500 GB HDD. I know that's not optimal for RAID5 and I'll only get 500 GB of capacity in total; 250 GB of the 500 GB HDD is wasted. But as I said, I'm just playing around.

First, let's see the disk sizes:

lsblk /dev/sdb /dev/sdc /dev/sdd
NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb    8:16   0 232.9G  0 disk
sdc    8:32   0 465.8G  0 disk
sdd    8:48   0 232.9G  0 disk

Create the RAID5:

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd -c 4 --auto md

Show info for the created RAID5:

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[3] sdc[1] sdb[0]
      488132976 blocks super 1.2 level 5, 4k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.5% (1354384/244066488) finish=59.7min speed=67719K/sec
      bitmap: 0/2 pages [0KB], 65536KB chunk

unused devices: <none>
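
The array is usable right away while the recovery runs in the background. If I want to block until it has finished (for example before running any benchmarks), something like this should work, using mdadm's --wait option:

sudo mdadm --wait /dev/md0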

Show a bit more details:

 sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb 26 14:52:54 2020
        Raid Level : raid5
        Array Size : 488132976 (465.52 GiB 499.85 GB)
     Used Dev Size : 244066488 (232.76 GiB 249.92 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 26 14:57:43 2020
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 4K

Consistency Policy : bitmap

    Rebuild Status : 7% complete

              Name : raspberrypi:0  (local to host raspberrypi)
              UUID : 3291b54e:fad8f43b:cc398574:a1845ff9
            Events : 57

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      spare rebuilding   /dev/sdd

dmesg shows:

[ 2462.122882] md/raid:md0: device sdc operational as raid disk 1
[ 2462.122892] md/raid:md0: device sdb operational as raid disk 0
[ 2462.126278] md/raid:md0: raid level 5 active with 2 out of 3 devices, algorithm 2
[ 2462.142439] md0: detected capacity change from 0 to 499848167424
[ 2462.222689] md: recovery of RAID array md0

So what am I doing wrong with the creation of the RAID5? I'm also confused by the output of `mdadm --detail /dev/md0` showing device numbers 0, 1 and 3 instead of 0, 1, 2.

Hannes

1 Answer

This is correct behavior for a newly created RAID5 array: just after creation, it needs to compute the correct parity for each stripe.
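
You can follow that initial sync through the md sysfs interface as well as through /proc/mdstat; for example (paths assume your array is /dev/md0, as in the question):

cat /sys/block/md0/md/sync_action     # shows "recover" or "resync" while the array is building
cat /sys/block/md0/md/sync_completed  # sectors done / total sectors
watch -n 5 cat /proc/mdstat           # live progress and estimated finish time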

You can append --assume-clean to your mdadm command to skip the initial sync, but I strongly advise against it: if your parity does not match, any check will report thousands of errors, and you will be unable to distinguish real errors from "fake" ones. To fix this very ambiguous situation, you need to run a repair command, which recomputes parity just as the initial array creation does.
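
For reference, here is a minimal sketch of how a scrub and a repair can be triggered via the md sysfs interface (again assuming the array is /dev/md0); many distributions already schedule a periodic check for you:

echo check  | sudo tee /sys/block/md0/md/sync_action   # read-only scrub, counts mismatches
cat /sys/block/md0/md/mismatch_cnt                      # non-zero means parity did not match
echo repair | sudo tee /sys/block/md0/md/sync_action    # rewrite parity to fix mismatches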

shodanshok
  • During this initial rebuild, iotop shows 0 B/s of I/O activity most of the time, same for htop: `Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s Current DISK READ: 0.00 B/s | Current DISK WRITE: 0.00 B/s` However, there is a lot of disk activity going on. Why isn't it shown? – Hannes Feb 27 '20 at 15:14
  • RAID resync happens at kernel level, and `iotop` sometimes seems to miss kernel I/O. Can you try `dstat -d -f` or `iostat -x -k 1`? – shodanshok Feb 27 '20 at 15:23
  • Thanks `iostat -x -k 1` works. `dstat -d -f` prints terminal too small (on a FullHD monitor). – Hannes Feb 27 '20 at 15:30
  • Try with `dstat -d` then – shodanshok Feb 27 '20 at 15:50
  • Thank you. That works. – Hannes Feb 28 '20 at 07:34