
I just built my first RAID5 yesterday (with 4 HDDs) and was reading about monitoring it with /proc/mdstat. My understanding is that an ideal display with 4 drives should read [UUUU], but mine reads [UUU_]. See below:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sde[4] sdc[1] sda[0] sdd[2]
      11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 11.3% (442037248/3906887168) finish=7793.7min speed=7408K/sec

unused devices: <none>

Is this normal since my RAID is still syncing? I can run fdisk on each drive and see the expected size, so I don't believe I have any DOA drives. Thanks
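
For the record, the fdisk check I mean is just something like the following (one member drive shown; the same works for each of the others):

    # Confirm the drive is visible and reports the expected capacity
    fdisk -l /dev/sda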

Justin

1 Answer


Yes, that is normal for a newly created RAID5 array. From the mdadm man page:

When creating a RAID5 array, mdadm will automatically create a degraded array with an extra spare drive. This is because building the spare into a degraded array is in general faster than resyncing the parity on a non-degraded, but not clean, array. This feature can be overridden with the --force option.
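
For illustration only (do not run this against an array that already holds data), the --force option mentioned above goes on the create command itself. A sketch, using hypothetical member devices /dev/sda through /dev/sdd and an arbitrary chunk size; substitute your own devices:

    # Create the array non-degraded up front and let it resync parity instead
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=512 \
          --force /dev/sda /dev/sdb /dev/sdc /dev/sdd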

So when you ran your command to create the array, it was created in the degraded state, and mdadm is now 'recovering' by building the fourth drive (the extra spare) into the array.
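
Once that recovery finishes, the status line should read [4/4] [UUUU]. You can keep an eye on it with something along these lines (assuming the array is /dev/md0, as in your output):

    # Re-read /proc/mdstat every 60 seconds until recovery completes
    watch -n 60 cat /proc/mdstat

    # Or ask mdadm for the array and per-device state directly
    mdadm --detail /dev/md0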

Zoredache