
I'm currently rebuilding a RAID6 MDADM array from 5 devices to 9.

cat /proc/mdstat:

Personalities : [raid6] [raid5] [raid4] 
md0 : active raid6 sde1[0] sdg1[9](F) sdh1[8](F) sdi1[6](F) sdj1[7](F) sdd1[4] sdc1[3] sdb1[5] sdf1[1]
      2926751232 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/5] [UUUUU____]
      [>....................]  reshape =  0.0% (112640/975583744) finish=142795.3min speed=113K/sec

unused devices: <none>

mdadm --detail /dev/md0:

/dev/md0:
        Version : 1.2
  Creation Time : Sun Apr  8 18:20:33 2012
     Raid Level : raid6
     Array Size : 2926751232 (2791.17 GiB 2996.99 GB)
  Used Dev Size : 975583744 (930.39 GiB 999.00 GB)
   Raid Devices : 9
  Total Devices : 9
    Persistence : Superblock is persistent

    Update Time : Tue Dec  3 08:34:44 2013
          State : active, FAILED, reshaping 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 4
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 0% complete
  Delta Devices : 4, (5->9)

           Name : ares:0  (local to host ares)
           UUID : 97b392d0:28dc5cc5:29ca9911:24cefb6b
         Events : 7020

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       81        1      active sync   /dev/sdf1
       5       8       17        2      active sync   /dev/sdb1
       3       8       33        3      active sync   /dev/sdc1
       4       8       49        4      active sync   /dev/sdd1
       9       8       97        5      faulty spare rebuilding   /dev/sdg1
       8       8      113        6      faulty spare rebuilding   /dev/sdh1
       6       8      129        7      faulty spare rebuilding   /dev/sdi1
       7       8      145        8      faulty spare rebuilding   /dev/sdj1

Right now it displays my new devices as faulty. Should I be worried that it's building an unusable array?

Steven Lu
  • Yeah, you should be. It is not a good thing. Are the drives you added the same as the ones in the current array? – Danila Ladner Dec 03 '13 at 13:54
  • The 4 faulty ones are a different brand of drive, but all drives are separate and are 1 TB each, if that's what you mean. – Steven Lu Dec 03 '13 at 14:23
  • Ok, what is the version of OS and mdadm? – Danila Ladner Dec 03 '13 at 14:35
    `mdadm - v3.2.5 - 18th May 2012` Debian 7 – Steven Lu Dec 03 '13 at 17:11
  • What model are the drives (try `smartctl -i /dev/sdg`)? Does `dmesg` show any disk-related errors? – Andrew Dec 04 '13 at 21:27
  • So I realized that my SATA card has some issues with the Linux kernel. `SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 10)`. Whenever the SMART command is sent to the drives during rebuild, the card will fail. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=700975 – Steven Lu Dec 05 '13 at 04:58

1 Answer


Here are the steps to follow to rebuild the RAID array. First, remove the partitions that are marked "faulty spare rebuilding" from the array, then use fdisk to recreate the RAID partition on each drive.
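The removal step might look like this (a sketch, assuming the same device names as in the question; mdadm only allows removal of devices that are already failed or spare):

```shell
# Remove the four devices currently marked faulty from the array.
mdadm --manage /dev/md0 --remove /dev/sdg1
mdadm --manage /dev/md0 --remove /dev/sdh1
mdadm --manage /dev/md0 --remove /dev/sdi1
mdadm --manage /dev/md0 --remove /dev/sdj1
```

These commands modify a live array, so double-check the device names against `mdadm --detail /dev/md0` first.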

Then execute the following commands:

mdadm --manage /dev/md0 --add /dev/sdg1
mdadm --manage /dev/md0 --add /dev/sdh1
mdadm --manage /dev/md0 --add /dev/sdi1
mdadm --manage /dev/md0 --add /dev/sdj1

Monitor the rebuild process with `cat /proc/mdstat`.
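To avoid eyeballing the status line each time, the progress and ETA fields can be pulled out with a short pipeline (a sketch; the sample line below is copied from the `/proc/mdstat` output in the question — in practice you would read the live file instead):

```shell
# Extract reshape progress and ETA from a /proc/mdstat status line.
line='      [>....................]  reshape =  0.0% (112640/975583744) finish=142795.3min speed=113K/sec'
pct=$(printf '%s\n' "$line" | grep -oE '[0-9.]+%' | head -n 1)
eta=$(printf '%s\n' "$line" | grep -oE 'finish=[0-9.]+min' | cut -d= -f2)
printf 'progress: %s  eta: %s\n' "$pct" "$eta"
```

For continuous monitoring, `watch cat /proc/mdstat` refreshes the same view every couple of seconds.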

Pavan