I had a disk failure on my CentOS Linux software RAID 5 array (mdadm). I replaced one of the disks and started rebuilding the array. The next time I checked the status, the rebuild had failed.
This is the status right now:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sdc1[3](S) sdd1[2] sdb1[0]
      4883277760 blocks

unused devices: <none>
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Aug 23 22:37:36 2010
     Raid Level : raid5
  Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Jan  1 23:30:32 2002
          State : active, degraded, Not Started
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 6af06755:6fc93cba:c083764e:1e719c94
         Events : 0.27470

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       8       49        2      active sync   /dev/sdd1
       3       8       33        -      spare   /dev/sdc1
/dev/sdc is the brand-new drive. If I remove it and add it again, it still stays a spare. How can I get the rebuild to start?
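For reference, this is roughly what I ran to remove and re-add the new drive (from memory, so the exact invocation may have differed slightly):

[root@localhost ~]# mdadm /dev/md0 --remove /dev/sdc1
[root@localhost ~]# mdadm /dev/md0 --add /dev/sdc1

After the --add, /dev/sdc1 shows up as a spare again, as in the --detail output above, and no resync ever starts in /proc/mdstat.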