Last night I received an e-mail from mdadm warning about the possible failure of two drives in my array. The array was set up as a four-drive RAID 5 of 2 TB disks with one hot spare. Is this system truly fried? Did the hot spare pick up anything at all, or did the two drives fail at once? Or did one drive fail, start to rebuild onto the spare, and then cause another drive failure?

I'm fairly new to working with RAID, and this system is one I inherited from a previous employee, so I'm unsure what the proper troubleshooting steps are here. Any help would be much appreciated.
Output of cat /proc/mdstat:
sudo cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdc[4](F) sdd[5](F) sda[6](S) sdb[0] sde[3]
      5860543488 blocks level 5, 64k chunk, algorithm 2 [4/2] [U__U]
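In case it helps, this is what I was planning to run next (I haven't touched the array yet): dumping each drive's own superblock, since as far as I understand the per-drive event counts and update times should show whether sdc and sdd were kicked out at the same moment or one after the other. The device names are just taken from the mdstat output above, so treat this as a sketch rather than something I've verified:

# Dump the md superblock from every member disk and pull out the
# fields that show each drive's last recorded view of the array.
sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde | \
    grep -E '^/dev/|Update Time|Events|State'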
Output of mdadm --detail:
sudo mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Mon Jun 21 13:54:13 2010
Raid Level : raid5
Array Size : 5860543488 (5589.05 GiB 6001.20 GB)
Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Apr 29 10:52:27 2013
State : clean, FAILED
Active Devices : 2
Working Devices : 3
Failed Devices : 2
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 2874db80:a0f02d66:999df3c7:ff8f8e6e (local to host bigkahuna)
Events : 0.10984
    Number   Major   Minor   RaidDevice   State
       0       8       16        0        active sync   /dev/sdb
       1       0        0        1        removed
       2       0        0        2        removed
       3       8       64        3        active sync   /dev/sde

       4       8       32        -        faulty spare   /dev/sdc
       5       8       48        -        faulty spare   /dev/sdd
       6       8        0        -        spare          /dev/sda
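For what it's worth, I haven't yet ruled out a cabling or controller glitch as opposed to two genuinely dead disks. If SMART data or the kernel log would help, I can install smartmontools and run something along these lines (not run yet, so this is just what I have in mind):

# Quick SMART health verdict on the two drives md marked as faulty;
# a full 'smartctl -a' report would also show reallocated/pending sectors.
sudo smartctl -H /dev/sdc
sudo smartctl -H /dev/sdd

# See when and in what order the kernel kicked each drive out of md0.
dmesg | grep -iE 'sdc|sdd|md0'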