The RAID controller I am using is an AMCC 3ware 9690SA-8I SAS RAID controller. I just received this server used, so I'm going into this blind; I have no idea how long it has been in this state, nor what actions were performed on it in the past.
I got these results when booting from a System Rescue CD LiveCD (so I'm not actually running from, or even mounting, the volume). Two of the three RAID units are showing up as DEGRADED:
root@sysresccd /root % ./tw_cli /c2 show
Unit  UnitType  Status    %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK        -       -       -       298.013   OFF    OFF
u1    RAID-10   DEGRADED  -       -       64K     1862.62   OFF    OFF
u2    RAID-10   DEGRADED  -       -       16K     1862.62   OFF    ON
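In case more detail would help, the tw_cli manual suggests a per-unit query should break a unit down into its sub-units and their individual statuses; I believe the invocation is something like the following, and I can post its output if useful:

root@sysresccd /root % ./tw_cli /c2/u1 show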
From what I understand, this happens when one of the drives has failed and needs to be swapped out. What confuses me is that all the drives are showing up as OK, and none of the red failure lights on the drives is lit up:
VPort  Status  Unit  Size       Type  Phy  Encl-Slot  Model
------------------------------------------------------------------------------
p0     OK      u0    298.09 GB  SATA  0    -          Hitachi HDP725032GL
p1     OK      u1    931.51 GB  SATA  1    -          Hitachi HDS721010CL
p2     OK      u2    931.51 GB  SATA  2    -          ST31000340AS
p4     OK      u0    298.09 GB  SATA  4    -          Hitachi HDP725032GL
p5     OK      u1    931.51 GB  SATA  5    -          Hitachi HDS721010CL
p6     OK      u2    931.51 GB  SATA  6    -          ST31000340AS
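Likewise, if per-drive details would help diagnose this, the manual lists a port-level query that should dump each drive's model, serial, firmware, and status (the port p1 here is just an example; any port from the table above should work):

root@sysresccd /root % ./tw_cli /c2/p1 show all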
If it's not a failed drive, what does this DEGRADED status actually mean? What is causing it, and what steps can I take to fix it?
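For what it's worth, the tw_cli manual documents a rebuild command along the lines of the sketch below, but I'm hesitant to run anything like it until I understand why the units are degraded in the first place:

root@sysresccd /root % ./tw_cli /c2/u1 start rebuild disk=p3   # unit and port are placeholders, not values I've verified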