
The RAID controller I am using is an AMCC 3ware 9690SA-8I SAS RAID Controller. I have just received this server unit used and am going into this blind; I have no idea how long it has been this way, nor what actions or steps were performed on it in the past.

I got these results when booting from a System Rescue CD LiveCD (so I'm not actually running from, or even mounting, the volume). Two of the three RAID units are showing up as DEGRADED:

root@sysresccd /root % ./tw_cli /c2 show

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK             -       -       -       298.013   OFF    OFF    
u1    RAID-10   DEGRADED       -       -       64K     1862.62   OFF    OFF    
u2    RAID-10   DEGRADED       -       -       16K     1862.62   OFF    ON     
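
If I'm reading the tw_cli documentation correctly, each unit can also be queried on its own to see which ports (subunits) it contains and which of them are missing or degraded; I haven't included that output here:

./tw_cli /c2/u1 show    # list the subunits/ports that make up unit u1
./tw_cli /c2/u2 show    # same for unit u2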

From what I understand, this happens when one of the drives has failed and needs to be swapped out. What confuses me is that all of the drives are showing up as OK, and none of the red lights on the drives are lit.

VPort Status         Unit Size      Type  Phy Encl-Slot    Model
------------------------------------------------------------------------------
p0    OK             u0   298.09 GB SATA  0   -            Hitachi HDP725032GL 
p1    OK             u1   931.51 GB SATA  1   -            Hitachi HDS721010CL 
p2    OK             u2   931.51 GB SATA  2   -            ST31000340AS        
p4    OK             u0   298.09 GB SATA  4   -            Hitachi HDP725032GL 
p5    OK             u1   931.51 GB SATA  5   -            Hitachi HDS721010CL 
p6    OK             u2   931.51 GB SATA  6   -            ST31000340AS        
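
Similarly, as far as I can tell from the tw_cli manual, individual drive details (including SMART data) can be pulled per port, for example for port p1:

./tw_cli /c2/p1 show all      # full details for the drive on port p1
./tw_cli /c2/p1 show smart    # raw SMART data for that drive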

If it's not a failed drive, what is meant by this DEGRADED status? What is causing it, and what steps can I take to fix it?

IQAndreas
  • Just to be clear, bays `p3` and `p7` are empty (that's why they aren't showing up in the list). More details about the hardware can be found at [_GitHub: IQAndreas/computers: Dramatic Dingo_](https://github.com/IQAndreas/computers/tree/master/dramatic-dingo). – IQAndreas Sep 03 '15 at 22:40
  • It is showing 2 x RAID 10; that would require a minimum of 8 drives, but you only have 6, and two of those are in the RAID 1. Either it was configured like this (unlikely), or, more likely, drives failed and someone "repaired" the situation by just building another array. – Drifter104 Sep 03 '15 at 22:41
  • @Drifter104 Doh! So obvious, why didn't I see that? You are welcome to expand upon it a bit and add it as an answer. – IQAndreas Sep 03 '15 at 23:58
  • @Drifter104 I don't quite understand what you mean by _"more than likely drives have failed and someone has repaired the situation by just building another array"_. There are only 8 drive bays on the server, so you can't fit any more drives, and as you said, two of them are reserved for the RAID-1 "boot drive". I could see someone configuring RAID-10 on the four 1TB drives (even though they are two different models), but I don't understand how someone could have chosen the current settings based on the drives available. – IQAndreas Sep 04 '15 at 00:09
  • It would work as 1 x RAID 10 with those 4 HDDs, not 2 x RAID 10, but you would lose 2TB. Someone may have thought they could gain 2TB by running it like a RAID 0, but degraded; a bad thing, a really bad thing. – yagmoth555 Sep 04 '15 at 00:17
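
**Update:** Based on the comments from Drifter104 and yagmoth555, each degraded RAID-10 unit apparently only has two of its four members present. A rough sketch of what I think the fix would look like, assuming I have the tw_cli syntax right, would be to either add two more 1TB drives in the empty bays `p3` and `p7` and rebuild each unit, or back up everything on `u1` and `u2` and recreate them as a single four-drive RAID-10 (this second option destroys the data on both units):

./tw_cli /c2/u1 del                            # delete the first degraded unit (destroys its data)
./tw_cli /c2/u2 del                            # delete the second degraded unit (destroys its data)
./tw_cli /c2 add type=raid10 disk=p1:p2:p5:p6  # one RAID-10 across the four 1TB drives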

0 Answers