
I lean toward software RAID instead of hardware RAID because of recovery concerns: I've read about disasters when a RAID card fails, and I've heard many success stories of restoring a Linux software RAID array on a different controller. My server is an IBM x3550 with a ServeRAID M5110 card, intended for use as a KVM virtualization host. I set all eight disks in the ServeRAID setup to JBOD, went into the CentOS 7 installer, and created my RAID10 without a problem, but then I see these POST events: one 'GPT Corruption' entry for each drive, plus a ninth entry:

DRIVER HEALTH PROTOCOL: Reports 'Failed' Status Controller

Am I going about this the correct way? Any guidance or recommendations are appreciated. I should also note that my CentOS 7 install is working fine without issue; I'm just seeing these POST events.
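
For reference, the array I built in the installer is roughly equivalent to the following mdadm sketch (the device names /dev/sd[b-i] are placeholders for the eight JBOD disks; yours may differ):

    # Create an eight-disk RAID10 array from the JBOD drives
    # (/dev/sd[b-i] are placeholder device names)
    mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]

    # Watch the initial resync and confirm the array state
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Record the array so it assembles automatically at boot
    mdadm --detail --scan >> /etc/mdadm.conf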

Edit: I failed to mention that the disks are SSDs, and I only get the POST events when using JBOD; the card's own RAID works fine.
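
For what it's worth, the GPT state of the member disks can be checked from the running system; a minimal sketch using gdisk and sgdisk (/dev/sdb is a placeholder, repeat for each drive):

    # Print the partition table and report GPT validity for one disk
    gdisk -l /dev/sdb

    # Alternatively, run a consistency check on the GPT structures
    sgdisk --verify /dev/sdb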

rwfitzy
  • Your question title is irrelevant to your actual question. People who might be interested in the question or know the answer are unlikely to find it with such a title. Consider editing it to reflect your actual question. – Michael Hampton Mar 03 '17 at 15:27
  • 1
    There is absolutely no problem recovering when a RAID controller dies. All you need is to know what you are doing, which is really the case in this profession anyway. Now, the message you are seeing seems to be coming from the driver in the kernel, I would get in touch with your servers' vendor support and ask them why the Linux kernel complains about JBOD, they are the ones providing that driver to the kernel community after all – dyasny Mar 03 '17 at 19:33
  • Yes, I decided to use the hardware RAID. I have two identical servers, so I can practice with one and keep it as a backup for the other in production; I guess I'll try to read up on how to handle recovery without a card available. – rwfitzy Mar 03 '17 at 20:23

1 Answer


We have seen this before with our server. Our RAID controller:

16:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05)

The cause was a defective EEPROM on the motherboard; we had to call IBM Hardware Service for a warranty claim.
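
If you want to confirm which controller and kernel driver are in play before opening a support call, standard lspci and dmesg checks look like this (the 16:00.0 slot is taken from the listing above; yours may differ):

    # List RAID controllers and show the kernel driver bound to this one
    lspci -nn | grep -i raid
    lspci -k -s 16:00.0

    # Look for megaraid_sas messages about the controller's health
    dmesg | grep -i megaraid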