
I am unable to boot my CentOS server. It runs version 6.10 and has RAID 5 (or 10) with 4 hard drives. I can't get into the BIOS anymore (it is a Phoenix cME FirstBIOS Pro Setup Utility). When it boots up, it just goes straight to a blank screen even though I was hitting the Esc (escape), F2, or F10 keys. The first drive bay LED is no longer lit, regardless of swapping in other drive trays. I was hosting a website on it and it has cPanel installed. I have backup files from public_html, but I really need to get the SQL database so I can host the same site again on a new server.

I have thought of two possible plans:

(1) Try to fix the current server so I can get in and copy the data from its cPanel page. After it had been failing for a long time, I was able to boot it up yesterday and SSH to it from my other computer for a few minutes. Then it stopped booting properly and the first hard drive bay is no longer lit. I now have this server with me at home.

(2) Slave one of these hard drives into another server. Install the same version of CentOS on the primary hard drive of a temporary server (I have one old server that can only hold two hard drives), slave in one of the old hard drives from the failed server (I don't know which one has the data, so I will try each of the four drives one by one), and follow the guide from this page (https://documentation.cpanel.net/display/CKB/Full+Disaster+Recovery#96beabb132b941e0b523aaa5e067076a). I tried that too, but I got stuck at step 5, where I could not mount the drive.

I would really like to try option one, but I am no longer able to get into the BIOS (Phoenix cME FirstBIOS Pro), as there is no activity on the monitor and the first drive bay has no light. The system is a Supermicro. I am trying to get help from the cPanel team too, but as I am not having any luck, I am also here for more suggestions and help. Please let me know which would be the better resolution and how to proceed. Thanks!


1 Answer


It is probably a hardware problem if the BIOS doesn't show up. It could be that it hangs during POST because of a faulty drive, but even then that is a bug in the hardware or in the BIOS; it shouldn't happen.

I remember some Intel motherboards that didn't boot if you had GRUB 2 installed in the MBR of a hard drive; they hung during POST on drive detection, but there was no blank screen.

If enough drives are alive to assemble the RAID, you can connect all of them to another computer and CentOS should generally boot there. Networking could break (because the other machine has different interfaces), and software license activation could fail if licenses were tied to the old hardware (CPU, etc.), but all of that can be fixed, because the OS will boot.

If there is no other computer, you can safely disconnect all the drives and reconnect them one by one, starting the system and going into the BIOS each time, until you identify the faulty one. It is generally not harmful to let CentOS boot once it is able to assemble the array (in a degraded state), but it could be dangerous if the array you assembled contains another dying drive (one that still works but could fail, or that has undiscovered bad blocks). You will be able to reconnect the rest of the drives and rebuild onto them afterwards.
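Once the system is up, you can verify whether the array assembled and in what state; a quick sketch (the /dev/md0 name is an assumption, use whatever /proc/mdstat actually shows):

    cat /proc/mdstat            # e.g. "[4/3] [_UUU]" on a 4-drive array means degraded
    mdadm --detail /dev/md0     # shows which member is missing or failed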

It is not a bad idea to check the S.M.A.R.T. data of each drive from some live system, and to do a read test by reading to null (dd if=/dev/sdX of=/dev/null) or even to some backup drive (of=/mnt/usbdrive/driveN.img), so you have images to work with later if (when) something goes wrong.
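A minimal sketch of such a check, assuming the drive appears as /dev/sdX and a backup drive is mounted at /mnt/usbdrive (both placeholders):

    smartctl -H /dev/sdX        # quick health verdict (needs smartmontools installed)
    smartctl -a /dev/sdX        # full attributes: reallocated/pending sector counts, etc.
    # Plain read test, discarding the data (status=progress needs a reasonably recent GNU dd):
    dd if=/dev/sdX of=/dev/null bs=1M status=progress
    # Or keep an image for later recovery work:
    dd if=/dev/sdX of=/mnt/usbdrive/driveN.img bs=1M conv=noerror,sync status=progress

If a drive turns out to actually be failing, GNU ddrescue handles read errors much better than plain dd.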

You don't need to install the same version of Linux to access data on a software RAID. Any distro of any version that supports MD RAID (which I believe is what you mean by "software RAID") is able to assemble this RAID and access the data, as long as enough drives are alive. This includes live systems: you can assemble and repair the RAID from a live system, then boot CentOS from it.
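A rough sketch of doing that from a live system (device names and mount point are assumptions; note also that a stock CentOS/cPanel install often puts its filesystems on LVM on top of the RAID, which is a common reason a plain mount of the array fails):

    mdadm --assemble --scan             # find RAID members and assemble the arrays
    cat /proc/mdstat                    # see what came up, e.g. /dev/md0 (name may differ)
    mkdir -p /mnt/recovery
    mount -o ro /dev/md0 /mnt/recovery  # "unknown filesystem type 'LVM2_member'" means LVM
    # If it is LVM, activate the volume group and mount the logical volume instead:
    vgchange -ay
    lvs                                 # list logical volumes, pick the one holding the data
    mount -o ro /dev/VolGroup/lv_root /mnt/recovery   # volume names are assumptions

With the filesystem mounted, the MySQL databases you need would typically be under /var/lib/mysql.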

I could even mention that Linux is able to assemble and use imsm (Intel Matrix) and ddf (many other vendors) fake RAIDs with its MD RAID layer. Many real hardware RAIDs even use the ddf on-disk structure, so Linux may be able to assemble those disks in software in the event of a controller failure. So in general, the procedure for repairing many fake RAIDs with Linux is exactly the same as for its native MD RAID.
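To see which metadata format a drive actually carries (native MD, imsm, or ddf), you can examine a member directly; mdadm --assemble --scan handles the container formats as well:

    mdadm --examine /dev/sdX    # prints the superblock/metadata type found on the member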

Nikita Kipriyanov
  • Hi Nikita, thank you for your response and advice. I did some of those tests, but I will run them again more thoroughly and get back here with the results. Thanks! Kenny – Kenny Super Noob Oct 09 '19 at 04:54