
I have a ProLiant DL380 Gen10 server with 4 SAS disks. I replaced one of the disks.

Unfortunately, during that time a colleague of mine removed another SAS disk which should not have been removed (white symbol).

Now the health LED is flashing red.

Does somebody know what I have to do in order to get this working again?


I was able to reboot the server after restoring a backup. The server is no longer in a critical state.

Still, I have the issue that the originally damaged SAS disk is not working.

I took a look at the RAID, and after the disk was rebuilt it says that the disk might fail soon. It also says this error will be fixed automatically when written, and that a backup and restore is suggested.

Do I have to restore from backup again?

yagmoth555

1 Answer


Depending on the RAID level that's running and exactly which disks failed or were removed, data integrity is likely already lost.

  • RAID 5 can sustain loss/removal of just a single drive. A second failed drive at the same time or during rebuild fails the entire array = data is lost.
  • RAID 6 can sustain loss/removal of up to two drives - you should still be fine but the red LED seems to indicate otherwise.
  • RAID 10 can sustain loss/removal of one drive per RAID 1 subarray. Losing both disks from the same subarray fails the array.

RAID 6 & 10 are somewhat unlikely with just four HDDs, so chances are very high that your data has been trashed.
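
If you aren't sure which level is configured, you can check from the OS with HPE's Smart Storage Administrator CLI (ssacli), assuming it's installed - the controller slot and logical drive numbers below are just examples:

    # List all controllers, arrays and logical drives, including the RAID level
    ssacli ctrl all show config

    # More detail on a specific logical drive
    ssacli ctrl slot=0 ld 1 show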

You can check the array status in iLO. The red LED very likely tells you that there's been a real problem, so you should check if you can easily/quickly salvage recently changed files, and then restore from backup.
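
From the OS, ssacli can also report controller, logical drive and physical drive health, including predictive-failure warnings like the one on your rebuilt disk (slot 0 again being just an example):

    # Overall controller status
    ssacli ctrl all show status

    # Logical drive status (OK, Interim Recovery Mode, Failed, ...)
    ssacli ctrl slot=0 ld all show status

    # Physical drive status, including predictive failure flags
    ssacli ctrl slot=0 pd all show status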

In the future, you might want to upgrade the RAID level for a more resilient setup and make sure that servers aren't handled by unqualified personnel.

Zac67