Background: On an IBM X3500 server running Debian Jessie, one of the four SAS disks in the hardware RAID 5 array (built with the server's controller) is damaged. Since then, sda1 (one of the partitions on the resulting RAID device sda) has been having problems with orphaned inodes.
After a while Debian detects 5 or 6 orphaned inodes and remounts the filesystem read-only. The operating system stays up, but many services can no longer write to the disk and stop.
Rebooting the server repairs sda1 and everything starts again. After a short while the orphaned inodes are back, and so on.
If I boot the server into rescue mode with a minimal Lubuntu, fsck.ext4 -y /dev/sda1 finishes successfully. Everything seems fine: the system reboots, Debian comes up, and everything runs smoothly (apart from ProFTPD, which does not start on its own and I have to restart it) for half an hour; then the same 5 or 6 orphaned inodes appear and sda1 is remounted read-only. If I try to copy some files to sda1 anyway, at the next reboot the orphaned inodes are far more numerous.
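In case it helps to diagnose, this is the kind of check I can run on the kernel log to separate filesystem-layer errors from SAS/SCSI-layer errors. The sample log lines below are invented for illustration, not taken from my server; on the machine itself I would run `dmesg | grep -iE 'ext4|sd[a-z]|i/o error'`:

```shell
# Hypothetical kernel-log excerpt: the first and last lines are ext4
# (filesystem-layer) errors, the middle one is a SCSI/SAS (hardware-layer) error.
sample_log='EXT4-fs error (device sda1): ext4_lookup: deleted inode referenced
sd 0:0:0:0: [sda] tag#0 FAILED Result: hostbyte=DID_ERROR
EXT4-fs (sda1): Remounting filesystem read-only'

# Count messages from each layer: only hardware-layer errors would point
# at the disks/controller; ext4-only errors suggest a software problem.
fs_errors=$(printf '%s\n' "$sample_log" | grep -c 'EXT4-fs')
hw_errors=$(printf '%s\n' "$sample_log" | grep -c 'FAILED Result')
echo "fs=$fs_errors hw=$hw_errors"
```

On this invented sample the script prints `fs=2 hw=1`; on the real server the counts would show which layer is reporting errors.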
How do I get out of this infernal loop? I cannot tell whether it is a hardware problem (and if so, why doesn't the SAS controller detect anything?) or a software one.
TNX. Ilic
P.S.: all disks were tested with the SAS controller.