
Yesterday my CentOS Linux 7 (Core) server experienced a hard disk problem. The HDD has since been replaced but the main partition is in read-only mode.

I'm trying to get the main partition usable again (mounted read-write), but I'm running into several issues.

mount as rw

I tried mount -n -o remount,rw / but this resulted in:

mount: / not mounted or bad option

       In some cases useful info is found in syslog - try
       dmesg | tail or so

dmesg | tail resulted in:

[  177.305240] Loading iSCSI transport class v2.0-870.
[  177.419722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[  177.446940] NFSD: starting 90-second grace period (net ffffffffa32fca00)
[  233.475427] systemd-readahead[2002]: Failed to open pack file: Read-only file system
[  267.629924] nfsd4: failed to purge old clients from recovery directory v4recovery
[  428.141830] EXT4-fs (md2): error count since last fsck: 1
[  428.141889] EXT4-fs (md2): initial error at time 1542775899: ext4_xattr_block_get:298: inode 6947097
[  428.142014] EXT4-fs (md2): last error at time 1542775899: ext4_xattr_block_get:298: inode 6947097
[  748.786866] EXT4-fs (md2): Couldn't remount RDWR because of unprocessed orphan inode list.  Please umount/remount instead
[  770.787648] EXT4-fs (md2): Couldn't remount RDWR because of unprocessed orphan inode list.  Please umount/remount instead
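
For completeness, the read-only state of / and its backing device can be confirmed with standard util-linux commands:

# Show the device and mount options currently in effect for /
findmnt -no SOURCE,OPTIONS /

# Or read the kernel's mount table directly
grep ' / ' /proc/mounts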

e2fsck

I tried to repair the orphan inode list using:

e2fsck -f /dev/md2
e2fsck 1.42.9 (28-Dec-2013)
/dev/md2 has unsupported feature(s): metadata_csum
e2fsck: Get a newer version of e2fsck!

Because the filesystem is mounted read-only, I'm unable to install a newer version of e2fsck.
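
For what it's worth, a newer e2fsprogs (1.43 or later, e.g. run from a rescue environment) could at least confirm which features are enabled before any repair is attempted; this is only a sketch and assumes the same /dev/md2 device:

# Print the superblock summary; the "Filesystem features" line should list metadata_csum
dumpe2fs -h /dev/md2 | grep -i features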

tune2fs

I tried tune2fs -O ^metadata_csum /dev/md2 and got:

tune2fs 1.42.9 (28-Dec-2013)
tune2fs: Filesystem has unsupported read-only feature(s) while trying to open /dev/md2
Couldn't find valid filesystem superblock.

How can I get this partition mounted read-write again?

Thanks

Magick
    What was done to this filesystem between the time the disk failed and now? This is not something you can really recover from without a complete reinstall. You can recover data using a Live CD from a newer Linux distro (e.g. Fedora 29, RHEL 8 beta, etc). Of course, something went terribly wrong when the system was first installed, because the filesystem should have been XFS. – Michael Hampton Dec 28 '18 at 23:46
  • I've described above what I've done since the disk failure. I'm unable to use a live CD as it is a remotely hosted server; I only have SSH access. – Magick Dec 29 '18 at 00:22
  • How is it that you have no remote access to it, then? The datacenter should be able to provide you with some sort of remote access. – Michael Hampton Dec 29 '18 at 00:29
  • I have remote access via ssh, but no physical access. – Magick Dec 29 '18 at 00:34
  • It's time to have a chat with whoever is hosting your server, then. This is really basic stuff, and you should not have accepted the server without it. – Michael Hampton Dec 29 '18 at 00:40
  • My mistake! It looks like my host does provide the ability to boot with a rescue image. Does this only allow me to recover data, or am I able to make the partition writable again? – Magick Dec 29 '18 at 00:47
  • You could try removing the unsupported ext4 feature with `tune2fs -O ^metadata_csum /dev/md###` but I don't know if it's even possible to remove once enabled. If it works, you'll get lucky. If not, then you'll have to recover all your data and reinstall (and make sure it gets done correctly this time). – Michael Hampton Dec 29 '18 at 00:50
  • Thanks. I've tried this and updated the original post. – Magick Dec 29 '18 at 01:56
  • You need to use a newer OS as your live rescue system. – Michael Hampton Dec 29 '18 at 02:49

1 Answer


You need to boot from a rescue system new enough to support the filesystem features you're using. However, given that the filesystem apparently was damaged even though (I assume) you were using a redundancy-providing RAID level, be prepared for the filesystem to be unrecoverable. Time to warm up the backups.
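
A rough sketch of what that might look like once booted into the rescue image (device names below are taken from the question; the rescue system may assemble the array under a different name):

# Assemble the software RAID arrays if the rescue system has not done so already
mdadm --assemble --scan

# Check and repair the filesystem with an e2fsprogs new enough to understand metadata_csum (1.43+)
e2fsck -f /dev/md2

# Optionally mount it to verify the data before rebooting into the installed system
mount /dev/md2 /mnt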

womble