
I erroneously restarted my server before completing a filesystem resize. Here's the command that ran successfully before restarting:

lvresize -L -400GB /dev/mapper/vg_yavin-lv_home

At boot, I get this error:

/dev/mapper/vg_yavin-lv_home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY

When I attempt fsck -y /dev/mapper/vg_yavin-lv_home, I get this:

e2fsck 1.41.12 (17-May-2010)
Error reading block 63471616 (Invalid argument).  Ignore error? yes

Force rewrite? yes

Error writing block 63471616 (Invalid argument).  Ignore error? yes

Superblock has an invalid journal (inode 8).
Clear? yes

*** ext3 journal has been deleted - filesystem is now ext2 only ***

Superblock has_journal flag is clear, but a journal inode is present.
Clear? yes

The filesystem size (according to the superblock) is 127047680 blocks
The physical size of the device is 22190080 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes

Error writing block 63471616 (Invalid argument).  Ignore error? yes

If I comment out that filesystem in fstab, I can boot, but is there a way to recover that filesystem?

eisaacson
  • As I said in your other question, anything can happen. Do you have a backup? – Nathan C Jun 20 '13 at 17:18
  • @NathanC haha!! You're all over this. It's not pertinent data. Like I said, it's a new install so we really didn't have much of anything in our home directory. It's almost entirely what Redhat put there. Maybe I should be looking at how to remove that filesystem and start over. – eisaacson Jun 20 '13 at 17:20

4 Answers


ProTip: When it says

Run fsck manually

What it actually means is

Run fsck in interactive mode and evaluate the output to decide what you want to do

NOT "Blindly use -y because that's what everyone else seems to do and it can't hurt anything".
fsck -y can be destructive. That's why it's not the default behavior.
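For example (a sketch using the device name from the question), a read-only pass first shows you what's wrong without committing to any repairs:

e2fsck -n /dev/mapper/vg_yavin-lv_home    # -n: open read-only, answer "no" to every prompt
e2fsck /dev/mapper/vg_yavin-lv_home       # then fix interactively, one prompt at a time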


It sounds like you have some pretty serious corruption (the "Error reading block 63471616" bit makes me suspect physical disk damage), and frankly fsck may have made things worse.

If you have backups, now would be the time to use them. If not, and the data is important, I would image the partition (you can try running recovery tools like debugfs on the image).
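Something along these lines (the image path here is just an example; any location with enough free space works):

dd if=/dev/mapper/vg_yavin-lv_home of=/mnt/backup/lv_home.img bs=4M conv=noerror,sync    # keep going past read errors, pad unreadable blocks
debugfs /mnt/backup/lv_home.img    # poke at the copy, not the original

GNU ddrescue, if you have it installed, handles failing disks better than plain dd.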

Ultimately you probably want to just recreate the filesystem (newfs on BSD; mke2fs / mkfs.ext4 on Linux) on the partition -- if you're sure this isn't because of a physical disk failure.

voretaq7
  • I didn't blindly use `-y`. I included the results of `-y` because that's how it ended up when I went through line by line anyway. – eisaacson Jun 20 '13 at 17:35

Note that the following method will not get your data back, but it may restore your volume group metadata. The LVM documentation in the product guide has detailed explanations of the commands below.

Comment out the filesystem in fstab and boot. Now find the VG on which you ran lvresize; from your command it's vg_yavin.

Run this

vgcfgrestore --list <VG-NAME>

This lists the metadata archives LVM saved before each significant operation on the VG. Find the file corresponding to your lvresize operation; it should normally be the most recent one.

On that file run

 vgcfgrestore --file /etc/lvm/archive/<file-name> <VG-NAME>

This will restore the metadata of the VG before the lvresize.

Boot up normally and see whether it works.
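Put together for this question's VG, the sequence would look roughly like this (the archive filename is a placeholder; use whichever one --list shows from just before the lvresize, and note the LV usually has to be inactive first):

lvchange -an /dev/vg_yavin/lv_home        # deactivate the LV before restoring metadata
vgcfgrestore --list vg_yavin              # pick the archive from just before the lvresize
vgcfgrestore --file /etc/lvm/archive/vg_yavin_00042-123456789.vg vg_yavin    # placeholder filename
lvchange -ay /dev/vg_yavin/lv_home        # reactivate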

As for the error message: the filesystem's primary superblock is corrupt. Try one of the backup superblocks; their locations are listed in a saved dumpe2fs output, if you have one. Then:

e2fsck -b <backup_sb> <disk-name>

But given the extent of the corruption, whether this works is a matter of luck.
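If you don't have a saved dumpe2fs output, mke2fs -n prints where the backup superblocks would be without writing anything, assuming you give it the same parameters the filesystem was created with; for a 4 KiB block size the first backup is usually at block 32768:

mke2fs -n /dev/mapper/vg_yavin-lv_home    # -n: dry run only, nothing is written
e2fsck -b 32768 /dev/mapper/vg_yavin-lv_home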

Soham Chakraborty

Given your last question, you probably just need to run resize2fs as directed. Currently, your volume has shrunk, but the filesystem has not shrunk to match it. Do that, run fsck again, and you'll hopefully be OK.
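A minimal sketch of that (though as the comment below notes, it may still fail given the damage): resize2fs insists on a recently checked filesystem, and with no size argument it resizes to match the device:

e2fsck -f /dev/mapper/vg_yavin-lv_home    # resize2fs refuses to run without a fresh check
resize2fs /dev/mapper/vg_yavin-lv_home    # no size given: resize to match the device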

Christopher Karel
  • That was my thought too. It didn't work. It seems like it would work to reverse the decrease, like +400GB, but I'd have to decrease the other to have space for that. That'd require `umount /` which seems a little too sketchy to me. – eisaacson Jun 20 '13 at 18:47

Here's what I ended up doing, and it seems to be working fine now:

1. Comment out the line in /etc/fstab:

#/dev/mapper/vg_yavin-lv_home /home                   ext4    defaults        1 2

2. Restart.

3. Recreate and remount (see the note below):

mkfs -c /dev/mapper/vg_yavin-lv_home
fsck /dev/mapper/vg_yavin-lv_home
mount /dev/mapper/vg_yavin-lv_home /home

4. Uncomment the line in /etc/fstab:

/dev/mapper/vg_yavin-lv_home /home                   ext4    defaults        1 2

5. Restart.

Of course, we lost all our files but we really didn't have anything in there to worry about.
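One note on step 3: plain mkfs with no -t creates ext2, while the fstab entry says ext4, so matching the type explicitly would look like:

mkfs -t ext4 -c /dev/mapper/vg_yavin-lv_home    # -c: check for bad blocks while creating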

eisaacson