
I tried to remount a filesystem, previously mounted read-only, as read-write:

mount -o remount,rw /mountpoint

Unfortunately it did not work:

mount: /mountpoint not mounted already, or bad option

dmesg reports:

[2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list.  Please umount/remount instead

Unmounting does not work either:

umount /mountpoint
umount: /mountpoint: device is busy.
    (In some cases useful info about processes that use
     the device is found by lsof(8) or fuser(1))

Unfortunately, neither lsof nor fuser shows any process accessing anything under the mount point.
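As an independent check (in case lsof/fuser miss something), open file descriptors under a mount point can also be listed by walking /proc directly. A sketch, using the `/mountpoint` path from above:

```shell
# Walk /proc and print any /proc/<pid> holding an open file under $MP.
MP=/mountpoint
for fd in /proc/[0-9]*/fd/*; do
  tgt=$(readlink "$fd" 2>/dev/null) || continue
  case $tgt in
    "$MP"/*) echo "${fd%/fd/*}" ;;   # strip /fd/<n>, leaving /proc/<pid>
  esac
done | sort -u
```

Note this only covers open file descriptors; a process's working directory (`/proc/*/cwd`) or memory-mapped files can also pin a mount, which is why `fuser -m` can report processes this loop misses.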

So: how can I clean up this unprocessed orphan inode list so that I can remount the filesystem read-write without rebooting the machine?

bmk
    Have you tried `fuser -km /mountpoint` yet? Beware though, the -k flag will kill all processes accessing that directory. – Richard Keller May 31 '12 at 00:32
  • Can you provide a little bit more insight to what dm-0 consists of? – thinice May 31 '12 at 01:28
  • I have a feeling I know what's up, but can you tell me: was the filesystem originally rw, remounted ro (due to an ATA error or whatever), and now you are trying to go rw again? – Matthew Ife Jul 01 '12 at 22:47
  • @Mlfe: The filesystem was previously remounted `ro` on purpose. It's a filesystem on an LVM volume holding a daily backup snapshot that is set to `rw` during the backup operation and back to `ro` after the backup finishes. – bmk Jul 09 '12 at 07:39

6 Answers


If you're using ext2/ext3/ext4, you should be able to use e2fsck to clean up the orphaned inodes:

e2fsck -f <device>

Note that the check must be run against the backing block device (e.g. /dev/dm-0 from the dmesg output above), not the mount point.

For reiserfs, you can use reiserfsck which will also clean up orphaned inodes.
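If you want to see what the check does before touching the real device, e2fsck can be exercised against a scratch image file without root. This is a demonstration only; the actual fix must run against the real block device (e.g. /dev/dm-0 from the dmesg output), while it is unmounted:

```shell
# Demonstration on a scratch image file (no root needed, no real device touched).
img=$(mktemp /tmp/scratch-fs.XXXXXX)
truncate -s 8M "$img"          # 8 MiB sparse file
mke2fs -q -F -t ext4 "$img"    # create an ext4 filesystem inside the file
e2fsck -f -y "$img"            # -f forces a full check; -y answers yes to all fixes
rm -f "$img"
```

The `-y` flag corresponds to "just say yes to all" mentioned in the comments below; on a filesystem with orphaned inodes you would see `Clearing orphaned inode XXXX` lines in the output.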

Richard Keller
  • Not sure why this was downvoted, perhaps provide a reason for the downvote? Running e2fsck does clean up orphaned inodes, which you'll see in the console output as `clearing orphaned inode XXXX` where XXXX is an inode number. You can easily run e2fsck without rebooting the system. After running e2fsck you should be able to remount the partition. – Richard Keller Jul 30 '12 at 22:49
  • 2
    Thanks thanks a lot.. I spend hours figuring out the error. Doing 'e2fsck -f /dev/sda1' fixed the orphaned nodes for me along with some other fixes. I just said yes to all and works fine now :) – whitehat Jul 11 '16 at 15:26
  • 1
    Thanks a lot!!. Yours commands fixed readonly VirtualBox VM disc after unsucessfull new VirtualBox version install: sudo e2fsck -f /dev/sda1 – Andrew Aug 09 '17 at 22:52
  • 2
    Perfect, worked for me on root partition. The accepted answer (reboot) did not work alone. I did have to reboot after e2fsck so seems like you do still need a maintenance window. – AdamS Sep 01 '17 at 08:05
  • 2
    Better answer than the accepted one. That worked perfectly for my VPS. Found a lot errors and fixed it, than reboot and everything is running again. Saved my day. – Brain Foo Long Nov 06 '17 at 09:37
  • 1
    this worked for me. :yay: – deepdive Sep 09 '19 at 05:39
  • This also saved my VirtualBox VM. However, I had to use recovery mode as root; I then found the mount point using `mount -t ext4`, which gave me /dev/sda2. – MunkyOnline Jun 11 '20 at 18:50

e2fsck -f <mount point> won't work.

First find out the mount points with

sudo mount -l

Then fsck the drive directly.

For example, in my case:

sudo e2fsck -f /dev/xvda2
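`findmnt` can also resolve the backing device for a given mount point in one step. A sketch, using `/` as the example path:

```shell
# Resolve the source device behind a mount point.
dev=$(findmnt -n -o SOURCE /)   # -n: no header, -o SOURCE: device column only
echo "$dev"                     # e.g. /dev/xvda2
# sudo e2fsck -f "$dev"         # then run the check against that device (unmounted!)
```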
Ganesh Krishnan

You clean up the unprocessed orphan inode list by unmounting and remounting the filesystem.

An extended discussion from the linux-ext4 mailing list has more information about what this message means and why it may appear. In short, one of two things has happened: either you've run into a kernel bug, or, much more likely, some filesystem corruption occurred during one of the previous times you remounted the filesystem read-only, which is probably why the system thinks something is still using the filesystem when nothing is.

If it's been a year and you still haven't rebooted the machine, just give up and schedule a maintenance window.

Michael Hampton
  • Meanwhile I scheduled a maintenance window and rebooted the machine. That solved the problem (I didn't expect anything else...). I will accept your answer. Probably you are right that there was some filesystem corruption - although I cannot prove that. – bmk Aug 28 '12 at 16:38

I would recommend first unmounting the partition forcefully (i.e. using the -f option) and then running a file system check using fsck.

wolfgangsz
  • 1
    Unfortunately `umount -f` didn't succeed, too. The error message is the same as with a plain `umount`. – bmk Jun 08 '11 at 11:55

You should probably try a lazy unmount, i.e.:

umount -l /mountpoint

I was facing the same issue on an AWS EC2 machine. To complicate the resolution, the affected volume was the root volume of the EC2 instance, so the instance failed to boot and SSH access to it was impossible.

The following steps helped me resolve the issue:

  1. Detach the volume from the EC2 instance.
  2. Configure a new EC2 instance using the same AMI and in the same AZ as that of the old one.
  3. Attach the volume (detached in Step 1) to the new instance.
  4. Execute the following commands:
# Switch to Root user:
sudo -i

# Identify the device Filesystem name and save it as a variable:
lsblk
rescuedev=/dev/xvdf1    # Mention the right Filesystem for the particular volume.

# Use /mnt as the mount point:
rescuemnt=/mnt
mkdir -p $rescuemnt
mount $rescuedev $rescuemnt

# Mount special file systems and change the root directory (chroot) to the newly mounted file system:
for i in proc sys dev run; do mount --bind /$i $rescuemnt/$i ; done
chroot $rescuemnt

# Download, install and execute EC2Rescue tool for Linux to fix the issues: 
curl -O https://s3.amazonaws.com/ec2rescuelinux/ec2rl.tgz
tar -xf ec2rl.tgz
cd ec2rl-<version_number>
./ec2rl run
cat /var/tmp/ec2rl/*/Main.log | more
./ec2rl run --remediate

# Switch back from the Root user and unmount the volume:
exit
umount $rescuemnt/{proc,sys,dev,run,}
  5. Shut down the EC2 instance and detach the volume.
  6. Attach the volume to the original instance and start the EC2 instance.
Vishwas M.R