
I am experiencing very weird behaviour on my backup drive that I can neither explain nor fix.

I have an external drive mounted via NFS. I regularly run rsync jobs that back up files from my server to that drive. To save space, I only copy new/changed files and hard-link the unchanged ones to the previous backup.
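Concretely, the jobs look roughly like this (paths and dates are placeholders, not my exact setup):

    # --link-dest points at the previous backup, so files that have not changed
    # are created as hard links instead of being copied again.
    rsync -a --delete \
        --link-dest=/mnt/backup/2016-05-08 \
        /srv/data/ /mnt/backup/2016-05-09/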

My problem: after two backups, my 100G drive is full (according to `df`). However, `du -hs` tells me that the first backup takes about 15G of space while the second takes only 45M (thanks to the hard links).
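For the record, this is roughly how I compared the two views (the mount point is a placeholder for my actual NFS mount):

    df -h /mnt/backup       # reports the 100G file system as full
    du -hs /mnt/backup/*    # reports ~15G for the first backup, ~45M for the second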

This is what I checked so far:

  • I unmounted and remounted the drive, but the problem remained.

  • I checked `lsof` and `lsof +L1`, but they didn't show any deleted files still lingering around. Unmounting should have taken care of that anyway.

Edit: I also checked, but forgot to mention

  • The number of available inodes: I have enough.

  • Whether something was in the directory before it got mounted: the underlying directory is empty. (The exact commands I used for these checks are sketched below.)
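The checks above boil down to commands along these lines (again, /mnt/backup stands in for my actual mount point):

    lsof +L1 | grep /mnt/backup               # deleted-but-still-open files: none found
    df -i /mnt/backup                         # inode usage: plenty of free inodes
    umount /mnt/backup && ls -la /mnt/backup  # underlying directory: empty
    mount /mnt/backup                         # remount afterwards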

Does anybody have an idea what else I could check or what the problem could be? Any help is greatly appreciated.

fabee

1 Answer


It depends on the underlying file system. If `df` says the disk is full, possible reasons are (a sketch of the corresponding checks follows this list):

  • you are not mounting the root of the 100G NFS file system, and there are other files you cannot see because they sit above your mount point in the exported tree.

  • you have mounted some other file system on top of the problem file system, which hides the files underneath it.

  • there are snapshots taking up space in the file system. You did not provide the precise output of your `du`, but check for hidden files: try `du -hx --max-depth=1 /your/nfs/mount`, where the mount point is exactly as shown by `df`. The snapshot problem is very likely if you have been deleting (or compressing) a lot of files. If you suddenly get a lot more space tomorrow without having deleted anything else, it is virtually certain that this is the problem (and if you don't, it still may be).

  • an inode problem. Usually that shows up in `df -i` rather than in a plain `df -h`, but I'm including it for completeness because I don't know what the underlying file system is.
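A rough sketch of those checks, assuming the mount point is /your/nfs/mount exactly as `df` shows it:

    mount | grep nfs                      # is the mount the root of the export, and is
                                          # anything else mounted on top of it?
    du -hx --max-depth=1 /your/nfs/mount  # per-directory usage on this file system only
    df -i /your/nfs/mount                 # inode usage, in case "full" really means "out of inodes"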

Bottom line: the infrastructure is not provided by you, so contact your infrastructure provider to find out why it is not behaving as you expect. If your provider doesn't know, they can ask the question on an appropriate site, such as serverfault :)

Law29
  • Thanks for the answer. I tried your last two points and that's not the problem. The `du` command you provided shows 14G out of 100G. I guess I'll contact my provider. Thanks for the insights. – fabee May 09 '16 at 04:21
  • Thank you -- did the provider tell you what the problem was? I've edited in another possibility; it was certainly not your problem, but it may help someone else. – Law29 May 11 '16 at 04:48
  • No, they said that the file system would not support it, but it clearly did. I guess they don't want people using hard links because they charge for the backup space used. Thanks for the effort. – fabee May 14 '16 at 04:56