
I'm setting up a backup of my vital data (database and images, mainly) to S3 from my Ubuntu box. I've set up Amazon S3, and installed S3FS to mount the bucket on my machine. This is working well, but when I test my script, I'm getting a warning:

s3fs: MOUNTPOINT directory /aaa/bbb/ccc/ is not empty.
s3fs: if you are sure this is safe, can use the 'nonempty' mount option.

So, I'm not sure of the theory of how the mounted drive should work. I was thinking that I could mount the drive just before I put a new file into the directory to back up, and then unmount the drive when I'm done. Is this flawed logic? Should I just mount the drives once and leave them? If the mount fails for some reason and I need to re-mount, should I be emptying the directory before re-mounting?

For additional information, I'll be overwriting database backups on a rotational basis, and rsyncing a large directory of images.
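For what it's worth, the rotation and sync I'm planning look roughly like this. The database name, paths, and the seven-day rotation scheme are just examples, and it assumes the bucket is already mounted at `/aaa/bbb/ccc`:

```shell
#!/bin/sh
# Example rotation: name each dump after the weekday, so a new dump
# overwrites last week's copy of the same name (7 dumps kept total).
DAY=$(date +%a)                          # e.g. "Mon"
DEST=/aaa/bbb/ccc/db-"$DAY".sql.gz       # placeholder destination

pg_dump mydb | gzip > "$DEST"            # "mydb" is a placeholder database

# Mirror the image directory into the bucket. --size-only skips
# checksumming, which over s3fs would mean re-downloading every file.
rsync -rtv --size-only /var/www/images/ /aaa/bbb/ccc/images/
```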

Any help with the theory, or ideas for how I can set this up effectively using best practices, would be much appreciated!

Edit: Or would s3cmd be better suited to my backup plans?
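If s3cmd is the better fit, I imagine the equivalent would be something like the sketch below. Since s3cmd talks to S3 directly over HTTP, no mount point is involved at all, which would sidestep the non-empty-directory warning entirely (bucket name and paths are placeholders):

```shell
#!/bin/sh
# Upload the day's dump directly to the bucket (placeholder names).
s3cmd put /var/backups/db-"$(date +%a)".sql.gz s3://mybucket/db/

# One-way sync of the image directory; --delete-removed also deletes
# objects in the bucket whose local files have been removed.
s3cmd sync --delete-removed /var/www/images/ s3://mybucket/images/
```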

  • So *is* the /aaa/bbb/ccc mount point directory empty at that point in time when your script is executing? Put something like `ls -a /aaa/bbb/ccc` just before the mount. – user Sep 06 '13 at 09:41
  • Downvoter - why and suggested improvements please. Just a downvote doesn't help anyone – dKen Sep 06 '13 at 09:44
  • @MichaelKjörling When I first set it up, it's empty, but as I add files to it, it has contents every subsequent mount. I just had a thought - when I unmount, that directory shouldn't exist any more? – dKen Sep 06 '13 at 09:46
  • You have a specific error message and *show* no effort in fixing it. That falls under "this question does not show any research effort" IMO. Show us what you have tried to correct the error and I'll probably be happy to retract my downvote (and might even change it into an upvote). – user Sep 06 '13 at 09:46
  • Mount point directories exist on both sides of the file system barrier. On the unmounted side, normally you want the mount point directory to be empty (that's what s3fs is warning about; the directory is not empty, and it's saying you probably don't want to do the mount there and then, telling you how to override the error if you really want to). When mounted, it shows whatever is on the file system in question. When unmounted, it goes back to whatever is in that directory on the file system that holds the mount point directory. This is basic *nix file system management, and not s3fs specific. – user Sep 06 '13 at 09:50
  • Thanks @MichaelKjörling. So, I'm mounting the drive and then copying a file into that mounted directory to be sent to `S3`. When I un-mount the drive, the files I copied into that directory when it was mounted are still there. So when I re-mount, I get that warning. Should those files copied in there when it was mounted still be there when I unmount? Sorry for the lack of general knowledge here. – dKen Sep 06 '13 at 10:00
  • Generally no, the files on one file system should not be visible when that file system is unmounted. You *might* have better luck reframing your question as "why does s3fs behave like that, and what can I do about it?" (to which unfortunately I have no immediate suggestion). That said, it might be a better fit for Unix&Linux or AskUbuntu. – user Sep 06 '13 at 12:22
  • @MichaelKjörling Thanks Michael - great suggestion. I'll ask another question and hopefully find out what is going on – dKen Sep 06 '13 at 12:44

0 Answers