
I have an EC2 server with s3fs mount.

I noticed that it takes anywhere from 40 seconds to a minute to change a file's permissions or owner.

$ ls -ltr directory
-rwxrwxrwx 1 apache apache 6444069321 Feb  6 15:54 big.zip
-rwxrwxrwx 1 apache apache 6444069321 Feb  6 16:12 big_1.zip
$ date
Sat  6 Feb 17:30:43 UTC 2021
$ chown apache:apache big.zip
$ date
Sat  6 Feb 17:31:07 UTC 2021

But if I do the same on a Linux server it takes a fraction of a second. Please suggest how I can make this faster.


1 Answer


S3FS tries to make an S3 bucket appear as part of the local filesystem, as if it were regular block storage, which it isn't.

S3 is object storage, which means that if you want to edit part of an object or its metadata, you have to overwrite the whole object. That is expensive in terms of time.

Changing the owner of a file stored in S3 translates to a change in the object's metadata. Object metadata is immutable, which means the whole object has to be written again with the new metadata attached.

This is fundamentally different from how regular filesystems on block storage work. There, only the single changed block (usually around 4 KB to 16 KB) has to be written to disk when you change ownership. With S3 you have to rewrite the whole object.
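To illustrate: s3fs keeps things like uid/gid as custom metadata on the object, and a metadata "change" in S3 amounts to copying the object onto itself with replaced metadata. A minimal sketch of the equivalent AWS CLI operation is below; the bucket name and metadata keys are illustrative, not the exact headers s3fs uses.

# Copy the object onto itself, replacing its metadata (bucket/keys are placeholders)
$ aws s3 cp s3://my-bucket/big.zip s3://my-bucket/big.zip \
    --metadata uid=48,gid=48 \
    --metadata-directive REPLACE

Even when S3 can perform this as a server-side copy, the whole ~6 GB object gets rewritten, which lines up with the tens of seconds you're seeing for a single chown.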

This is a case where S3FS is a leaky abstraction.


If you need shared storage between multiple EC2 instances, the Elastic File System (EFS) is a much better choice and you should look into it. Operations like the ones you described will be a lot faster on it.
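Mounting EFS on an instance is straightforward. A minimal sketch, assuming the amazon-efs-utils mount helper and a placeholder filesystem ID:

# Placeholder filesystem ID and mount point; adjust to your setup
$ sudo yum install -y amazon-efs-utils
$ sudo mkdir -p /mnt/efs
$ sudo mount -t efs fs-12345678:/ /mnt/efs
$ chown apache:apache /mnt/efs/big.zip   # metadata-only change, fraction of a second

Because EFS is an NFS-backed filesystem rather than object storage, ownership and permission changes only touch inode metadata instead of rewriting the file.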
