S3FS tries to make an S3 bucket appear as part of the local filesystem, as if it were regular block storage, which it isn't.
S3 is object storage: objects are written and replaced as whole units. If you want to edit part of an object, or the metadata attached to it, you have to rewrite the entire object, which is expensive in terms of time for anything large.
Changing the owner of a file stored in S3 translates to a change in the object's metadata. Object metadata is immutable once the object is written, so the whole object has to be copied again with the new metadata attached to it.
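To illustrate at the API level: S3 has no "update metadata" call, so a tool like s3fs has to issue a CopyObject onto the same key with `MetadataDirective="REPLACE"`. The sketch below shows this with a boto3-style call; the bucket, key, and metadata names are placeholders, and a tiny stub client stands in for a real S3 connection so the example runs without AWS credentials (with boto3 you would pass `boto3.client("s3")` instead).

```python
# Sketch: changing metadata on S3 means copying the object onto itself.
# CopyObject with MetadataDirective="REPLACE" rewrites the whole object.

def replace_metadata(s3, bucket, key, new_metadata):
    """Set new metadata on an existing object via a self-copy.

    `s3` is expected to look like a boto3 S3 client; this is roughly
    how s3fs-style tools implement chown/chmod-as-metadata.
    """
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        Metadata=new_metadata,
        MetadataDirective="REPLACE",  # discard old metadata, attach new
    )

# Stub client so the sketch runs offline; it just records the call.
class StubS3:
    def __init__(self):
        self.calls = []

    def copy_object(self, **kwargs):
        self.calls.append(kwargs)

s3 = StubS3()
replace_metadata(s3, "my-bucket", "path/file.txt", {"uid": "1000"})
print(s3.calls[0]["MetadataDirective"])  # REPLACE
```

Note that even though the copy is server-side, S3 still rewrites the full object internally, so the cost scales with object size, not with the size of the metadata change.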
This is fundamentally different from how regular filesystems on block storage work. There, changing ownership just rewrites the single block (usually 4KB-16KB) that holds the file's metadata. With S3 you need to rewrite the whole object.
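To put rough numbers on that difference (the sizes below are illustrative assumptions, not measurements):

```python
# Illustrative cost of changing the owner of a 1 GiB file.
BLOCK_SIZE = 4 * 1024       # typical filesystem block: 4 KiB
OBJECT_SIZE = 1 * 1024**3   # assume a 1 GiB file

# Block storage: only the block holding the file's metadata is rewritten.
block_storage_bytes = BLOCK_SIZE

# S3: metadata is part of the object, so the whole object is rewritten.
s3_bytes = OBJECT_SIZE

print(s3_bytes // block_storage_bytes)  # 262144x more data rewritten
```

The ratio grows linearly with file size, which is why these operations feel fine on small files and painfully slow on large ones.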
This is a case where S3FS is a leaky abstraction.
If you need shared storage between multiple EC2 instances, Elastic File System (EFS) is a much better choice; it is an actual (NFS-based) filesystem, so operations like the one you described will be a lot faster.