My Amazon S3 bucket has millions of files and I am mounting it using s3fs. Any time an ls command is issued (even unintentionally), the terminal hangs.

Is there a way to limit the number of results returned to 100 when an ls command is issued in an s3fs-mounted path?

John Rotenstein

2 Answers

Try goofys (https://github.com/kahing/goofys). It doesn't limit the number of items returned for ls, but ls is about 40x faster than s3fs when there are lots of files.
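
As a rough sketch, assuming the goofys binary is installed and AWS credentials are available in the usual places (~/.aws/credentials or environment variables), mounting looks like this; my-bucket and /mnt/my-bucket are placeholders:

    # create a mount point and mount the bucket with goofys
    mkdir -p /mnt/my-bucket
    goofys my-bucket /mnt/my-bucket

    # show only the first 100 lines of the listing output
    # (ls still enumerates the directory, but goofys does it without per-file HEAD requests)
    ls /mnt/my-bucket | head -n 100

    # unmount when finished
    fusermount -u /mnt/my-bucket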

khc
  • Recent versions of s3fs include optimizations to improve ls, but it still issues one HEAD request per file. goofys avoids this cost and is much faster. – Andrew Gaul Apr 28 '20 at 02:00

It is not recommended to use s3fs in production situations. Amazon S3 is not a filesystem, so attempting to mount it can lead to synchronization problems (and other issues, such as the one you have experienced).

It would be better to use the AWS Command-Line Interface (CLI), which has commands to list, copy and sync files to/from Amazon S3. It can also list just part of a bucket by specifying a path (prefix).
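
As an illustrative sketch (bucket name and prefix are placeholders), the CLI can cap the number of objects returned or list a single path:

    # list at most 100 objects under a prefix using the low-level API command
    aws s3api list-objects-v2 --bucket my-bucket --prefix logs/ --max-items 100

    # or list one "directory" path with the higher-level command
    aws s3 ls s3://my-bucket/logs/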

John Rotenstein