
I tested this by comparing the speed of reading a file from a directory with 500,000 files against a directory with just 100 files. The result: both were equally fast.

Test details: I created a directory with 500,000 files (for x in {1..500000}; do touch $x; done), ran time cat test-dir/some-file, and compared this to another directory with just 100 files. Both executed equally fast, but maybe under heavy load there is a difference, or are ext4 and btrfs clever enough that we no longer need content-addressable paths?
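
For reference, a minimal sketch of that benchmark, assuming an ext4 or btrfs mount and the hypothetical directory names test-dir-big and test-dir-small:

    # create a directory with 500,000 files and one with 100 files
    mkdir -p test-dir-big test-dir-small
    (cd test-dir-big   && for x in {1..500000}; do touch "$x"; done)
    (cd test-dir-small && for x in {1..100};    do touch "$x"; done)

    # drop the page/dentry caches so the lookup is not already in memory
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

    # time a lookup + read in each directory
    time cat test-dir-big/250000 > /dev/null
    time cat test-dir-small/50   > /dev/null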

With content-addressable paths I could distribute the 500,000 files into multiple subdirectories, like this:

/www/images/persons/a/1/john.png
/www/images/persons/a/2/henrick.png
...
/www/images/persons/b/c/frederick.png
...
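
A minimal sketch of how such a two-level layout could be derived from the file name, assuming an md5-based hash and the /www/images/persons root from the example above:

    # derive a two-level subdirectory from a hash of the file name
    name="john.png"
    hash=$(printf '%s' "$name" | md5sum | cut -c1-32)
    dir="/www/images/persons/${hash:0:1}/${hash:1:1}"

    mkdir -p "$dir"
    cp "$name" "$dir/$name"   # ends up e.g. under /www/images/persons/3/f/john.png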

The 500,000 files are served via nginx to user agents, so I want to avoid added latency, but maybe that is no longer relevant with ext4 or btrfs?
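
If the concern is behaviour under load rather than a single lookup, a sketch of a load test against nginx, assuming ApacheBench (ab) is installed and that the URLs below are placeholders for one of the served images:

    # 10,000 requests, 50 concurrent, against one image in the flat directory
    ab -n 10000 -c 50 http://localhost/images/persons/john.png

    # repeat against the same file in the hashed layout and compare
    # the "Time per request" numbers
    ab -n 10000 -c 50 http://localhost/images/persons/a/1/john.png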


1 Answer


After discussing this question elsewhere, the answer seems to be that for read operations you don't need to implement content-addressable storage, because modern filesystems don't iterate linearly over the directory entries. Ext4, for example, keeps a hashed directory index (dir_index / HTree), so the filesystem finds the file's location directly.

With ext4, the only limit is the number of inodes.
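
As a quick check, assuming the filesystem lives on /dev/sda1 (a placeholder device), you can verify that the directory index feature is enabled and see how many inodes are left:

    # confirm ext4's hashed directory index is enabled (placeholder device)
    sudo tune2fs -l /dev/sda1 | grep -o 'dir_index'

    # show inode usage and limits per mounted filesystem
    df -i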
