I tested this by comparing the speed of reading a file from a directory with 500,000 files to reading from a directory with just 100 files. The result: both were equally fast.
Test details:
I created a directory with 500,000 files (for x in {1..500000}; do touch $x; done), ran time cat test-dir/some-file, and compared this to another directory containing just 100 files.
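For reference, the whole test can be sketched like this (directory names are my own; seq | xargs is just a faster way to create the files than a touch loop, and the cache-drop line is optional and needs root):

```shell
#!/bin/sh
mkdir -p big-dir small-dir
seq 1 500000 | (cd big-dir && xargs touch)   # 500,000 entries
seq 1 100    | (cd small-dir && xargs touch) # 100 entries
echo hello > big-dir/some-file
echo hello > small-dir/some-file
# To measure cold lookups instead of warm-cache hits (requires root):
#   sync; echo 3 > /proc/sys/vm/drop_caches
time cat big-dir/some-file
time cat small-dir/some-file
```

Without dropping caches, both reads mostly measure the dentry/page cache, which is one reason the two directories can look identical.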
Both executed equally fast, but maybe there's a difference under heavy load. Or are ext4 and btrfs clever enough that we no longer need content-addressable paths?
With content-addressable paths I could distribute the 500,000 files across multiple subdirectories, like this: /www/images/persons/a/1/john.png, /www/images/persons/a/2/henrick.png, ..., /www/images/persons/b/c/frederick.png, ...
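A minimal sketch of how such a layout could be derived (my own hypothetical helper, assuming the two path levels are the first two hex digits of a SHA-1 of the filename, which spreads the files over 16 × 16 = 256 buckets of roughly 2,000 entries each):

```shell
#!/bin/sh
# store_path: map a filename to a two-level hashed subdirectory
# under /www/images/persons (base path taken from the example above).
store_path() {
    name=$1
    # First two hex digits of the SHA-1 of the name
    prefix=$(printf '%s' "$name" | sha1sum | cut -c1-2)
    d1=$(printf '%s' "$prefix" | cut -c1)
    d2=$(printf '%s' "$prefix" | cut -c2)
    printf '/www/images/persons/%s/%s/%s\n' "$d1" "$d2" "$name"
}

store_path john.png
```

Hashing the name (rather than assigning buckets by hand) keeps the distribution even as files are added.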
The 500,000 files are served via nginx to user agents, so I want to avoid latency. Or is that no longer relevant with ext4 or btrfs?