Since reads from disk are block-sized (and ext* doesn't support block suballocation), if you've simply got a bunch of tiny files that don't come anywhere close to filling a block on their own, you'd be better off bundling them into archives. This may not be a win if you can't group related files together, though.
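If you want to measure how much you're actually losing to that, compare each file's logical size with the space allocated to it. Here's a minimal C sketch; the file list comes from the command line, and it assumes `st_blocks` is in the usual 512-byte units:

```c
/* Minimal sketch: estimate how much space a set of small files
 * wastes to block-granular allocation. Paths come from argv. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    long long data = 0, allocated = 0;
    for (int i = 1; i < argc; i++) {
        struct stat st;
        if (stat(argv[i], &st) != 0) { perror(argv[i]); continue; }
        data      += st.st_size;
        allocated += (long long)st.st_blocks * 512; /* st_blocks counts 512-byte units */
    }
    printf("logical bytes:   %lld\n", data);
    printf("allocated bytes: %lld\n", allocated);
    printf("slack:           %lld\n", allocated - data);
    return 0;
}
```

A large slack total relative to the logical total is the sign that archiving would pay off.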
Consider ext4? The `dir_index` option that's optional in ext3 is standard in ext4 and speeds up lookups in directories with lots of files. ext4 also places metadata, directory, and file blocks much more contiguously on disk, and its extent-based allocation greatly reduces the number of non-data blocks required to track each data block (although that matters more for large files than small ones). There's a proposal to inline a small file's data into its inode, but I don't think that's in upstream.
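The usual way to check whether `dir_index` is on is `tune2fs -l`, but just to show where the flag lives, here's a rough C sketch that reads it straight out of the superblock. The device path is a placeholder, the offsets follow the standard ext2/3/4 superblock layout (magic at byte 56, `s_feature_compat` at byte 92), and it needs read access to the block device:

```c
/* Rough sketch: report whether an ext2/3/4 filesystem has the
 * dir_index compat feature (bit 0x0020 of s_feature_compat). */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/sda1"; /* placeholder device */
    FILE *f = fopen(dev, "rb");
    if (!f) { perror(dev); return 1; }

    unsigned char sb[1024];
    /* The primary superblock starts 1024 bytes into the device. */
    if (fseek(f, 1024, SEEK_SET) != 0 || fread(sb, 1, sizeof sb, f) != sizeof sb) {
        perror("read superblock");
        fclose(f);
        return 1;
    }
    fclose(f);

    uint16_t magic  = (uint16_t)(sb[56] | (sb[57] << 8));  /* s_magic, expect 0xEF53 */
    uint32_t compat = sb[92] | (sb[93] << 8)
                    | ((uint32_t)sb[94] << 16) | ((uint32_t)sb[95] << 24);

    if (magic != 0xEF53) {
        fprintf(stderr, "%s: not an ext2/3/4 filesystem\n", dev);
        return 1;
    }
    printf("dir_index: %s\n", (compat & 0x0020) ? "enabled" : "disabled");
    return 0;
}
```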
If you're seek-bound (as opposed to bandwidth-bound), it may help to call `posix_fadvise(POSIX_FADV_WILLNEED)` on a set of files before reading from any of them. The kernel takes this as a hint to read those files ahead into the page cache. Do be careful, though: reading ahead more than the cache can hold is wasteful and ends up slower. There's a proposal to add `fincore` to determine when your files have been evicted, but I don't think that's upstream yet either.
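As a sketch of the batched-hint pattern (assuming the whole batch fits in cache, per the caveat above): issue `POSIX_FADV_WILLNEED` for every file in one pass, then read them in a second pass so the kernel can overlap and reorder the seeks.

```c
/* Minimal sketch: hint the kernel to pull a batch of files into the
 * page cache before we start reading through them. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* First pass: issue WILLNEED hints for every file. */
    for (int i = 1; i < argc; i++) {
        int fd = open(argv[i], O_RDONLY);
        if (fd < 0) { perror(argv[i]); continue; }
        /* offset 0, len 0 means "the whole file"; the kernel kicks off
         * readahead asynchronously and the call returns immediately. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
        close(fd);
    }
    /* Second pass (not shown): open and read each file; ideally most
     * reads now hit the page cache instead of seeking the disk. */
    return 0;
}
```

Closing the descriptor doesn't discard the readahead; the pages stay in the cache, keyed by the inode.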
If it turns out you're bound by bandwidth, keeping the files compressed with LZO or gzip can help: with those methods, the CPU can still decompress faster than the disk can read the uncompressed bytes (as opposed to LZMA or bzip2, where decompression itself can become the bottleneck).
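For gzip, reading through zlib's `gzFile` interface keeps this transparent to the rest of your code. A minimal sketch (link with `-lz`; the filename is just an example):

```c
/* Minimal sketch: stream a gzip-compressed file through zlib so the
 * disk moves fewer bytes and the CPU handles decompression. */
#include <stdio.h>
#include <zlib.h>

int main(void)
{
    gzFile gz = gzopen("data.txt.gz", "rb"); /* example filename */
    if (!gz) { fprintf(stderr, "gzopen failed\n"); return 1; }

    char buf[65536];
    long long total = 0;
    int n;
    /* gzread hands back decompressed bytes, so the caller sees plain data. */
    while ((n = gzread(gz, buf, sizeof buf)) > 0)
        total += n;  /* process buf[0..n) here */

    gzclose(gz);
    printf("decompressed %lld bytes\n", total);
    return 0;
}
```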