I used to do ls path-to-whatever | wc -l, until I discovered that it consumes a huge amount of memory. Then I moved to find path-to-whatever -name "*" | wc -l, which seems to use a much more reasonable amount of memory, regardless of how many files you have.
Then I learned that ls is slow and less memory-efficient mainly because it sorts the results. By using ls -f | grep -c . instead, one gets very fast results; the only problem is filenames that contain newlines, which throw off the count. However, that is a very minor problem for most use cases.
Is this the fastest way to count files?
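For comparison, one way to sidestep the newline caveat entirely is to count directory entries directly instead of counting lines of command output. Below is a minimal sketch of my own (not from any particular tool) using the portable POSIX opendir/readdir interface; the file name and the choice to skip "." and ".." are just illustrative:

    /* count_readdir.c - sketch: count directory entries with POSIX
     * opendir/readdir instead of parsing ls output, so filenames
     * containing newlines cannot skew the count. Skips "." and "..". */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        const char *path = argc > 1 ? argv[1] : ".";
        DIR *dir = opendir(path);
        if (dir == NULL) {
            perror("opendir");
            return 1;
        }

        long count = 0;
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL) {
            if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
                continue;
            count++;
        }
        closedir(dir);

        printf("%ld\n", count);
        return 0;
    }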
EDIT / Possible Answer: It seems that when it comes to big data, some versions of ls, find, etc. have been reported to hang with more than 8 million files (this still needs to be confirmed). To handle very large file counts (my guess is more than 2.2 billion), one should use the getdents64 system call instead of getdents, which can be done from most programming languages that support POSIX standards. Some filesystems might offer faster non-POSIX methods for counting files.
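For reference, here is a minimal Linux-only sketch of that approach, invoking getdents64 through syscall() and walking the returned records. The 1 MiB buffer size and the file name are arbitrary choices of mine, and unlike the readdir example above it counts every entry, including "." and ".." (matching ls -f output):

    /* count_getdents64.c - sketch: count directory entries with the raw
     * getdents64 system call (Linux-specific, not portable POSIX).
     * Build with: cc -O2 -o count_getdents64 count_getdents64.c */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Layout of the variable-length records the kernel packs into the
     * buffer; d_reclen gives the size of each record. */
    struct linux_dirent64 {
        unsigned long long d_ino;
        long long          d_off;
        unsigned short     d_reclen;
        unsigned char      d_type;
        char               d_name[];
    };

    #define BUF_SIZE (1024 * 1024)   /* large buffer => fewer syscalls */

    int main(int argc, char *argv[])
    {
        const char *path = argc > 1 ? argv[1] : ".";
        int fd = open(path, O_RDONLY | O_DIRECTORY);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        char *buf = malloc(BUF_SIZE);
        if (buf == NULL) {
            perror("malloc");
            return 1;
        }

        unsigned long long count = 0;
        for (;;) {
            long nread = syscall(SYS_getdents64, fd, buf, BUF_SIZE);
            if (nread == -1) {
                perror("getdents64");
                return 1;
            }
            if (nread == 0)          /* end of directory */
                break;

            for (long pos = 0; pos < nread; ) {
                struct linux_dirent64 *d = (struct linux_dirent64 *)(buf + pos);
                count++;             /* counts "." and ".." as well */
                pos += d->d_reclen;
            }
        }

        free(buf);
        close(fd);
        printf("%llu\n", count);
        return 0;
    }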