It doesn't matter. I just tried this on a sample collection that has 384 entries. According to explain(), the index scan took 0 ms, while the first collection scan took 2 ms; every following collection scan took 0 ms, too.
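For what it's worth, this kind of comparison can be reproduced with a few lines of PyMongo; the collection (test.people), the field (age) and the query below are invented for the sake of the example:

```python
# Hypothetical sketch: compare a collection scan and an index scan via explain().
from pymongo import MongoClient, ASCENDING

coll = MongoClient().test.people  # assumed sample collection

def plan(query):
    """Run the query with executionStats verbosity and report the winning plan."""
    res = coll.database.command(
        {"explain": {"find": coll.name, "filter": query},
         "verbosity": "executionStats"}
    )
    stats = res["executionStats"]
    print(res["queryPlanner"]["winningPlan"]["stage"],
          stats["executionTimeMillis"], "ms,",
          stats["totalDocsExamined"], "docs examined")

plan({"age": {"$gt": 30}})               # winning plan: COLLSCAN, every doc examined
coll.create_index([("age", ASCENDING)])
plan({"age": {"$gt": 30}})               # winning plan: FETCH over an IXSCAN
```

Besides executionTimeMillis, the totalDocsExamined counter is telling: the collection scan reads every document, while the index scan only visits the matching ones.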
Does that decision depend at all on the size of the collection?
Yes. The idea of an index is that it adds cost when creating and updating data, and that cost is amortized by making queries faster. In particular, a simple list has an asymptotic insert performance of O(1) and a search time of O(N), while a B-tree has O(log N) for both. In other words, we accept slower inserts because we assume we read more often than we write, or because the data is so large that even a few O(N) reads would hurt performance, i.e. when N >> log N. To put numbers on that: at a million entries, log2 N is about 20, so a B-tree lookup touches roughly 20 keys where a full scan touches a million.
At only a few hundred elements, none of this matters much, because the difference between log N and N is small, and because the more complex algorithm's runtime overhead (the constant factor that the Landau notation hides, since it is largely implementation-dependent) plays in the same league. The same applies to your own code: it rarely pays to put 200 elements in a hashtable; iterating a list might even be faster because it avoids branching. The sketch below illustrates where the crossover lies.
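To get a feeling for that crossover, here is a rough CPython timing sketch comparing a linear scan against a binary search (the list-versus-B-tree analogy from above); the absolute numbers depend entirely on hardware and interpreter version, so treat it as a way to measure rather than as a definitive result:

```python
# Rough micro-benchmark: O(N) linear scan vs. O(log N) binary search.
import bisect
import random
import timeit

for n in (200, 1_000_000):
    data = sorted(random.sample(range(10 * n), n))
    needle = data[-1]  # worst case for the linear scan

    linear = timeit.timeit(lambda: needle in data, number=100)
    binary = timeit.timeit(
        lambda: data[bisect.bisect_left(data, needle)] == needle, number=100
    )
    print(f"n={n:>9}: linear {linear:.5f}s, bisect {binary:.5f}s (100 lookups each)")
```

At a couple of hundred elements both loops finish in microseconds and the constant factors dominate; at a million elements the O(N) scan is orders of magnitude slower.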
If the documents are huge, however, the collection scan has to wrangle far more data than a query that can be answered by just looking at the index.
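One way to see this in explain() is a covered query: if a (hypothetical) compound index contains every field the query filters on and returns, MongoDB can answer it from the index alone and never loads the documents, however large they are. A sketch, reusing the made-up people collection from above:

```python
# Hypothetical covered query: all queried/projected fields live in the index.
from pymongo import MongoClient, ASCENDING

coll = MongoClient().test.people
coll.create_index([("age", ASCENDING), ("name", ASCENDING)])

res = coll.database.command(
    {"explain": {"find": coll.name,
                 "filter": {"age": {"$gt": 30}},
                 "projection": {"_id": 0, "age": 1, "name": 1}},
     "verbosity": "executionStats"}
)
# For a covered query totalDocsExamined is 0: only index keys were read.
print(res["executionStats"]["totalDocsExamined"], "docs,",
      res["executionStats"]["totalKeysExamined"], "keys examined")
```

(This only works as long as the indexed fields are not arrays and the projection excludes _id.)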