UPDATE: I've narrowed this down to what appears to be a different issue, and as such have asked a separate question here.
=======
I have a MongoDB instance running on localhost with two collections: "mydocs" (which has ~12,000 documents in it) and "mydoctypes" (which has only 7 documents in it).
I have a standalone Node.js script that gets a connection to the database and then runs the following:
myDb.collection('mydoctypes').find().toArray(function(err, results) {
    console.log("Got results.");
    if (err) {
        console.log("err: " + err);
    } else {
        console.log("Got doctypes: " + results.length);
    }
});
The output of that script is:
Got results.
Got doctypes: 7
If I modify the same script to access the 'mydocs' collection instead:
myDb.collection('mydocs').find().toArray(function(err, results) {
    console.log("Got results.");
    if (err) {
        console.log("err: " + err);
    } else {
        console.log("Got docs: " + results.length);
    }
});
I get no output at all. The callback, apparently, never gets fired.
== UPDATE ==
So it looks like the problem was likely too many documents causing toArray() to run out of RAM.
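The memory difference between the two approaches can be sketched without a database at all, using a plain generator as a stand-in for the cursor (fakeCursor and its documents are illustrative, not driver APIs): toArray() has to buffer every document before the callback fires, while per-document iteration only ever holds one document at a time.

```javascript
// A generator standing in for a MongoDB cursor; yields `count` fake docs.
function* fakeCursor(count) {
  for (let i = 0; i < count; i++) {
    yield { _id: i, payload: "x".repeat(10) };
  }
}

// toArray()-style: materialize every document before doing anything.
function toArrayStyle(cursor) {
  const all = [];                    // holds all 12,000 docs at once
  for (const doc of cursor) all.push(doc);
  return all.length;
}

// each()-style: handle one document at a time, keep only a counter.
function eachStyle(cursor) {
  let seen = 0;
  for (const doc of cursor) seen++;  // doc can be dropped immediately
  return seen;
}

console.log(toArrayStyle(fakeCursor(12000))); // prints 12000
console.log(eachStyle(fakeCursor(12000)));    // prints 12000, far less peak memory
```

Both report the same count; the difference is only in how much lives in memory at the peak.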
Now I'm using .each() to iterate instead, but I'm hitting a different issue: each() only runs through the first batch (whatever I set batchSize to) and never loads any more documents. The code is this:
myDb.collection('mydocs').find().batchSize(50).each(function(err, item) {
    if (err) {
        console.log("err: " + err);
    } else if (item != null) {
        process.stdout.write(".");
    }
});
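For context, each() is essentially a loop that drains the current batch and then asks the server for another one (a getMore round trip), so stalling after exactly batchSize documents suggests the next-batch fetch never completes. The intended control flow can be sketched against an in-memory stand-in for the cursor (fakeBatchedCursor below is illustrative, not a driver API):

```javascript
// Simulates a server that hands out documents in fixed-size batches.
function fakeBatchedCursor(total, batchSize) {
  let sent = 0;
  return {
    // Stand-in for the driver's getMore round trip; null = cursor exhausted.
    nextBatch() {
      if (sent >= total) return null;
      const batch = [];
      for (let i = 0; i < batchSize && sent < total; i++, sent++) {
        batch.push({ _id: sent });
      }
      return batch;
    },
  };
}

// each()-style iteration: drain the current batch, fetch the next one,
// and finally invoke the callback with null to signal end-of-cursor.
function eachDocument(cursor, callback) {
  let batch;
  while ((batch = cursor.nextBatch()) !== null) {
    for (const doc of batch) callback(null, doc);
  }
  callback(null, null); // end-of-cursor signal, like the real each()
}

let count = 0;
eachDocument(fakeBatchedCursor(120, 50), function (err, item) {
  if (item != null) count++;
});
console.log(count); // prints 120: iteration crosses batch boundaries
```

In this sketch, iteration keeps going across batch boundaries (50 + 50 + 20 documents); the real driver should behave the same way, which is why stopping at the first batch points to the getMore request rather than the iteration logic.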