As I haven't been using Node.js for very long, I've run into the following problem. I understand that with the callback-driven paradigm we need to convert the loops we used in synchronous code into recursion. My problem is that I cannot understand how deep Node can go with recursion; results from different tests are inconsistent. For example, I tried this code I found on the web:
var depth = 0;

(function recurseBaby() {
    // log at every 500 calls
    (++depth % 500) || console.log(depth);
    // bail out at ~100K depth in case you're special and don't error out
    if (depth > 100000) return;
    recurseBaby();
})();
It gives me a Node exception (maximum call stack size exceeded) after roughly 18,500 recursions. So I tried to add some functionality, like working with a queue:
var depth = 0,
    Memcached = require('memcached'),
    memcacheq = new Memcached('127.0.0.1:22201');

(function recurseBaby() {
    // log at every 500 calls
    (++depth % 500) || console.log(depth);
    // bail out at ~10M depth in case you're special and don't error out
    if (depth > 10000000) return;
    memcacheq.set('test_queue', 'recurs' + depth, 0, function (error, response) {
        return recurseBaby();
    });
})();
It hasn't finished yet, but it has been running for more than 4 million recursions so far (and the queue is actually getting filled). So I would like to understand how the recursion depth limit works in Node. My guess is that if I do something asynchronous in the recursively called function, Node gets a chance to free the call stack, but I may be terribly wrong. Any clarification from more experienced Node users is welcome.
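To make my guess concrete, here is a stripped-down sketch of what I think is going on, with the memcached call replaced by setImmediate (my assumption; I haven't verified it behaves the same way, and it needs Node 0.10 or newer):

var depth = 0;

(function recurseDeferred() {
    // log at every 500 calls
    (++depth % 500) || console.log(depth);
    // same bail-out as above
    if (depth > 100000) return;
    // deferring the next call means the current frame returns
    // before the next one starts, so the call stack never grows
    setImmediate(recurseDeferred);
})();

If my reasoning is right, this version should never hit the stack limit because each call returns before the next one begins; is that also why the memcached version keeps going?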