
We are building a data feed REST service with Node.js, MongoDB, and Express. It works very well when the query result is small, but it hangs the server when a client queries a large dataset, such as 1M rows (we are already using gzip compression). Is this caused by Node.js's single-threaded design?

I would like to ask for any ideas on how to handle this.

Any comments are welcome :)

Following is the code for the service (using the JayData OData Server module):

app.use('/d.svc', $data.ODataServer({
    type: TYPE,
    CORS: true,
    database: 'odata',
    responseLimit: -1, // no limit on the number of rows returned
    checkPermission: function (access, user, entitySets, callback) {
        logger.info('Check Access Permission for User');// + JSON.stringify(user));
        if (access & $data.Access.Create) {
            if (user == 'admin') callback.success();
            else callback.error('Auth failed');
        } else callback.success();
    },
    provider: {
        name: 'mongoDB',
        databaseName: 'odata',
        address: settings.host,
        port: settings.port,
        username: USER,
        password: PASSWORD
    }
}));

Thank you very much.

Luke.

LukeHan
  • Yes, it seems like the large result set is consuming the single thread of Node. – WiredPrairie Sep 27 '13 at 10:48
  • Using cluster resolved the hang issue; it leverages all CPU cores and most of the time the response is very good (see the cluster sketch below). But I still need to find a way to improve the performance of queries returning millions of rows, thanks. – LukeHan Oct 12 '13 at 12:53
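
A minimal sketch of the cluster approach mentioned in the comment above, assuming the Express/OData app is exported from a local app.js module; the './app' path and the port are placeholders, not part of the original setup:

// cluster.js -- run one worker per CPU core so a long-running query in one
// worker does not block requests handled by the others.
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
    // Fork one worker per CPU core.
    os.cpus().forEach(function () {
        cluster.fork();
    });

    // Restart workers that die (e.g. after an out-of-memory crash).
    cluster.on('exit', function (worker) {
        console.log('Worker ' + worker.process.pid + ' exited, forking a new one');
        cluster.fork();
    });
} else {
    // Each worker runs its own instance of the Express/OData app.
    var app = require('./app'); // hypothetical module exporting the Express app
    app.listen(3000);
}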

1 Answer


You should never transmit this amount of data in one query. Imagine that you have to keep the whole result in memory on the client side. No matter what technology you use, a paged download is recommended. In the case of JayData, you can implement it based on this blog post: Synchronized Online/Offline data applications, part 2: Syncing large tables and tables with foreign relations.
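
As a rough, non-JayData-specific illustration of the paged approach, the sketch below pulls the data in chunks using the standard OData $top/$skip query options; the entity set name 'Rows', the host, and the port are placeholders for your own service:

// Fetch one page of PAGE_SIZE rows at a time from the /d.svc endpoint.
var http = require('http');

var PAGE_SIZE = 10000;

function fetchPage(skip, callback) {
    var path = '/d.svc/Rows?$format=json&$top=' + PAGE_SIZE + '&$skip=' + skip;
    http.get({ host: 'localhost', port: 3000, path: path }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () { callback(null, JSON.parse(body)); });
    }).on('error', callback);
}

// Walk through the result set page by page, processing each chunk as it
// arrives instead of holding millions of rows in memory at once.
function fetchAll(skip) {
    fetchPage(skip, function (err, page) {
        if (err) return console.error(err);
        // The exact envelope depends on the OData version ('d.results' vs 'value').
        var rows = (page.d && (page.d.results || page.d)) || page.value || [];
        // ... process this page of rows ...
        if (rows.length === PAGE_SIZE) fetchAll(skip + PAGE_SIZE);
    });
}

fetchAll(0);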

Robesz
  • I'm using MongoDB as the cache layer for an analytics application, and it requires transferring millions of records to the client (such as Tableau; there's no way to do pagination in Tableau right now). JayData is used on the server side to provide the REST service, not on the client side (where paging would be easy). (See the streaming sketch below.) – LukeHan Oct 12 '13 at 12:50
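
For the case described in the comment above, where the client cannot page, one alternative worth considering is to stream the MongoDB cursor directly into the HTTP response instead of materializing the whole result set. The sketch below uses the native mongodb driver and the JSONStream package; the route, database, and collection names are placeholders:

// Stream query results to the client as they arrive from MongoDB, keeping
// server memory usage roughly constant regardless of the result size.
var express = require('express');
var MongoClient = require('mongodb').MongoClient;
var JSONStream = require('JSONStream');

var app = express();

MongoClient.connect('mongodb://localhost:27017/odata', function (err, db) {
    if (err) throw err;

    app.get('/export/rows', function (req, res) {
        res.setHeader('Content-Type', 'application/json');
        db.collection('rows')
          .find({})
          .stream()                      // emit documents one by one
          .pipe(JSONStream.stringify())  // serialize them as a JSON array
          .pipe(res);                    // write to the client as we go
    });

    app.listen(3001);
});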