
I have a Node.js server A with MongoDB as its database. There is another remote server B (it doesn't need to be Node-based) which exposes an HTTP GET API '/status' and returns either 'FREE' or 'BUSY' as the response.

When a user hits a particular API endpoint in server A (say POST /test), I wish to start polling server B's status API every minute, until server B returns 'FREE' as the response. The user doesn't need to wait until server B returns a 'FREE' response (polling B is a background job in server A). Once server A gets a 'FREE' response from B, it shall send out an email to the user.

How can this be achieved in server A, keeping in mind that the number of concurrent users can grow large?

Sampath Kumar
  • Will polling server B just be a background process, or does the user need to be aware of the FREE/BUSY return values? Either way, check https://nodejs.org/api/http.html#http_http_get_options_callback for a basic implementation of a GET request in Node.js. – tiomno May 17 '17 at 11:33
  • @tiomno: Polling server B will just be a background process. Once server B returns FREE, I shall do some task like sending an email. – Sampath Kumar May 18 '17 at 05:23
  • Please check my answer below. – tiomno May 18 '17 at 06:21

3 Answers


I suggest you use Agenda: https://www.npmjs.com/package/agenda With Agenda you can create recurring schedules and schedule pretty much anything under them quite flexibly.

I also suggest the request module for making HTTP GET/POST requests: https://www.npmjs.com/package/request
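A minimal sketch of wiring these two suggestions together. This assumes Agenda's `define`/`every`/`start` API and that server B answers with a plain 'FREE' or 'BUSY' body, as in the question; the MongoDB URL, the job name, and the `isFree` helper are placeholders I've made up, not part of Agenda itself:

```javascript
// Sketch only: assumes Agenda and request are installed; the URL,
// job name, and helper below are illustrative placeholders.
function startPolling(mongoUrl, statusUrl) {
    // Required lazily so the pure helper below stays usable on its own.
    const Agenda = require('agenda');
    const request = require('request');
    const agenda = new Agenda({ db: { address: mongoUrl } });

    agenda.define('poll server B', (job, done) => {
        request.get(statusUrl, (err, res, body) => {
            if (!err && isFree(body)) {
                // Server B is free: send the notification email here,
                // and cancel the recurring job if no users are waiting.
            }
            done();
        });
    });

    agenda.on('ready', () => {
        agenda.every('1 minute', 'poll server B'); // the interval from the question
        agenda.start();
    });
    return agenda;
}

// Pure helper: treats anything other than 'FREE' as busy.
function isFree(body) {
    return String(body).trim() === 'FREE';
}
```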

Sharjeel Ahmed

Going from the example in the Node.js docs, I'd go with something like the code below. I tested it and it works. BTW, I'm assuming here that the API response is something like {"status":"BUSY"} or {"status":"FREE"}.

const http = require('http');

const poll = {
    pollB: function() {
        http.get('http://serverB/status', (res) => {
            const { statusCode } = res;

            let error;
            if (statusCode !== 200) {
                error = new Error(`Request Failed.\n` +
                    `Status Code: ${statusCode}`);
            }

            if (error) {
                console.error(error.message);
                res.resume();
            } else {
                res.setEncoding('utf8');
                let rawData = '';
                res.on('data', (chunk) => { rawData += chunk; });
                res.on('end', () => {
                    try {
                        const parsedData = JSON.parse(rawData);

                        // The important logic comes here
                        if (parsedData.status === 'BUSY') {
                            setTimeout(poll.pollB, 60000); // poll again in one minute, per the question
                        } else {
                            // Call the background process you need to
                        }
                    } catch (e) {
                        console.error(e.message);
                    }
                });
            }
        }).on('error', (e) => {
            console.error(`Got error: ${e.message}`);
        });
    }
}

poll.pollB();

You probably want to play with this script and get rid of unnecessary code for you, but that's homework ;)
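One way to cope with many concurrent users is to keep a single shared poll loop and just record who is waiting, instead of starting one loop per request. A sketch of that idea, assuming the plain-status response from the question; `sendEmail` is a placeholder for whatever mailer you use (e.g. nodemailer):

```javascript
// One shared queue of users waiting for server B to become free.
const waiting = new Set();

// Called from the POST /test handler for each incoming user.
function subscribe(email) {
    waiting.add(email);
}

// Called with the parsed status each time the poll gets a response.
// Returns the list of addresses that were notified (empty if still busy).
function handleStatus(status, sendEmail) {
    if (status !== 'FREE' || waiting.size === 0) return [];
    const notified = [...waiting];
    waiting.clear(); // clear first so a slow mailer can't double-send
    notified.forEach(sendEmail);
    return notified;
}
```

With this shape, a thousand concurrent POST /test requests cost one `Set.add` each, while server B still only sees one request per minute.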

Update:

For coping with a lot of concurrency in Node.js, I'd recommend implementing a cluster or using a framework. Here are some links to start researching the subject:

How to fully utilise server capacity for Node.js Web Apps

How to Create a Node.js Cluster for Speeding Up Your Apps

Node.js v7.10.0 Documentation :: cluster

ActionHero.js :: Fantastic node.js framework for implementing an API, background tasks, cluster using http, sockets, websockets

tiomno
  • Your code answers the first part of my question, `How can this be achieved in server A` . I would also like to know if this simple setTimeout based implementation is enough to satisfy the second part of my question `keeping in mind that the number of concurrent users can go large`? – Sampath Kumar May 18 '17 at 07:21
  • If this process has to run for every single request, you just need to implement a cluster or a similar solution to scale when there is too much demand on your server. That's a fairly complex task, and I'd recommend searching for a solution online. I'll update the answer with a few links. – tiomno May 19 '17 at 03:28

Use a library like request, superagent, or restify-clients to call server B. I would recommend avoiding polling and instead using a webhook when calling B (assuming you also author B). If you can't change B, then setTimeout can be used to schedule subsequent calls at a one-minute interval, as your question requires.

Kevin