
I've noticed some strange performance behavior in the following node.js code. When the size of content is 1.4KB, the response time of the request is roughly 16ms. However, when the size of content is only 988 bytes, the response time is strangely much longer, roughly 200ms:

response.writeHead(200, {"Content-Type": "application/json"});
response.write(JSON.stringify(content, null, 0));
response.end();

This does not seem intuitive. Looking at Firebug's Net tab, the entire increase comes from the Receiving phase (Waiting is 16ms in both cases).

I've made the following change to fix it, so that both cases have a 16ms response time:

response.writeHead(200, {"Content-Type": "application/json"});
response.end(JSON.stringify(content, null, 0));

I've looked through the node.js docs but so far haven't found anything related. My guess is that this is related to buffering, but could node.js preempt between write() and end()?

Update:

This was tested on v0.10.1 on Linux.

I tried to peek into the source and have identified the difference between the two paths. The first version makes two Socket.write calls:

writeHead(...)
write(chunk)
  chunk = Buffer.byteLength(chunk).toString(16) + CRLF + chunk + CRLF;
  ret = this._send(chunk);
    this._writeRaw(chunk);
      this.connection.write(chunk);
end()
  ret = this._send('0\r\n' + this._trailer + '\r\n'); // Last chunk.
    this._writeRaw(chunk);
      this.connection.write(chunk);

The second, good version has just 1 Socket.write call:

writeHead(...)
end(chunk)
  var l = Buffer.byteLength(chunk).toString(16);
  ret = this.connection.write(this._header + l + CRLF +
                              chunk + '\r\n0\r\n' +
                              this._trailer + '\r\n', encoding);

Still not sure what makes the first version perform poorly with smaller response sizes.

bryantsai
  • As far as I know those should do essentially the same thing, and I can't reproduce this locally. Can you try coming up with a full reproducible test case and throw it in a fiddle or gist for us to check out? Are you running all this locally? – loganfsmyth May 24 '13 at 02:19
  • Also, can you clarify your env? windows? osx? linux? – bryanmac May 24 '13 at 02:26
  • possible duplicate of http://stackoverflow.com/questions/15422411/node-js-response-time – Andrey Sidorov May 24 '13 at 02:43
  • somewhat related - http://stackoverflow.com/questions/11335510/nodejs-response-speed-and-nginx – bryantsai May 24 '13 at 04:45

1 Answer


Short answer:

You can explicitly set the Content-Length header. This reduces the response time from around 200ms to 20ms.

var body = JSON.stringify(content, null, 0);
response.writeHead(200, {
    "Content-Type": "application/json",
    // use the byte length, not the character count,
    // in case the JSON contains multibyte characters
    "Content-Length": Buffer.byteLength(body)
});
response.write(body);
response.end();

Facts:

After a few experiments, I found that if the content is small enough (in my case, less than 1310 bytes) to fit in a single MTU, the response time is around 200ms. However, for any content larger than that, the response time is roughly 20ms.

Then I used Wireshark to capture the packets on the server side. Below is a typical result:

For small content:

  • [0000ms]response.write(content)
  • [0200ms]received the ACK packet from the client
  • [0201ms]response.end()

For larger content:

  • [0000ms]response.write(content) //The first MTU is sent
  • [0001ms]the second MTU is sent
  • [0070ms]received the ACK packet from the client
  • [0071ms]response.end()

Possible Explanation:

If the Content-Length header is not set, the data is transferred in "chunked" mode. In "chunked" mode, neither the server nor the client knows the exact length of the data in advance, so the client waits for a while (200ms) in case any further packets follow.

However, this explanation raises another question: why, in the larger-content case, did the client not wait for 200ms (instead, it waited only around 50ms)?

Calvin Zhang
  • but in chunked transfer encoding, there's an explicit ending signal, so "chunked" can't explain this issue. – vilicvane Aug 26 '14 at 02:44
  • The wait time is proportional to the empty bits in the last MTU received; I don't have data to prove it, but it seems a logical way to implement the logic. – Laukik Apr 18 '23 at 12:55