
This is more of a question to satisfy curiosity.

How do standard HTTP/1.1 stacks compute chunk sizes on an HTTP response socket? Is it timeout-based, max-size-based, dependent on when the application flushes the socket, or an algorithm combining all of these? Is there any open HTTP/1.1 stack implementation guideline available on this?

Thanks in advance.

1 Answer


There is no "standard" HTTP/1.1 stack. Often you have to do it yourself, e.g. make sure a `Transfer-Encoding: chunked` header is sent, then send all the chunks prefixed with their length, and finally send the last, empty chunk.
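A minimal sketch of that sequence, assuming a raw Python socket and a plain-text body (the function names here are illustrative, not from any particular library):

```python
def send_chunk(sock, data: bytes) -> None:
    """Write one chunk: hex-encoded length, CRLF, payload, CRLF."""
    sock.sendall(b"%x\r\n" % len(data) + data + b"\r\n")

def send_chunked_response(sock, body_parts) -> None:
    """Send an HTTP/1.1 response using chunked transfer encoding."""
    sock.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
    )
    for part in body_parts:
        if part:  # a zero-length chunk would terminate the body early
            send_chunk(sock, part)
    sock.sendall(b"0\r\n\r\n")  # last chunk: zero length, then a blank line
```

Note that a zero-length chunk is reserved as the terminator, so empty application writes must be skipped rather than sent as chunks.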

Steffen Ullrich
  • Thanks for replying. In that case I wonder: if I am building an HTTP server socket layer, what should my approach be to effectively computing the chunk size on each socket write, flush, or close activity? – alienfromouterspace Jan 20 '14 at 17:42
  • I would suggest that you give preference to an explicit `Content-Length` in any case. If this is not possible, keep the chunks as large as possible without waiting too long (e.g. 100ms?) for more data from the application. Too-small chunks (e.g. 10 bytes) have too much overhead, while there should be no problems with really large chunks. But that's just how I would do it; I think there is no general rule. – Steffen Ullrich Jan 20 '14 at 19:29
  • Thanks @SteffenUllrich, that was exactly what I was looking for. Though I got it through comment I am accepting your answer. – alienfromouterspace Jan 21 '14 at 09:20
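The heuristic from the comments above (coalesce writes into large chunks, but cap the wait) could be sketched as a small buffer in front of the chunk writer. The 16 KiB maximum and 100 ms interval are example values, not a standard:

```python
import time

class ChunkBuffer:
    """Coalesce application writes into reasonably sized chunks.

    Flushes when the buffer reaches max_size, or when flush_interval
    seconds have passed since the first unflushed byte arrived.
    """

    def __init__(self, write_chunk, max_size=16 * 1024, flush_interval=0.1):
        self.write_chunk = write_chunk  # callback that emits one chunk
        self.max_size = max_size
        self.flush_interval = flush_interval
        self.buf = bytearray()
        self.first_write = 0.0

    def write(self, data: bytes) -> None:
        if not self.buf:
            self.first_write = time.monotonic()
        self.buf.extend(data)
        if (len(self.buf) >= self.max_size
                or time.monotonic() - self.first_write >= self.flush_interval):
            self.flush()

    def flush(self) -> None:
        """Emit everything buffered so far as one chunk (call on close too)."""
        if self.buf:
            self.write_chunk(bytes(self.buf))
            self.buf.clear()
```

A real server would also flush on response end and arm a timer rather than only checking the deadline inside `write`, but the size/timeout trade-off is the same.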