
I have an HTTP/1.1 web server implementation that I've written in C++ using Berkeley sockets. I'm looking at implementing support for HTTP/2.0 (or SPDY), which allows for request and response multiplexing:

The binary framing layer in HTTP/2.0 enables full request and response multiplexing, by allowing the client and server to break down an HTTP message into independent frames, interleave them, and then reassemble them on the other end.

My question is as follows: how can I enable HTTP/2.0 (or SPDY) style request and response multiplexing in my existing HTTP/1.1 program, which is written using the Berkeley socket API? Or is the aforementioned frame multiplexing that HTTP/2.0 (and SPDY) support already handled by existing mechanisms in the TCP/IP stack?

Clarification:

I'm specifically interested in the part of multiplexing that uses a single connection to deliver multiple requests and responses in parallel. I don't understand from the specs how this is implemented at the application-protocol level. Any ideas?

  • SPDY is a different protocol -- What are you looking for in an answer other than "implement the protocol"? – janm Jul 13 '14 at 08:39
  • @janm There are many types of multiplexing within the TCP/IP stack; I'm looking for a solution for how to implement the kind of request/response multiplexing that HTTP/2.0 (and SPDY) supports. The rest of the HTTP/2.0 (or SPDY) protocol is not in the scope of the question at hand. Thank you. –  Jul 13 '14 at 08:40
  • SPDY isn't within the TCP/IP stack; it is above TCP, and traditionally it would be considered an application protocol. Its control and data frames are documented in the draft spec. You implement multiplexing by implementing the protocol. Have you read the protocol draft? – codenheim Jul 13 '14 at 09:27
  • @mrjoltcola Yes, I'm aware that HTTP/2.0 (or SPDY) is not part of the TCP/IP stack, and I never implied that it was. Could you add a link to the protocol documentation for the multiplexing you mentioned? -TIA –  Jul 13 '14 at 09:34
  • Pardon my confusion but the last sentence in your post seemed to imply that to me. Anyway, see my answer. – codenheim Jul 13 '14 at 09:44

1 Answer


No, the TCP stack doesn't handle any of this, because SPDY isn't part of the TCP/IP stack; it sits above TCP and is traditionally considered an application protocol. Its control and data frames are documented in the draft spec. You implement multiplexing by implementing the protocol. The TCP stack knows nothing about HTTP or SPDY.

In short, SPDY consists of frames within a single TCP connection, each carrying a fairly simple header with a stream id and frame length, among other things. You have to implement that framing to multiplex. You should be able to implement it all with standard SSL/TLS-enabled socket code.
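To make the framing concrete, here is a rough sketch in C++ of what such a frame might look like on the wire. The layout below (a 4-byte stream id and a 4-byte payload length, both in network byte order) is a simplified illustration loosely modeled on the drafts, not the exact SPDY or HTTP/2 wire format; `FrameHeader` and `encode_frame` are names invented for this example.

```cpp
// Hypothetical, simplified frame layout: a fixed 8-byte header carrying a
// stream id and payload length, followed by the payload bytes. Field sizes
// are illustrative, not the real SPDY/HTTP/2 wire format.
#include <cstdint>
#include <cstring>
#include <vector>
#include <arpa/inet.h>   // htonl/ntohl

struct FrameHeader {
    uint32_t stream_id;  // which logical request/response this frame belongs to
    uint32_t length;     // number of payload bytes that follow the header
};

// Serialize one frame (header + payload) into a contiguous buffer so it can
// be handed to send()/write() as a single unit on the shared connection.
std::vector<uint8_t> encode_frame(uint32_t stream_id,
                                  const uint8_t* payload, uint32_t len) {
    std::vector<uint8_t> buf(sizeof(FrameHeader) + len);
    uint32_t sid = htonl(stream_id);  // network byte order on the wire
    uint32_t n   = htonl(len);
    std::memcpy(buf.data(),     &sid, 4);
    std::memcpy(buf.data() + 4, &n,   4);
    std::memcpy(buf.data() + 8, payload, len);
    return buf;
}
```

Responses for different streams are then interleaved simply by writing their frames one after another on the same socket, each frame tagged with its stream id.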

As far as I know, this is the spec:

http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft2

codenheim
  • What about the part of multiplexing that [_uses a single connection to deliver multiple requests and responses in parallel_](http://chimera.labs.oreilly.com/books/1230000000545/ch12.html#REQUEST_RESPONSE_MULTIPLEXING)? I don't understand from the specs how this is implemented in the application-level protocol. Any ideas? –  Jul 13 '14 at 10:16
  • @IngeHenriksen It just keeps the connection open and sends another request. No mystery. – user207421 Jul 13 '14 at 11:37
  • 3
    @IngeHenriksen The link you gave shows how SPDY allows multiple "streams" to exist concurrently using a single TCP connection. This is done by the SPDY framing identifying which data belongs to which stream. The confusion might come from the term "in parallel" -- In the single TCP connection the frames from the different streams must appear one after another but it is not required that one stream complete before another starts. – janm Jul 13 '14 at 12:14
  • 3
    So, you read frames one frame at a time in order, looking at each frame header to know which stream it belongs to, buffering/parsing each frame's data as needed. When you reach the last frame for a request on a given stream, process that stream's buffer and send a response as needed, similarly framing the response data in parallel with other streams' responses. You need to implement parallel processing of streams, not parallel processing of frames. The framing needs to be serialized so frames do not overlap in each direction. Send a frame for one stream, send a frame from another stream, etc. – Remy Lebeau Jul 13 '14 at 18:18
  • @RemyLebeau So if a send fails to send all of the framed stream in one go, do I need to re-frame the remaining bytes and possibly attach a part number for the chunk? And on the receiver side, do I parse the data by stripping each frame header to get the stream? – RCECoder Apr 11 '18 at 07:14
  • 1
    @amaninlove if a send actually fails, the state of the TCP connection is indeterminate, so the only viable option is to close the connection and start over with a new connection. But, if a send merely sends fewer bytes than requested, then yes, you have to resend the remaining bytes, and no, you do not need to re-frame the data. Simply finish sending the frame you have started sending. Do not start sending a new frame until a previous frame is finished. Same on the receiving end. Read a full frame, however many reads it takes to finish, then process the frame, then read the next frame, etc. – Remy Lebeau Apr 11 '18 at 15:12
  • @Remy Lebeau Thank you for your answer. However, what you explained is subject to head-of-line blocking. I was pointing to a case where multiple commands are exchanged over a single connection at the same time, without waiting for one transfer to finish before firing another request; for instance, you request a file download while a ping is issued by a timer at the same time. Obviously this is an issue over a single TCP connection, because the data might overlap. What is the ideal solution to this over a single TCP connection? – RCECoder Apr 11 '18 at 21:18
  • @amaninlove Re-read what I have already commented before, it applies to multiplexing multiple commands over a single TCP connection. Multiplexing requires framing, and each frame must be sent/received in full before sending/receiving the next frame, do not overlap frames. In your example, you would have a separate frame for a ping request, a ping response, a file request, and each chunk of file data. You can intermingle frames for different commands (send a ping request, read a file chunk, read ping response, read a file chunk, etc), just don't overlap the frames or you will corrupt them. – Remy Lebeau Apr 11 '18 at 21:37
  • @Remy Lebeau Yes, but wouldn't the data get interleaved? I mean, imagine the server is sending a requested file whose path is in the frame, and a timer issues a ping in between. On the receiver's side, one WSARecv call may get part of the file download, another WSARecv may get some or all of the ping command, then another part of the file data. I have personally tried that, and the data gets mixed up. – RCECoder Apr 11 '18 at 22:29
  • 1
    @amaninlove that means you are not managing your reads/sends correctly. Go re-read my earlier comments again more carefully, I'm tired of repeating myself. You CANNOT do things the way you have described. You MUST send a frame in full (it may require multiple writes) before sending a new frame. You MUST receive a frame in full (it may require multiple reads) before receiving the next frame. If the sender wants to send a new frame while an earlier frame is still busy being sent, it will have to delay the new frame until a later time. Put it in a queue, and then send queued frames in order. – Remy Lebeau Apr 11 '18 at 22:49
  • @amaninlove TCP is a byte stream, it has no concept of message boundaries. You are responsible for managing message boundaries yourself. *EVERY* TCP socket developer has to deal with this correctly. Multiplexing handles that by wrapping messages in frames, allowing the receiver to know where one frame ends and next begins. If `WSARecv` receives portions of multiple frames, you are responsible for buffering the data and breaking it up at the frame boundaries yourself. Otherwise, use a library that handles it for you – Remy Lebeau Apr 11 '18 at 22:50
  • Thanks for your explanation; the ordering is what I was missing. I know about delaying and queueing frames, though with separate connections it isn't needed. I thought data could be sent the way I described; that was my misunderstanding of HTTP/2. I've seen your answers and you seem really professional. I couldn't ask a question because my account is banned. Sorry for the inconvenience this may have caused. – RCECoder Apr 11 '18 at 22:53
  • @amaninlove "*I could not ask a question because my account is banned*" - there is probably a good reason for that. If you don't agree with it, ask the admins about it. – Remy Lebeau Apr 11 '18 at 22:56
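The receive-side discipline described in the comments above (read complete frames one at a time, buffer partial data, and append each frame's payload to the buffer for its stream) can be sketched roughly as follows. The 8-byte header layout (4-byte stream id plus 4-byte length, both big-endian) and the `Demuxer` type are illustrative assumptions for this example, not the actual SPDY or HTTP/2 format.

```cpp
// Hypothetical receive-side demultiplexer: accumulate whatever recv() returns,
// then peel off complete frames one at a time. Frames never overlap on the
// wire, so partial reads only ever split a frame, never interleave two.
#include <cstdint>
#include <cstring>
#include <map>
#include <vector>
#include <arpa/inet.h>   // ntohl

struct Demuxer {
    std::vector<uint8_t> pending;                     // unparsed bytes from recv()
    std::map<uint32_t, std::vector<uint8_t>> streams; // per-stream reassembly buffers

    // Call with whatever recv() returned; extracts every complete frame and
    // leaves any trailing partial frame buffered for the next call.
    void feed(const uint8_t* data, size_t len) {
        pending.insert(pending.end(), data, data + len);
        for (;;) {
            if (pending.size() < 8) return;           // header not complete yet
            uint32_t sid, plen;
            std::memcpy(&sid,  pending.data(),     4);
            std::memcpy(&plen, pending.data() + 4, 4);
            sid  = ntohl(sid);
            plen = ntohl(plen);
            if (pending.size() < 8 + plen) return;    // payload not complete yet
            auto& s = streams[sid];                   // dispatch payload to its stream
            s.insert(s.end(), pending.begin() + 8, pending.begin() + 8 + plen);
            pending.erase(pending.begin(), pending.begin() + 8 + plen);
        }
    }
};
```

Because each payload is routed by its stream id, frames from different streams can alternate freely on the one connection while each stream's data reassembles in order, which is the "parallel" delivery the question asks about.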