
When looking at a client->server interaction for fetching images, I see the following TCP packet from the client containing 2 HTTP GET requests, and I am not sure how the server would respond to such requests:

  1. Will the server ignore the second GET request?
  2. Will the server send the response one by one to each GET request?
  3. This doesn't seem to be HTTP pipelining. Please advise if it is.

    Transmission Control Protocol, Src Port: 59649 (59649), Dst Port: 8080 (8080), Seq: 1, Ack: 1, Len: 648
        Source Port: 59649
        Destination Port: 8080
        [Stream index: 86]
        [TCP Segment Len: 648]
        Sequence number: 1    (relative sequence number)
        [Next sequence number: 649    (relative sequence number)]
        Acknowledgment number: 1    (relative ack number)
        Header Length: 32 bytes
        Flags: 0x018 (PSH, ACK)
            000. .... .... = Reserved: Not set
            ...0 .... .... = Nonce: Not set
            .... 0... .... = Congestion Window Reduced (CWR): Not set
            .... .0.. .... = ECN-Echo: Not set
            .... ..0. .... = Urgent: Not set
            .... ...1 .... = Acknowledgment: Set
            .... .... 1... = Push: Set
            .... .... .0.. = Reset: Not set
            .... .... ..0. = Syn: Not set
            .... .... ...0 = Fin: Not set
            [TCP Flags: *******AP***]
        Window size value: 683
        [Calculated window size: 43712]
        [Window size scaling factor: 64]
        Checksum:  [validation disabled]
            [Good Checksum: False]
            [Bad Checksum: False]
        Urgent pointer: 0
        Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps
            No-Operation (NOP)
                Type: 1
                    0... .... = Copy on fragmentation: No
                    .00. .... = Class: Control (0)
                    ...0 0001 = Number: No-Operation (NOP) (1)
            No-Operation (NOP)
                Type: 1
                    0... .... = Copy on fragmentation: No
                    .00. .... = Class: Control (0)
                    ...0 0001 = Number: No-Operation (NOP) (1)
            Timestamps: TSval 6345, TSecr 6344
                Kind: Time Stamp Option (8)
                Length: 10
                Timestamp value: 6345
                Timestamp echo reply: 6344
        [SEQ/ACK analysis]
            [iRTT: 0.000099000 seconds]
            [Bytes in flight: 648]
    
    Hypertext Transfer Protocol
        GET  HTTP/1.1\r\n
            [Expert Info (Chat/Sequence): GET  HTTP/1.1\r\n]
                [GET  HTTP/1.1\r\n]
                [Severity level: Chat]
                [Group: Sequence]
            Request Method: GET
            Request URI: 
            Request Version: HTTP/1.1
        Host: \r\n
        sent: \r\n
        User-Agent: \r\n
        Accept-Encoding: gzip, deflate\r\n
        Accept-Language: en-GB,*\r\n
        Connection: keep-alive\r\n
        \r\n
        [Full request URI: ]
        [HTTP request 2/2]
        [Prev request in frame: 1254]
        [Response in frame: 1272]
    
    Hypertext Transfer Protocol
        GET  HTTP/1.1\r\n
            [Expert Info (Chat/Sequence): GET  HTTP/1.1\r\n]
                [GET  HTTP/1.1\r\n]
                [Severity level: Chat]
                [Group: Sequence]
            Request Method: GET
            Request URI: 
            Request Version: HTTP/1.1
        Host: \r\n
        sent: \r\n
        User-Agent: \r\n
        Accept-Encoding: gzip, deflate\r\n
        Accept-Language: en-GB,*\r\n
        Connection: keep-alive\r\n
        \r\n
        [Full request URI: ]
        [HTTP request 2/2]
        [Prev request in frame: 1254]
        [Response in frame: 1272]
    

Are there any online tools that I can use to test such requests?

1 Answer


It is perfectly acceptable for multiple HTTP requests to be in a single TCP packet, if they fit.

What you are seeing is indeed HTTP pipelining, which is covered by RFC 2616 Section 8.1.2.2 and RFC 7230 Section 6.3.2 of the HTTP 1.1 spec. The client is sending a new GET request without first waiting for the response to a previous GET request. That is the very definition of pipelining:

HTTP requests and responses can be pipelined on a connection. Pipelining allows a client to make multiple requests without waiting for each response, allowing a single TCP connection to be used much more efficiently, with much lower elapsed time.
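
For illustration, here is a minimal Python sketch of what a pipelining client does on the wire: it writes two complete GET requests to the socket before reading anything back, just like your capture. The host, port, and request paths are placeholders, since the URIs were stripped from the dump above.

    import socket

    # Two complete GET requests back to back; the paths and Host are placeholders.
    requests = (
        b"GET /a.png HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n"
        b"GET /b.png HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n"
    )

    with socket.create_connection(("localhost", 8080)) as sock:
        # Both requests leave in one send() call, before any response is read
        # (pipelining); a payload this small will usually fit one TCP segment.
        sock.sendall(requests)

        # Both responses come back on the same connection, in request order.
        sock.settimeout(2.0)
        reply = b""
        try:
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                reply += chunk
        except socket.timeout:
            pass

    print(reply.decode("latin-1", "replace"))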

TCP is just optimizing things by using a single TCP packet for both HTTP requests. The client likely has send coalescing (Nagle's algorithm) enabled, which most socket libraries enable by default, to reduce network traffic.
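
For what it's worth, that coalescing can be switched off per socket with the TCP_NODELAY option; a short sketch, assuming the same local test server and placeholder requests as above:

    import socket

    sock = socket.create_connection(("localhost", 8080))  # assumed test server

    # Disable Nagle's algorithm (send coalescing): each small send() is then
    # pushed onto the wire immediately instead of possibly being held back and
    # merged with later writes while earlier data is still unacknowledged.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    sock.sendall(b"GET /a.png HTTP/1.1\r\nHost: localhost\r\n\r\n")
    sock.sendall(b"GET /b.png HTTP/1.1\r\nHost: localhost\r\n\r\n")
    sock.close()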

In order for the server to respond to pipelined requests, a persistent connection MUST be used, which is another requirement of pipelining, and is clearly visible in your example (the Connection: keep-alive request header).

TCP is a byte stream; the lower-level TCP framing does not matter to the higher-level protocol layers. A properly written HTTP receiver will be able to separate the individual HTTP messages regardless of the TCP framing used, and process them individually as needed. The HTTP 1.1 spec requires all requests to be responded to in the same order that they were received (HTTP/2 changes that, but that is a much more involved process to handle - multiplexing - which I won't get into).
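
As a rough sketch of that receiver-side logic (not lighttpd's actual code): buffer the stream and carve off one complete request at a time at the blank line that ends each header block. Requests without a message body are assumed here, as with your GETs; a real parser must also honor Content-Length and chunked bodies.

    def split_requests(buffer: bytes):
        """Carve complete request heads off the front of `buffer`.

        Returns (requests, leftover). Assumes requests carry no message body,
        as with the GETs above."""
        requests = []
        while True:
            end = buffer.find(b"\r\n\r\n")
            if end == -1:
                break  # incomplete request: keep buffering until more arrives
            requests.append(buffer[:end + 4])
            buffer = buffer[end + 4:]
        return requests, buffer

    # Two pipelined GETs arriving in a single read (placeholder paths):
    data = (
        b"GET /a.png HTTP/1.1\r\nHost: localhost\r\n\r\n"
        b"GET /b.png HTTP/1.1\r\nHost: localhost\r\n\r\n"
    )
    reqs, leftover = split_requests(data)
    print(len(reqs))   # 2 -> both requests recovered from one chunk of the stream
    print(leftover)    # b'' -> nothing left over to buffer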

So, to answer your questions:

  1. Will the server ignore the second GET request? - NO

  2. Will the server send the response one by one to each GET request? - YES

  3. This doesn't seem to be HTTP pipelining. Please advise if it is. - IT IS, but not for the reason you are thinking.

Remy Lebeau
  • Thanks Remy, that's really helpful. But if the client sends 2 concatenated HTTP requests, the server is not able to see the second HTTP request at all. I am using lighttpd for the experiment, and that supports pipelining. What could be the reason behind it? – Muralitharan Perumal Jan 17 '19 at 17:05
  • @MuralitharanPerumal I can't answer that. You will have to ask the lighttpd author about it. Sounds like a logic bug in that software. I have my own http server implementation and it handles concatenated pipelined requests just fine. – Remy Lebeau Jan 17 '19 at 17:29
  • Thanks for the quick response. Can you please share any reference code that implements concatenated requests? – Muralitharan Perumal Jan 17 '19 at 18:30
  • I am looking at the lighttpd code (connections.c): https://github.com/lighttpd/lighttpd1.4/blob/master/src/connections.c#L804 [where we read the whole request, i.e., in this case the concatenated HTTP requests] and https://github.com/lighttpd/lighttpd1.4/blob/master/src/connections.c#L816 [where we read only one header]. Am I missing something? Any inputs will be helpful. – Muralitharan Perumal Jan 17 '19 at 18:50
  • *ALL* TCP-based software must deal with the fact that TCP is just a byte stream and has no concept of messaging. TCP packets contain arbitrary data. Higher level protocols define what their messages look like. A single TCP packet may contain (a piece of) a single message, or it may contain (pieces of) multiple messages. It may take multiple TCP packets to complete a full message. It is the receiver's job to buffer the raw data and piece the messages back together according to their structure. And HTTP has a very well-defined message structure. – Remy Lebeau Jan 17 '19 at 19:39
  • If lighttpd can't piece the HTTP messages back together correctly when pipelining is used then it has faulty logic. I'm not going to go digging through its source code trying to decipher its logic to fix it. That is not my job. That is the author's job. Report the problem and move on. Or fix it yourself and submit a patch to the author, if you want. – Remy Lebeau Jan 17 '19 at 19:39
  • Thanks. After going through the lighttpd code, it indeed supports reading concatenated requests on a keep-alive connection, but the application voluntarily closes the connection, so it is a design limitation. – Muralitharan Perumal Jan 18 '19 at 16:57