
I ran into a problem while testing an nginx server patched with Cloudflare's quiche implementation of HTTP/3, using curl as the client: when I send multiple consecutive requests for a small HTML page (~1 KB), nginx responds correctly:

    root@cUrlClient:~# ./curl/src/curl https://192.168.19.128?[1-5] -Ik --http3

[1/5]: https://192.168.19.128?1 --> <stdout>
--_curl_--https://192.168.19.128?1
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes


[2/5]: https://192.168.19.128?2 --> <stdout>
--_curl_--https://192.168.19.128?2
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes


[3/5]: https://192.168.19.128?3 --> <stdout>
--_curl_--https://192.168.19.128?3
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes


[4/5]: https://192.168.19.128?4 --> <stdout>
--_curl_--https://192.168.19.128?4
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes


[5/5]: https://192.168.19.128?5 --> <stdout>
--_curl_--https://192.168.19.128?5
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes

If I make a single request for a medium/large HTML file, nginx responds correctly again, but when I make multiple consecutive requests for a medium/large HTML page (>= 30 KB), nginx stops responding after an arbitrary number of requests (normally 2-5). Here's an example of 10 requests for the https://cloudflare-quic.com HTML page (which I downloaded onto my server):

    root@cUrlClient:~# ./curl/src/curl -Ik https://192.168.19.128/cloudflare.html?[1-10] --http3 -v

[1/10]: https://192.168.19.128/cloudflare.html?1 --> <stdout>
--_curl_--https://192.168.19.128/cloudflare.html?1
*   Trying 192.168.19.128:443...
* Sent QUIC client Initial, ALPN: h3-23
* h3 [:method: HEAD]
* h3 [:path: /cloudflare.html?1]
* h3 [:scheme: https]
* h3 [:authority: 192.168.19.128]
* h3 [user-agent: curl/7.67.0-DEV]
* h3 [accept: */*]
* Using HTTP/3 Stream ID: 0 (easy handle 0x5614ee569460)
> HEAD /cloudflare.html?1 HTTP/3
> Host: 192.168.19.128
> user-agent: curl/7.67.0-DEV
> accept: */*
>
< HTTP/3 200
HTTP/3 200
< server: nginx/1.16.1
server: nginx/1.16.1
< date: Mon, 25 Nov 2019 13:53:43 GMT
date: Mon, 25 Nov 2019 13:53:43 GMT
< content-type: text/html
content-type: text/html
< content-length: 106072
content-length: 106072
< vary: Accept-Encoding
vary: Accept-Encoding
< etag: "5ddbdc21-19e58"
etag: "5ddbdc21-19e58"
< alt-svc: h3-23=":443"; ma=86400
alt-svc: h3-23=":443"; ma=86400
< accept-ranges: bytes
accept-ranges: bytes

<
* Excess found: excess = 27523 url = /cloudflare.html (zero-length body)
* Connection #0 to host 192.168.19.128 left intact

[2/10]: https://192.168.19.128/cloudflare.html?2 --> <stdout>
--_curl_--https://192.168.19.128/cloudflare.html?2
* Found bundle for host 192.168.19.128: 0x5614ee56db00 [can multiplex]
* Re-using existing connection! (#0) with host 192.168.19.128
* Connected to 192.168.19.128 (192.168.19.128) port 443 (#0)
* h3 [:method: HEAD]
* h3 [:path: /cloudflare.html?2]
* h3 [:scheme: https]
* h3 [:authority: 192.168.19.128]
* h3 [user-agent: curl/7.67.0-DEV]
* h3 [accept: */*]
* Using HTTP/3 Stream ID: 4 (easy handle 0x5614ee56b2b0)
> HEAD /cloudflare.html?2 HTTP/3
> Host: 192.168.19.128
> user-agent: curl/7.67.0-DEV
> accept: */*
>
* Got h3 for stream 0, expects 4
* Got h3 for stream 0, expects 4
* Got h3 for stream 0, expects 4
* Got h3 for stream 0, expects 4
[...]

It gets stuck on this screen, repeating "Got h3 for stream 0, expects 4". I also noticed, when testing with smaller pages, that the smaller the file, the more requests are fulfilled before nginx stops responding and curl starts printing "Got h3 for stream x, expects y", always with y = x + 4. That matches how QUIC numbers client-initiated bidirectional streams (0, 4, 8, ...): curl is waiting for a response on the next stream while data keeps arriving tagged with the previous stream's ID. Also, access.log and error.log are clean, so it could be some kind of missing parameter in the server configuration, but I'm not sure about it. Does anyone have an idea of what the problem could be?
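
One way to check whether connection reuse is the trigger is to force a fresh QUIC connection per request by running one curl process per URL instead of relying on curl's URL globbing. This is only a diagnostic sketch, using the same binary and paths as above:

    # Each invocation is a separate process, so each request gets its own
    # QUIC connection instead of reusing stream IDs 0, 4, 8, ... on one connection.
    for i in $(seq 1 10); do
        ./curl/src/curl -Ik "https://192.168.19.128/cloudflare.html?$i" --http3
    done

If every request succeeds this way, the hang is tied to multiplexing requests over a reused connection rather than to serving large files as such.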

My config

nginx version:

nginx version: nginx/1.16.1
built by gcc 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2)
built with OpenSSL 1.1.0 (compatible; BoringSSL) (running with BoringSSL)
TLS SNI support enabled
configure arguments: 
--prefix=/root/nginx-1.16.1 
--with-http_ssl_module 
--with-http_v2_module 
--with-http_v3_module 
--with-openssl=../quiche/deps/boringssl 
--with-quiche=../quiche

nginx.conf:

user root;
# you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto; #some last versions calculate it automatically

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FD's then OS settings will be used which is by default 2000
worker_rlimit_nofile 100000;

# log everything from info level up while debugging
# (crit alone would only log critical errors)
error_log logs/error.log crit;
error_log  logs/error.log debug;
error_log  logs/error.log  notice;
error_log  logs/error.log  info;

# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how much clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux -- for testing environment
    use epoll;

    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
    multi_accept on;
}

http {
    # cache informations about FDs, frequently accessed files
    # can boost performance, but you need to test those values
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # access logging can be disabled to boost I/O on HDD; kept on here
    access_log on;

    # copies data between one FD and other from within the kernel
    # faster than read() + write()
    sendfile on;

    # send headers in one piece, it is better than sending them one by one
    tcp_nopush on;

    # don't buffer data sent, good for small data bursts in real time
    tcp_nodelay on;

    # reduce the data that needs to be sent over network -- for testing environment
    gzip on;
    # gzip_static on;
    gzip_min_length 10240;
    gzip_comp_level 1;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;

    # allow the server to close connection on non responding client, this will free up memory
    reset_timedout_connection on;

    # request timed out -- default 60
    client_body_timeout 10;

    # if client stop responding, free up memory -- default 60
    send_timeout 2;

    # server will close connection after this time -- default 75
    keepalive_timeout 30;

    # number of requests client can make over keep-alive -- for testing environment
    keepalive_requests 100000;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    ########################################################
    server {
        access_log  logs/access.log  main;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        gzip on;

        # Enable QUIC and HTTP/3.
        listen 443 quic reuseport;

        # Enable HTTP/2 (optional).
        listen 443 ssl http2;

        ssl_certificate      certificate.pem;
        ssl_certificate_key  key.pem;

        # Enable all TLS versions (TLSv1.3 is required for QUIC).
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

        # Add Alt-Svc header to negotiate HTTP/3.
        add_header alt-svc 'h3-23=":443"; ma=86400';

        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        } 
        ###Limits the maximum number of concurrent HTTP/3 streams in a connection.
        http3_max_concurrent_streams 256;

        ###Limits the maximum number of requests that can be served on a single HTTP/3 connection, 
        ###after which the next client request will lead to connection closing and the need of establishing a new connection.
        http3_max_requests 20000;

        ###Limits the maximum size of the entire request header list after QPACK decompression.
        http3_max_header_size 100000k;

        ###Sets the per-connection incoming flow control limit.
        http3_initial_max_data 2000000m;

        ###Sets the per-stream incoming flow control limit.
        http3_initial_max_stream_data 1000000m;

        ###Sets the timeout of inactivity after which the connection is closed.
        http3_idle_timeout 1500000m;
    }
    ########################################################
}
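
For comparison, here is a minimal sketch of just the HTTP/3-relevant parts of this configuration, using only directives that already appear above (certificate paths as placeholders). Trimming the config down to something like this can help rule out the tuning parameters as the cause:

    events {}

    http {
        server {
            server_name  localhost;

            # Enable QUIC and HTTP/3.
            listen 443 quic reuseport;
            # Keep HTTP/2 on the same port as a fallback.
            listen 443 ssl http2;

            ssl_certificate      certificate.pem;
            ssl_certificate_key  key.pem;
            # TLSv1.3 is required for QUIC.
            ssl_protocols TLSv1.3;

            # Advertise HTTP/3 to clients.
            add_header alt-svc 'h3-23=":443"; ma=86400';

            location / {
                root   html;
                index  index.html index.htm;
            }
        }
    }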

Curl version

curl 7.67.0-DEV (x86_64-pc-linux-gnu) libcurl/7.67.0-DEV BoringSSL zlib/1.2.11 nghttp2/1.39.2 quiche/0.1.0
Release-Date: [unreleased]
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS HTTP2 HTTP3 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets
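
As a sanity check on the client side, the build can be confirmed to advertise HTTP/3 (a trivial check with the same binary):

    # the Features line should list HTTP3 when curl was built with quiche
    ./curl/src/curl -V | grep -i http3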

EDIT

We discussed this issue on the Cloudflare quiche repo and found that it's a known curl problem: GitHub Issue

  • Smells like potentially a curl bug... – Daniel Stenberg Nov 26 '19 at 08:39
  • We tried the same requests on the https://cloudflare-quic.com site directly and we had no problem with multiple consecutive HTTP/3 requests. So if it's a curl bug, shouldn't curl also fail when requesting that website? That's why I think it could be a misconfiguration of our nginx server, but I could be (probably) wrong. – Difettoso Nov 26 '19 at 09:35
