
I've been facing some issues with nginx and PUT redirects: Let's say I have an HTTP service sitting behind an nginx server (assume HTTP 1.1)

The client sends a PUT /my/api with Expect: 100-continue. My service never sends 100 Continue; it replies immediately with a 307 redirect to another endpoint (in this case, S3). However, nginx for some unknown reason sends a 100 Continue of its own before relaying the redirect, so the client uploads the entire body to nginx before it ever sees the redirect. The client then effectively transfers the body twice - which isn't great for multi-gigabyte uploads.

I am wondering if there is a way to:

  • Prevent nginx from sending 100 Continue unless the upstream service actually sends one.
  • Allow requests with an arbitrarily large Content-Length without setting client_max_body_size to a huge value (to avoid 413 Request Entity Too Large).

Since my service only ever sends redirects and never sends 100 Continue, the request body is never supposed to reach nginx. Having to set client_max_body_size and wait for nginx to buffer the whole body just to serve a redirect is quite suboptimal.
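For reference, the closest directives I've found in the nginx documentation are sketched below. This is an assumed mitigation, not a verified fix - in particular I haven't found any directive that stops nginx from answering the Expect header itself:

```nginx
# Sketch of a possible mitigation - assumptions, not a verified fix.
location / {
    # 0 disables the body-size check entirely, avoiding the 413
    # without having to guess an upper bound.
    client_max_body_size 0;

    # Stream the request body to the upstream instead of buffering it
    # to disk first, so the redirect can be relayed while the upload
    # is still in flight.
    proxy_request_buffering off;

    # Unbuffered proxying needs HTTP/1.1 to the upstream for chunked
    # transfer encoding.
    proxy_http_version 1.1;

    proxy_pass http://127.0.0.1:9999/;
}
```

Even with these, nginx still appears to answer Expect: 100-continue on its own, so the client starts uploading anyway; the directives only avoid the 413 and the disk buffering.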

I've been able to get this behavior with Apache, but not with nginx. Apache used to have the same problem before it was fixed: https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 - I'm wondering if nginx has the same issue.

Any pointers appreciated :)

EDIT 1: Here's a sample setup to reproduce the issue:

  • An nginx listening on port 80, forwarding to localhost on port 9999
  • A simple HTTP server listening on port 9999, that always returns redirects on PUTs
  1. nginx.conf
worker_rlimit_nofile 261120;
worker_shutdown_timeout 10s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
 server { 
  listen 80;
  server_name frontend;
  keepalive_timeout  75s;
  keepalive_requests 100;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:9999/;
  }
 }
}

I'm running the above with

docker run --rm --name nginx --net=host -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.21.1
  2. Simple python3 HTTP server.
#!/usr/bin/env python3

import sys
from http.server import HTTPServer, BaseHTTPRequestHandler

class Redirect(BaseHTTPRequestHandler):
   def do_PUT(self):
       self.send_response(307)
       self.send_header('Location', 'https://s3.amazonaws.com/test')
       self.end_headers()

HTTPServer(("", 9999), Redirect).serve_forever()

Test results:

  • Uploading directly to the python server works as expected. The python server does not send a 100 Continue on PUTs - it sends the 307 redirect immediately, before seeing the body.
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:9999/test
> PUT /test HTTP/1.1
> Host: 127.0.0.1:9999
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
> 
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 307 Temporary Redirect
< Server: BaseHTTP/0.6 Python/3.9.2
< Date: Thu, 15 Jul 2021 10:16:44 GMT
< Location: https://s3.amazonaws.com/test
< 
* Closing connection 0
* Issue another request to this URL: 'https://s3.amazonaws.com/test'
*   Trying 52.216.129.157:443...
* Connected to s3.amazonaws.com (52.216.129.157) port 443 (#1)
> PUT /test HTTP/1.0
> Host: s3.amazonaws.com
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> 
  • Doing the same thing through nginx fails with 413 Request Entity Too Large - even though the body should never go through nginx.
  • After adding client_max_body_size 1G; to the config, the 413 goes away, but nginx answers the Expect itself and buffers the whole body:
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:80/test
*   Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> PUT /test HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
} [65536 bytes data]
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Thu, 15 Jul 2021 10:22:08 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
< 
{ [157 bytes data]
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>

Notice how nginx sends an HTTP/1.1 100 Continue itself. With this simple python server, the request subsequently fails because the server closes the connection right after serving the redirect, which causes nginx to serve a 502 due to the resulting broken pipe:

127.0.0.1 - - [15/Jul/2021:10:22:08 +0000] "PUT /test HTTP/1.1" 502 182 "-" "curl/7.74.0"
2021/07/15 10:22:08 [error] 31#31: *1 writev() failed (32: Broken pipe) while sending request to upstream, client: 127.0.0.1, server: frontend, request: "PUT /test HTTP/1.1", upstream: "http://127.0.0.1:9999/test", host: "127.0.0.1"
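The 502 itself is a side effect of the test server, not of nginx: the handler never reads the body, so nginx's writev() hits a closed socket. A variant of the server that drains the request body before replying avoids the broken pipe (while of course defeating the whole point, since the body is transferred anyway) - a sketch to illustrate where the 502 comes from:

```python
#!/usr/bin/env python3
# Variant of the test server that drains the request body before
# replying, so the proxy never hits a broken pipe. Illustration only -
# the body still crosses the wire, which is exactly what we want to avoid.
from http.server import HTTPServer, BaseHTTPRequestHandler

class DrainingRedirect(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, so the proxy can reuse the connection

    def do_PUT(self):
        # Read and discard the declared body in chunks.
        remaining = int(self.headers.get("Content-Length", 0))
        while remaining > 0:
            chunk = self.rfile.read(min(remaining, 65536))
            if not chunk:
                break
            remaining -= len(chunk)
        self.send_response(307)
        self.send_header("Location", "https://s3.amazonaws.com/test")
        self.send_header("Content-Length", "0")
        self.end_headers()

# To run it like the original: HTTPServer(("", 9999), DrainingRedirect).serve_forever()
```

With this handler, nginx proxies the upload cleanly and relays the 307, but only after the client has uploaded the full body to nginx - the double transfer remains.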

So as far as I can see, this is exactly the same issue as the Apache bug https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 (which is fixed in newer Apache versions). I am not sure how to work around it with nginx.

kiv
  • Please share your NGINX configuration. I will have a look on it and make some suggestions to work arround this issues. – Timo Stark Jun 29 '21 at 05:01
  • @TimoStark just saw your message. Thanks so much for lending a hand, very appreciated :) I have edited my post with nginx configuration and test setup. – kiv Jul 15 '21 at 10:28
  • Were you able to find a solution here? – LinusGeffarth Feb 18 '23 at 10:01
  • 1
    @LinusGeffarth sorry just saw this. The above person did not respond - I think I ended up finding one config to try, but never got around to it. We lived with the double-buffering for a while, then we eventually didn't need that nginx server anymore. I'd be interested to know if you found something though. – kiv Jul 28 '23 at 01:06
  • No, we had to change our integration to handle this header on our end @kiv – LinusGeffarth Aug 17 '23 at 10:06

0 Answers