
Summary:

I'm trying to stream large file uploads directly to PHP, with nginx's FastCGI request buffering disabled, so that I can consume the upload data from php://input as soon as it arrives (i.e. as a stream).

Details:

The desired behavior is for nginx to pass each byte (or block of bytes) upstream to php-fpm as it arrives, so that very large file uploads can be processed as quickly as possible (e.g. moving each block of data to S3 as soon as it is available). nginx added support for disabling FastCGI request buffering in 1.7.11 (http://nginx.org/en/CHANGES) via the fastcgi_request_buffering directive.
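The kind of handler this is meant to feed might look like the following minimal sketch; writeChunkToS3() is a hypothetical helper standing in for whatever incremental consumer is used, and the 8 KB chunk size is arbitrary:

```php
<?php
// Minimal sketch of a streaming upload handler: read the request body
// from php://input in fixed-size chunks instead of waiting for the
// whole body to be buffered.
$in = fopen('php://input', 'rb');
$total = 0;
while (!feof($in)) {
    $chunk = fread($in, 8192);    // read up to 8 KB at a time
    if ($chunk === false || $chunk === '') {
        break;
    }
    $total += strlen($chunk);
    // writeChunkToS3($chunk);    // hypothetical: forward each block as it arrives
}
fclose($in);
echo "received $total bytes\n";
```

This only works as a stream if neither nginx nor PHP buffers the full request body first, which is exactly what fails below.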

When I try this, it appears that either nginx isn't honoring the directive or php-fpm is not streaming the data to the worker. Here's what happens with the curl request:

[root@localhost app]# ll -h large.txt 
-rw-r--r--. 1 nginx nginx 307M Nov 10  2015 large.txt
[root@localhost app]# time curl -vs -XPOST --data-binary @large.txt http://127.0.0.1:80
* About to connect() to 127.0.0.1 port 80 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> POST / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1
> Accept: */*
> Content-Length: 321912832
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
< HTTP/1.1 404 Not Found
< Server: nginx
< Date: Fri, 06 Nov 2015 19:18:27 GMT
< Content-Type: text/html
< Content-Length: 162
< Connection: keep-alive
* HTTP error before end of send, stop sending
< 
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Closing connection 0

real    7m43.438s
user    0m0.653s
sys 0m1.219s
[root@localhost app]# 

While doing that, I'm tailing nginx and php-fpm logs:

==> /var/log/php-fpm/www-error.log <==
[06-Nov-2015 12:18:27 America/Denver] PHP Fatal error:  Allowed memory size of 268435456 bytes exhausted (tried to allocate 268173441 bytes) in Unknown on line 0

==> /var/log/nginx/error.log <==
2015/11/06 14:18:27 [error] 31657#0: *1 upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 127.0.0.1, server: myserver.whatever.com, request: "POST / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1"
2015/11/06 14:18:27 [error] 31657#0: *1 open() "/etc/nginx/html/50x.html" failed (2: No such file or directory), client: 127.0.0.1, server: myserver.whatever.com, request: "POST / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1"

==> /var/log/nginx/access.log <==
127.0.0.1 - - [06/Nov/2015:14:18:27 -0500] "POST / HTTP/1.1" 404 187 "-" "curl/7.29.0"

==> /var/log/php-fpm/error.log <==
[06-Nov-2015 14:18:27] WARNING: [pool www] child 31619 exited with code 70 after 479.683142 seconds from start
[06-Nov-2015 14:18:27] NOTICE: [pool www] child 31809 started

So the problem is clear from this line: PHP Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 268173441 bytes) in Unknown on line 0

The question is: is nginx not honoring the fastcgi_request_buffering off; directive, or is the php-fpm process manager failing to stream the data to the PHP worker and instead passing it the whole request body at once?

Alternatively, perhaps I'm missing some php-fpm configuration: http://php.net/manual/en/install.fpm.configuration.php


Here's some info that's relevant:

PHP fpm configs:

[root@localhost nginx]# cat /etc/php-fpm.conf | grep -vE '^;' | grep -v '^$'
include=/etc/php-fpm.d/*.conf
[global]
pid = /run/php-fpm/php-fpm.pid
error_log = /var/log/php-fpm/error.log
daemonize = yes

pool config:

[root@localhost nginx]# cat /etc/php-fpm.d/www.conf | grep -vE '^;' | grep -v '^$'
[www]
user = apache
group = apache
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path]    = /var/lib/php/session
php_value[soap.wsdl_cache_dir]  = /var/lib/php/wsdlcache
php_value[upload_max_filesize] = 100G
php_value[post_max_size] = 100G
php_value[max_execution_time] = 3600
request_terminate_timeout = 3600
request_slowlog_timeout = 60
php_value[max_input_time] = 3600

Versions:

[root@localhost nginx]# php -v
PHP 5.6.15 (cli) (built: Oct 29 2015 14:18:11) 
Copyright (c) 1997-2015 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2015 Zend Technologies
    with Xdebug v2.3.3, Copyright (c) 2002-2015, by Derick Rethans
[root@localhost nginx]# php-fpm -v
PHP 5.6.15 (fpm-fcgi) (built: Oct 29 2015 14:18:34)
Copyright (c) 1997-2015 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2015 Zend Technologies
    with Xdebug v2.3.3, Copyright (c) 2002-2015, by Derick Rethans
[root@localhost nginx]# nginx -v
nginx version: nginx/1.8.0
[root@localhost nginx]# uname -a
Linux localhost.localdomain 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost nginx]# cat /etc/redhat-release 
CentOS Linux release 7.1.1503 (Core)

And nginx config:

server {
    listen       0.0.0.0:80 default_server;
    server_name               myserver.com;
    location / {
        root   /opt/my/app/public;
        index  index.html index.htm index.php;
        try_files $uri $uri/ /index.php?$args;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }

    location ~ \.php$ {
        root /opt/my/app/public;
        try_files      $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_request_buffering off;
        fastcgi_read_timeout 3600;
        include        fastcgi_params;
    }
}
  • The use case (*to be able to get very large file uploads and deal with them as quickly as possible*) is very common, have you tried [to upload without file being passed through backend](https://coderwall.com/p/swgfvw)? It doesn't work for multipart form data, but works for AJAX and direct uploads from non-web form such mobile devices or server-to-server communication (curl --data-binary) – Anatoly Nov 11 '15 at 16:41
  • Anatoly, thanks for the suggestion. Yes, you're right, it's a very common problem. However, I'm also trying to stream each block of data to another storage location and I want to be able to control the destination path. Furthermore, it appears that unless I can support streaming, I am necessarily going to block the client until I am done working with that file data, so I'm looking to be as efficient as possible here. – nicktacular Nov 12 '15 at 20:02
  • Related: http://stackoverflow.com/q/33416924/290338 – Anatoly Nov 13 '15 at 12:43

1 Answer


You need to set php_value[enable_post_data_reading] = Off to disable PHP's own POST body buffering. With this setting you won't be able to use $_FILES or $_POST, but you will be able to read the raw POST body through php://input.
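For the pool config shown in the question (/etc/php-fpm.d/www.conf), that would be one additional line in the [www] section, e.g.:

```ini
; /etc/php-fpm.d/www.conf
[www]
; Disable PHP's POST body buffering so php://input can be read as a stream.
; Note this also means $_POST and $_FILES will not be populated.
php_value[enable_post_data_reading] = Off
```

enable_post_data_reading is changeable per directory (PHP_INI_PERDIR), so it can also be set in php.ini, or via php_admin_value in the pool config if scripts should not be able to override it.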

Ram Chander