
I have been successful in setting up a PHP FastCGI application fronted by nginx. If I hit nginx directly, everything works as expected. However, in production the traffic is routed through an HAProxy instance, and I can't get that configuration to work.

When going through HAProxy I get 502 Bad Gateway and the following error from nginx:

[info] 7#7: *1 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too (104: Connection reset by peer) while reading upstream, client: 172.18.0.6, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://172.18.0.3:9000", host: "localhost:8888"

Configuration

Here's my docker-compose.yml describing the stack:

version: '2'
services:
    logger:
        image: colstrom/syslog
    proxy:
        build:
            context: .
            dockerfile: Dockerfile-haproxy
        ports:
            - "8888:12000"
    web:
        build:
            context: .
            dockerfile: Dockerfile-web
        ports:
            - "9999:80"
    php:
        build:
            context: .
            dockerfile: Dockerfile-app

Dockerfile-haproxy:

FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EXPOSE 12000

haproxy.cfg:

global
    maxconn 4096
    maxpipes 1024
    log logger:514 local2

defaults
    timeout client 50000
    timeout connect 5000
    timeout server 50000

frontend bak
 bind *:12000
 mode http
 option httplog
 log global
 default_backend bak

backend bak
 mode http
 server web web:80

Dockerfile-web:

FROM nginx
COPY site.conf /etc/nginx/conf.d/default.conf
COPY src /var/www/html

site.conf (nginx config):

server {
    index index.php index.html;
    error_log  /var/log/nginx/error.log info;
    access_log /var/log/nginx/access.log;
    root /var/www/html;

    location / {
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME index.php;
    }
}

The full log output from docker-compose is as follows:

php_1      | 172.18.0.2 -  11/Feb/2017:11:09:19 +0000 "GET /" 200
web_1      | 2017/02/11 11:09:19 [info] 7#7: *1 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too (104: Connection reset by peer) while reading upstream, client: 172.18.0.6, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://172.18.0.3:9000", host: "localhost:8888"
web_1      | 172.18.0.6 - - [11/Feb/2017:11:09:19 +0000] "GET / HTTP/1.1" 200 8005 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36"
logger_1   | Feb 11 11:09:19 a94d638768c9 user.notice root: <150>Feb 11 11:09:19 haproxy[9]: 172.18.0.1:54508 [11/Feb/2017:11:09:19.285] bak bak/web 0/0/1/-1/37 502 8524 - - PH-- 0/0/0/0/0 0/0 "GET / HTTP/1.1"

Seems like there needs to be some additional configuration in HAProxy or nginx but I'm none the wiser as to what.

Updates / Things I've tried

Setting nginx fastcgi_ignore_client_abort on; is one suggested fix I've come across. The net result is the same, but the nginx error is slightly different:

2017/02/11 11:40:35 [info] 7#7: *3 writev() failed (104: Connection reset by peer) while sending to client, client: 172.18.0.5, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://172.18.0.2:9000", host: "localhost:8888"
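
For reference, a minimal sketch of where that directive was placed, assuming it goes in the same location block as the fastcgi_pass in site.conf above:

location / {
    fastcgi_pass php:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME index.php;
    # keep reading from PHP even if the client (here HAProxy) drops the connection
    fastcgi_ignore_client_abort on;
}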

Setting nginx proxy_buffering on; makes no difference.

Setting haproxy mode tcp will correctly serve the page. However, I need mode http because I'm doing L7 load balancing (not shown in this test case). This led me to realise that the HAProxy logs are reporting PH, which I understand to mean a bad response from the upstream.
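
For reference, this is roughly the variant that serves the page correctly: a sketch of the frontend/backend from haproxy.cfg above switched to mode tcp (with option tcplog in place of option httplog). It works, but it skips HTTP parsing entirely, which is why I lose the L7 routing I need:

# TCP mode proxies raw bytes, so there is no HTTP-level (L7) inspection or routing
frontend bak
 bind *:12000
 mode tcp
 option tcplog
 log global
 default_backend bak

backend bak
 mode tcp
 server web web:80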

Is there any way I can get more info on that? If I go straight to nginx the response headers look fine as far as I can tell:

HTTP/1.1 200 OK
Server: nginx/1.11.9
Date: Sat, 11 Feb 2017 12:05:07 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
X-Powered-By: PHP/5.4.45
Expires: Sat, 11 Feb 2017 12:15:07GMT
Cache-Control: public,max-age=600

1 Answer


The PH-- in the HAProxy logs revealed the problem in the end: the HTTP response headers were invalid. Looking again at the headers above, the Expires value (12:15:07GMT) is missing the space before GMT, so it isn't a valid HTTP-date. I just got rid of Date and Expires and everything now works perfectly.
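
If you want to keep those headers rather than dropping them, here is a minimal sketch of an alternative, assuming the app sets them via PHP's header() (this is not what I did; I simply removed them):

<?php
// Emit Cache-Control and a correctly formatted Expires header.
// gmdate('D, d M Y H:i:s') . ' GMT' produces a valid RFC 7231 HTTP-date,
// including the space before "GMT" that was missing in the broken response.
$maxAge = 600;
header('Cache-Control: public, max-age=' . $maxAge);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxAge) . ' GMT');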
