
I'd love to know if this is actually possible, but I'm sure I've seen it demonstrated by one of our old AWS TAMs.

I am serving a PHP application from PHP-FPM containers (port 9000) running in ECS. I am looking at replacing the nginx box(es) with just an ALB.

Essentially, requests over port 80 into the ALB should execute the application's entrypoint at port 9000 with the original request data.

I have tried messing around with the target groups but am unable to work out how to perform the same ProxyPass functionality that nginx provides.
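For reference, the relevant part of the nginx configuration I am trying to replace looks roughly like this (simplified; the upstream hostname php-fpm and the docroot are placeholders for my actual values):

    server {
        listen 80;
        root /var/www/html/public;          # placeholder docroot
        index index.php;

        location ~ \.php$ {
            # nginx translates the incoming HTTP request into FastCGI for PHP-FPM
            fastcgi_pass php-fpm:9000;      # placeholder upstream hostname
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }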

Is this possible? And, if so, how?

Wildcard27

1 Answer


I'd love to know if this is actually possible, but I'm sure I've seen it demonstrated by one of our old AWS TAMs.

I am also looking forward to a solution for this.

In my understanding, PHP-FPM behind NGINX is still the easiest solution, for the following reasons:

  1. FastCGI is a binary protocol for interfacing interactive programs with a web server; it is not HTTP. Hence port 9000 exposed by PHP-FPM cannot be placed directly behind an ALB/ELB, whose listeners only speak HTTP(S) (see the sketch after this list).
  2. PHP's Built-in web server should not be used in production environments.
  3. It's bad practice to have the same process act as both web server and application server: the application server's resources get hogged by web-serving duties and vice versa. Each server has its strengths. We use NGINX because it is battle-tested as a web server, and PHP-FPM because it is the primary PHP FastCGI implementation. We shouldn't use an AK-47 to kill a mouse; we should employ a mouse trap.
  4. Django + Gunicorn apps behind an AWS ELB work smoothly until a slow client starts sending requests. NGINX makes it easy to deal with slow clients because it buffers the request and forwards it to Gunicorn only once it is complete (all TCP packets received). Ref: Gunicorn deployment docs. The same applies to PHP-FPM.
  5. NGINX serves static files with ease and compresses responses using gzip. That said, static files should ideally be served from an object store like S3.
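To illustrate, here is a rough sketch of the nginx sidecar configuration I have in mind. It assumes nginx and PHP-FPM run in the same ECS task (so PHP-FPM is reachable on 127.0.0.1:9000) and that the ALB target group points at nginx on port 80; the docroot is a placeholder:

    events {}

    http {
        gzip on;                                # point 5: compress responses
        gzip_types text/css application/javascript application/json;

        server {
            listen 80;                          # the ALB target group forwards here
            root /var/www/html/public;          # placeholder docroot
            index index.php;

            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }

            location ~ \.php$ {
                # point 1: speak FastCGI to PHP-FPM, HTTP to the ALB
                fastcgi_pass 127.0.0.1:9000;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

                # point 4: nginx buffers slow clients; the request body is
                # passed to PHP-FPM only once it has been received in full
                fastcgi_request_buffering on;   # the default, shown for clarity
            }
        }
    }

Any static assets not offloaded to S3 are served straight from the docroot by nginx without ever touching PHP-FPM.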
K M