We have a Ruby web app that is deployable via a Docker setup, among other things to have exactly the same Ruby environment from the developers' machines to the production server.
We chose to deploy it on AWS via Elastic Beanstalk to take advantage of its auto-scaling configuration and ease of deployment. Beanstalk supports deploying Docker containers.
We chose Phusion Passenger, which must run on the same host as the Ruby app (since it launches it) and is tightly coupled to the web server (nginx or Apache). So nginx + Passenger run inside our Docker container as well.
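For context, a minimal sketch of such a container, assuming the official phusion/passenger-docker base image (the image tag, vhost file name, and app path are illustrative, not our exact setup):

```dockerfile
# Sketch only: nginx + Passenger bundled in one container.
FROM phusion/passenger-full:latest

# The base image ships nginx disabled; removing this file enables it.
RUN rm -f /etc/service/nginx/down

# Hypothetical vhost config and app directory.
ADD webapp.conf /etc/nginx/sites-enabled/webapp.conf
ADD . /home/app/webapp

EXPOSE 80
```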
When we deploy it on Beanstalk, an nginx web server is installed (by Beanstalk) on the EC2 instance as a simple proxy to the container's port 80. (This was actually a bit of a surprise, as it is not clear from the AWS documentation.)
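So the request path ends up as: ELB → host nginx (installed by Beanstalk) → container port 80 → nginx + Passenger inside the container. The only port declaration we supply ourselves is in Dockerrun.aws.json (single-container Docker platform, version 1 format):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    { "ContainerPort": "80" }
  ]
}
```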
1) Isn't this nginx proxy on the EC2 host useless overhead, just proxying port 80 to port 80? With an Elastic Load Balancer (ELB) in front, that makes three web servers chained for a single request.
2) Is this the way it is supposed to work?
One unwanted inconvenience, for instance, is that changing an nginx parameter such as the max request size or the timeout has to be done in both nginx configurations: on the EC2 host (through .ebextensions) and inside the Docker container.
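To illustrate the duplication, here is a hedged sketch of what raising the upload limit on the host nginx looks like via .ebextensions (file path and names are illustrative; they apply to the older Amazon Linux Docker platform, where the host nginx picks up drop-in files from /etc/nginx/conf.d/):

```yaml
# .ebextensions/nginx.config -- adjusts the HOST nginx installed by Beanstalk.
files:
  "/etc/nginx/conf.d/01_upload_size.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      client_max_body_size 50M;

container_commands:
  01_reload_nginx:
    command: "service nginx reload"
```

The same `client_max_body_size 50M;` directive then has to be repeated in the nginx config inside the container, or the inner nginx rejects the request even though the host proxy allows it.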