We have three hosts in a Docker Swarm cluster, and a web application deployed in this cluster. The web application runs on only one host at any given time; if that host dies, Swarm moves the web application to another host.
Docker's routing mesh will take care of routing a request to the web application regardless of which host the request hits.
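For context, a single-replica service behind Swarm's ingress routing mesh might be created like this (the service name, image, and ports are placeholders for illustration):

```shell
# Create a single-replica service; Swarm reschedules it if its host dies.
# The routing mesh publishes port 8080 on every node in the swarm and
# forwards traffic to the container regardless of which node runs it.
docker service create \
  --name webapp \
  --replicas 1 \
  --publish published=8080,target=80 \
  example/webapp:latest
```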
To ensure that we always reach a host that is up and running, we thought of putting Nginx in front. We'd create a virtual host in Nginx that proxies requests to the Docker swarm.
We have two approaches for this.
A. We would simply round-robin requests across the hosts.
This is a simple approach. We would use Nginx to take a host out of rotation when it fails. However, even if Nginx receives a 500 error through host 1, the error may actually have been produced by the web application running on host 3 (because the routing mesh forwarded the request there). Nginx would incorrectly conclude that the service is failing on host 1 and take that host out of rotation.
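Approach A could be sketched as an Nginx config like the following. The node addresses and port are placeholders; `max_fails`/`fail_timeout` are Nginx's passive health-check parameters. Note that by default only connection errors and timeouts count as failures; to treat 500 responses as failures (and hit the misattribution problem described above), `http_500` has to be added to `proxy_next_upstream`:

```nginx
upstream swarm {
    # Round-robin (the default) across all three swarm nodes.
    # A node is taken out of rotation for 30s after 3 failed attempts.
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name webapp.example.com;

    location / {
        proxy_pass http://swarm;
        # Retry the next node on connection errors, timeouts, and 500s.
        # Counting 500s here is exactly what can knock out the wrong node,
        # since the 500 may originate from the app on another host.
        proxy_next_upstream error timeout http_500;
    }
}
```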
B. We would direct all requests to the swarm leader.
We do not use Nginx to load balance across the hosts. We simply send all requests to the Docker Swarm leader (we configure Nginx to do this through various scripts). This way we avoid "double" load balancing (both Nginx and Docker Swarm), but all traffic goes through the Docker Swarm leader only.
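Approach B could be sketched like this; the leader's address is a placeholder that the scripts we mentioned would have to rewrite (followed by an Nginx reload) whenever a new leader is elected:

```nginx
upstream swarm_leader {
    # Only the current Swarm leader. An external script rewrites this
    # file and reloads Nginx whenever the leader changes.
    server 10.0.0.1:8080;
}

server {
    listen 80;
    server_name webapp.example.com;

    location / {
        proxy_pass http://swarm_leader;
    }
}
```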
On the one hand, solution A is simple and easy to understand, but it adds complexity in the form of double load balancing (and the health-check misattribution described above). Solution B is more convoluted in the sense that it is less standard, but it may also keep the traffic flow easier to understand.
Which approach should we - from a pure technical perspective - prefer?