
Suppose you have two services in your topology:

  1. API
  2. Web Interface

Both are supposed to run on port 80.

In Docker swarm mode, when you create a service and want to access it from outside the cluster, you need to publish a port, mapping the service's port to an external port on the nodes. But if you map port 80 to, say, the API service, then you can't map the same port for the Web Interface service, since it is already taken.
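For illustration, a minimal sketch of the conflict (image names are hypothetical):

    docker service create -p 80:80 --name api my-api-image
    # the second publish fails: port 80 is already in use by the first service
    docker service create -p 80:80 --name web my-web-image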

How can this be solved?

As far as I can see, this use case is not supported. That means that even if you want to have one big swarm cluster and throw all your services and applications into it, it will not be possible because of this behavior.

Am I missing something?

Any pattern to solve this?

bitgandtter
  • Confused by the question. There's nothing docker specific about not being able to map 2 things to the same port. They would run on port 80 inside your container but you would map them to different external ports. – johnharris85 Jul 23 '16 at 03:57
  • @JHarris Yes, I edited the main question to specify external ports. But that is the concern: after you map the API service's port 80 to external port 80, it can't be mapped again for the Web Interface service – bitgandtter Jul 23 '16 at 04:21
  • Correct, how do you solve this without docker? – johnharris85 Jul 23 '16 at 14:07
  • @JHarris Without docker we may have a set of cloud instances running the API service on port 80 behind a load balancer (which can be another cloud instance running nginx, or a cloud load balancer like AWS ELB), and at the same time the same setup for a set of Web Interface instances, all of them running on port 80. The question is: how can we achieve that with docker swarm mode? – bitgandtter Jul 23 '16 at 14:37

3 Answers


You can look into Docker Flow: Proxy as an easy-to-configure reverse proxy.

BUT, I believe, as other commenters have pointed out, the Docker 1.12 swarm mode has a fundamental problem with multiple services exposing the same port (like 80 or 8080). It boils down (I THINK) to the mesh-routing magic, which is a layer 4 thing, meaning basically TCP/IP - in other words, IP address + port. So things get messy when multiple services are listening on (for example) port 8080. The mesh router will happily deliver traffic going to port 8080 to any service that exposes that port.

You CAN isolate things from each other using overlay networking in swarm mode, BUT the problem comes in when you have to connect services to the proxy's overlay network - at that point it looks like things get mixed up (and this is where I am now having difficulties).

The solution I have at this point is to let the services that need to be exposed to the net use ports that are unique as far as the proxy-facing (overlay) network is concerned (they do NOT have to be published to the swarm!), and then use something like the Docker Flow Proxy to handle incoming traffic on the desired port.

Quick sample to get you started (roughly based on this):

    docker network create --driver overlay proxy
    docker network create --driver overlay my-app

    # app1 exposes port 8081
    docker service create --network proxy --network my-app --name app1 myApp1DockerImage

    docker service create --name proxy \
        -p 80:80 \
        -p 443:443 \
        -p 8080:8080 \
        --network proxy \
        -e MODE=swarm \
        vfarcic/docker-flow-proxy

    # app2 exposes port 8080
    docker service create --network proxy --network my-app --name app2 myApp2DockerImage

You then configure the reverse proxy as per its documentation.
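For example, registering app1 with the proxy might look something like the following (the domain is hypothetical; check the Docker Flow Proxy documentation for the exact parameters):

    # tell the proxy to route requests for app1.example.com to app1 on port 8081
    curl "http://<proxy-host>:8080/v1/docker-flow-proxy/reconfigure?serviceName=app1&serviceDomain=app1.example.com&port=8081"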

NOTE: I see now there is a new AUTO configuration available - I have not yet tried it.

End result if everything worked:

  • proxy listening on ports 80 and 443 (and 8080 for its config calls, so keep that OFF the public net!)
  • proxy forwards to the appropriate service, based either on service domain or service path (I had issues with service path)
  • services can communicate internally over the isolated overlay network.
  • services do not publish ports unnecessarily to the swarm

[EDIT 2016/10/20]

Ignore all the stuff above about issues with the same exposed port on the same overlay network attached to the proxy.

I tore down my whole setup and started again - everything is working as expected now: I can access multiple (different) services on port 80, using different domains, via the Docker Flow Proxy.

I am also using the auto-configuration mentioned above - everything is working like a charm.

demaniak
  • From the docker flow proxy repo: "This project needs adoption. I (@vfarcic) moved to Kubernetes and cannot dedicate time to this project anymore. Similarly, involvement from other contributors dropped as well. " Any alternatives? – Yuki May 03 '20 at 15:01
  • Love it, hate it - k8s won. I am not aware of any alternatives currently (except to move to k8s). – demaniak May 04 '20 at 19:30

If you need to expose both the API and the Web interface to the public, you have two options. Either use different ports for the services:

http://my-site.com       # Web interface
http://my-site.com:8080  # API

or use a proxy that listens on port 80 and forwards requests to the correct service according to the path:

http://my-site.com      # Web interface
http://my-site.com/api  # API
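A minimal sketch of the proxy option, assuming the proxy and both services share an overlay network and the service names are web and api (all names here are hypothetical):

    # hypothetical nginx config that routes by path; service names resolve via Docker's DNS
    cat > default.conf <<'EOF'
    server {
        listen 80;
        # the trailing slash strips the /api prefix before forwarding
        location /api/ { proxy_pass http://api:80/; }
        location /     { proxy_pass http://web:80; }
    }
    EOF
    # note: the bind-mounted file must exist on the node where the proxy task runs
    docker service create --name proxy -p 80:80 --network my-app \
        --mount type=bind,source=$PWD/default.conf,target=/etc/nginx/conf.d/default.conf \
        nginx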
ronkot

[Revisiting this after 4 years because it seems to still be getting votes and there's been a lot that's changed since the question was asked]

You can't have multiple services listening on the same port in swarm mode, or on Linux in general. However, you can run some kind of layer 7 proxy on that port that routes to the correct container based on application-level data. The most common example of this is the various HTTP reverse proxies that exist.

Specifically with swarm mode, traefik seems to be the most popular reverse proxy, but other solutions based on HAProxy and Nginx also exist.

With a reverse proxy, neither of your containers would publish a port in swarm mode. Instead, you would configure the reverse proxy with its ports published on something like 80 and 443. It would then forward requests to your containers over a shared docker network. For this to work, the proxy needs to be able to tell which traffic belongs to each container based on something in the HTTP protocol, e.g. the hostname, path, cookies, etc. in the request.
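As a rough sketch of what that can look like with traefik (v1 syntax; the image names, domains, and labels are assumptions to adapt from the traefik documentation):

    docker network create --driver overlay proxy
    # traefik watches the swarm API, so it must run on a manager node
    docker service create --name traefik -p 80:80 --network proxy \
        --constraint node.role==manager \
        --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
        traefik:1.7 --docker --docker.swarmmode --docker.watch
    # route by hostname; traefik reads these labels from each service
    docker service create --name api --network proxy \
        --label traefik.port=80 \
        --label traefik.frontend.rule=Host:api.example.com \
        my-api-image
    docker service create --name web --network proxy \
        --label traefik.port=80 \
        --label traefik.frontend.rule=Host:www.example.com \
        my-web-image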


[Original answer]

Use different ports if they need to be publicly exposed:

docker service create -p 80:80 --name web nginx

and then

docker service create -p 8080:80 --name api myapi

In the second example, public port 8080 maps to container port 80. Of course, if the services don't need to be publicly exposed, containers on the same network can reach them directly using the service name and container port.

curl http://api:80

would find a container named api and connect to port 80 using the DNS discovery for containers on the same network.
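That DNS discovery only works when the services share a network; a minimal sketch (the network and client names are hypothetical):

    docker network create --driver overlay backend
    docker service create --network backend --name api myapi
    # any service on the same network can now reach the API as http://api:80
    docker service create --network backend --name client myclient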

BMitch
  • hey @BMitch but the use case is not accomplished, since the API is exposed to the internet on port 8080 when it is supposed to be on port 80. That's the use case that I'm trying to complete. Either way, if you want to have just one big swarm cluster with multiple different services on it, several of those services may need to be exposed on port 80 to the internet – bitgandtter Jul 23 '16 at 23:02
  • I'm providing a description of the solution that Docker provides. Their current implementation doesn't provide any other options that would allow publicly exposing the same port through their service discovery mechanism for two different services. Workarounds I can think of are to use another tool that exposes services by IP instead of port, or a proxy that interprets the requests and sends to the API or your web server based on content. – BMitch Jul 24 '16 at 00:18
  • @bitgandtter It makes little sense to have two services exported via the same port. The only way the load balancer would be able to send the traffic to the correct service is by examining the HTTP header. Feasible, but certainly not efficient and more complex. In your use case, it's common to use port 80/443 for the web part and something else (range >= 30000) for the api. – Bernard Aug 28 '16 at 22:35
  • I was thinking more of a setup with one big swarm cluster for several applications, but I think the aim of swarm is more like a cluster for a single application. Even in a single application, though, you can have several services that can and are supposed to use the same port, for example a public API on port 80 and a public web interface on port 80 – bitgandtter Aug 29 '16 at 02:27
  • The model you're looking for, where you must expose on a specific port, is most like Kubernetes, where services are exposed via an IP instead of a port. If you must use the same port, there are proxy services you can create with something like nginx. And if you can configure your clients to use different ports, then exposing each service on a different Swarm port (even if the container ports are all 80) is the Docker-designed solution. – BMitch Aug 30 '16 at 02:03