
Update below!

I am puzzled by Docker Swarm on Ubuntu 20.04. I created a clean Ubuntu machine on Scaleway and basically followed the tutorial at https://dockerswarm.rocks/ . I soon found this tutorial as well, which is a little shorter and cleaner: https://blog.creekorful.com/2019/10/how-to-install-traefik-2-docker-swarm/ .

Basically (all from SSH'ing into the machine):

  • Installed Docker
  • Initialized a swarm with # docker swarm init --advertise-addr=x.x.x.x, using the server's public IP address.
  • Created an overlay network: # docker network create --driver=overlay my-net
  • Deployed a simple hello-world service with the stack file below:
version: '3'
services:
  helloworld:
    image: tutum/hello-world:latest
    ports:
      - "80:80"
    networks:
      - my-net
networks:
  my-net:
    external: true

# docker stack deploy -c helloworld.yml helloworld

  • From this point I'd assume I should be able to # curl 127.0.0.1 and get my hello world. However, I get Connection refused: curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
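
In case it helps anyone reproduce or debug the same thing, the usual first round of checks here would be something like the following (generic Docker CLI, not specific to this setup):

# See the task state and any scheduling/startup errors
docker service ps helloworld_helloworld --no-trunc
# Tail the service output
docker service logs helloworld_helloworld
# Retry the request verbosely to see exactly where it fails
curl -v http://127.0.0.1/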

Things I tried/checked

Output from docker service ls

root@www2:~# docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                      PORTS
md1bd2ydswo8        helloworld_helloworld   replicated          1/1                 tutum/hello-world:latest   *:80->80/tcp
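
A related check, not in the output above, is to confirm how the port is published; for a routing-mesh service it should report PublishMode "ingress":

docker service inspect helloworld_helloworld \
  --format '{{json .Endpoint.Ports}}'
# expected shape: [{"Protocol":"tcp","TargetPort":80,"PublishedPort":80,"PublishMode":"ingress"}]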

Output from docker ps

root@www2:~# docker ps
CONTAINER ID        IMAGE                      COMMAND                  CREATED              STATUS              PORTS               NAMES
7c2d9d7379c5        tutum/hello-world:latest   "/bin/sh -c 'php-fpm…"   About a minute ago   Up About a minute   80/tcp              helloworld_helloworld.1.7u99ox2ea6bylb5by8vdca0pt
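
Note that for a swarm service published through the routing mesh, docker ps showing only 80/tcp with no host mapping is normal: the Docker daemon binds the published port itself rather than mapping it on the container. So this output on its own doesn't prove anything is wrong. A generic check of the host side:

# Is anything actually listening on the published port on the host?
sudo ss -tlnp | grep ':80 '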

Enabling UFW on Ubuntu

UFW was disabled by default, but I also tried enabling it. This is the output of # ufw status

root@www2:~# ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere                  
2376/tcp                   ALLOW       Anywhere                  
2377/tcp                   ALLOW       Anywhere                  
7946/tcp                   ALLOW       Anywhere                  
7946/udp                   ALLOW       Anywhere                  
4789/udp                   ALLOW       Anywhere                  
80/tcp                     ALLOW       Anywhere                  
22/tcp (v6)                ALLOW       Anywhere (v6)             
2376/tcp (v6)              ALLOW       Anywhere (v6)             
2377/tcp (v6)              ALLOW       Anywhere (v6)             
7946/tcp (v6)              ALLOW       Anywhere (v6)             
7946/udp (v6)              ALLOW       Anywhere (v6)             
4789/udp (v6)              ALLOW       Anywhere (v6)             
80/tcp (v6)                ALLOW       Anywhere (v6)
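
Worth knowing here (general Docker behaviour, not from the original post): Docker programs iptables directly, and for ports published in swarm mode the relevant rules live in the DOCKER-INGRESS chain of the nat table, which is consulted before UFW's filter rules. A quick check that the DNAT rule for port 80 actually exists:

sudo iptables -t nat -L DOCKER-INGRESS -n -v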

Messing around with the IP addresses

I also tried curling the public and the private IP, as well as using the private IP as the advertise-addr (sketch below). I also tried curling from my laptop to the remote server, both by IP and by domain name, all to no avail.
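
For reference, re-initialising with the private address looked roughly like this (a sketch; <private-ip> is a placeholder, and note the overlay network has to be recreated after leaving the swarm):

docker swarm leave --force
docker swarm init --advertise-addr=<private-ip>
docker network create --driver=overlay my-net
docker stack deploy -c helloworld.yml helloworld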

Trying on Arch Linux

I run Arch Linux on my laptop and tried the same setup there locally, which worked with no problems whatsoever.

Any ideas are very welcome, thanks!

Update

As I had pretty much lost it, I decided to spin up another VPS, with a clean Ubuntu 18.04 this time (as opposed to 20.04), and it worked right away, no problems whatsoever.

Daniel Kappelle
  • I am having the same problem as you with Ubuntu 20.04; haven't tried 18.04. Will update either way. Thanks for the post, Daniel – hani elabed Aug 18 '20 at 01:52
  • Just an FYI for anybody who may need it: no matter which version of Ubuntu I tried (20.04, 18.04 or 16.04), my multi-host, multi-node swarm could NOT expose the port to the host machine, even though a standalone container (i.e. not a service) exposed the port to the host just fine. My setup was 3 nodes running Ubuntu on 3 VirtualBox machines in a bridged network on my Mac. I gave up on swarm mode on VirtualBox. Then I moved all 3 nodes to 'linode.com' micro instances at $5/month each, and swarm worked beautifully – hani elabed Aug 21 '20 at 03:46
  • I had a similar issue with Ubuntu 20.04. The initial swarm join was successful, but I encountered lots of strange errors when updating services, and after restarting the servers the swarm didn't come up correctly. Since I was playing in AWS, I tested similar setups with Debian 10 and Amazon Linux 2. Both behaved normally, but I picked Amazon Linux 2, since Debian 10 has Docker 18 and Debian 11 wasn't officially available yet. – ilvez Nov 09 '21 at 14:35

2 Answers


I had been banging my head against this issue for a few days: on a fresh Ubuntu 20.04 image on IBM Cloud, I was unable to expose a port from a Docker Swarm service.

I eventually found this message in the logs from sudo journalctl -u docker.service:

level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."

This was confirmed by installing ipvsadm with sudo apt-get install ipvsadm, then running sudo ipvsadm, which returned a similar message.
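
Some generic checks for IPVS availability (not from the output above):

# Is the module loaded right now?
lsmod | grep ip_vs
# Can it be loaded?
sudo modprobe ip_vs && echo "ip_vs loaded"
# Was it built for this kernel at all?
grep IP_VS /boot/config-$(uname -r)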

I switched back to 18.04 rather than faff about trying to rebuild a kernel, but no doubt that is possible should you be so inclined.
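
A sketch of what that might look like without a full rebuild, assuming (my assumption, not verified on that image) that the IPVS modules merely ship in the linux-modules-extra package rather than being absent from the kernel entirely:

sudo apt-get install -y linux-modules-extra-$(uname -r)
sudo modprobe ip_vs
echo ip_vs | sudo tee /etc/modules-load.d/ip_vs.conf   # load again on boot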


Just my 2 cents. I saw similar behaviour, but in my case it was the container not starting up properly, because of an issue with the underlying filesystem for my volume bind mounts.

To rule that out, start your stacks with local volumes only (see the sketch below).
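
For example (generic commands, not tied to my setup), a throwaway service on a named local volume, published on a spare port so it doesn't clash with the existing stack:

docker volume create hello-data
docker service create --name hello-test \
  --mount type=volume,source=hello-data,target=/data \
  --publish 8080:80 tutum/hello-world:latest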

Also, make sure the packets are actually reaching your swarm nodes. You can check with combinations of iptables commands such as:

iptables -Z             # zero all packet counters
iptables -v -L          # list rules and packet counts
iptables -v -t nat -L   # also check the NAT table

The NAT table is probably the most relevant here, since I guess the first rules applied are in its PREROUTING section.
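
A concrete way to use this (my sketch, reusing the hello-world stack from the question):

sudo iptables -Z; sudo iptables -t nat -Z       # reset all counters
curl --max-time 2 -s http://127.0.0.1/ >/dev/null   # generate one request
# Locally-generated traffic traverses the nat OUTPUT chain; traffic arriving
# from outside the host hits PREROUTING instead.
sudo iptables -v -n -t nat -L OUTPUT
sudo iptables -v -n -t nat -L DOCKER-INGRESS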

Regards,

Pivert