7

I have a remote VPS running CentOS 7. The relevant firewalld info is below; firewalld is active and running.

[root@doer mydir]# firewall-cmd --get-zone-of-interface=eth0
no zone
[root@doer mydir]# firewall-cmd --list-ports
You're performing an operation over default zone ('public'),
but your connections/interfaces are in zone 'home' (see --get-active-zones)
You most likely need to use --zone=home option.

3306/tcp

I run a Docker container with a Spring Boot program listening on port 8080, which is mapped to port 9182 on the host machine. Port 9182 is not in the list of open ports, yet I can still access the web server through http://HOST_MACHINE_IP:9182. What is wrong?
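For reference, such a port mapping is created along these lines (a sketch; the image name my-springboot-app is illustrative):

    docker run -d -p 9182:8080 my-springboot-app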


I added eth0 to the public zone

firewall-cmd --permanent --zone=public --add-interface=eth0
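followed by a reload, since --permanent changes only take effect after a reload:

firewall-cmd --reload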

and now

[root@doer mydir]# firewall-cmd --get-zone-of-interface=eth0
public


[root@doer mydir]# firewall-cmd --list-ports
3306/tcp

I can still access the web server through http://HOST_MACHINE_IP:9182.

    #  firewall-cmd --list-all-zones
    block
      target: %%REJECT%%
      icmp-block-inversion: no
      interfaces:
      sources:
      services:
      ports:
      protocols:
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:


    dmz
      target: default
      icmp-block-inversion: no
      interfaces:
      sources:
      services: ssh
      ports:
      protocols:
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:


    drop
      target: DROP
      icmp-block-inversion: no
      interfaces:
      sources:
      services:
      ports:
      protocols:
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:


    external
      target: default
      icmp-block-inversion: no
      interfaces:
      sources:
      services: ssh
      ports:
      protocols:
      masquerade: yes
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:


    home (active)
      target: default
      icmp-block-inversion: no
      interfaces: eth1
      sources:
      services: dhcpv6-client mdns samba-client ssh
      ports:
      protocols:
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:


    internal
      target: default
      icmp-block-inversion: no
      interfaces:
      sources:
      services: dhcpv6-client mdns samba-client ssh
      ports:
      protocols:
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:
lily

4 Answers

5

Docker installs its own firewall rules directly into the kernel of the host server when you publish a port, without using the abstraction layer that user-friendly firewall management tools, such as firewalld [footnote 1] and the associated firewall-cmd (or similarly ufw, Shorewall, and others), provide.

Since Docker doesn't use those tools, any rules Docker creates typically won't be shown when you only use those tools to inspect your firewall.

To see what rules Docker (or any other application that creates its own rules) actually creates in your firewall, you will need to use the lower-level iptables and/or iptables-save commands, which show the actual live configuration in the kernel.

Try

[sudo] iptables -L -v -n --line-numbers

and

[sudo] iptables -L -v -n -t nat --line-numbers

or use

[sudo] iptables-save

Usually the firewall rules Docker creates take precedence because they are inserted before the rules managed by your user-friendly firewall management tool.
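In the iptables-save output, a published port typically shows up as a DNAT rule in the nat table along these lines (the container IP 172.17.0.2 is illustrative; the ports match the question's 9182→8080 mapping):

    # illustrative nat-table rule created by Docker for a published port
    -A DOCKER ! -i docker0 -p tcp -m tcp --dport 9182 -j DNAT --to-destination 172.17.0.2:8080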

I run a Docker container with a Spring Boot program listening on port 8080, which is mapped to 9182 of the host machine; 9182 is not in the open ports list, but I can still access the web server through http://<HOST_MACHINE_IP>:9182. What is wrong?

Nothing is wrong.

That is exactly what you instructed Docker to do when you created a published port:

Published ports
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.

https://docs.docker.com/config/containers/container-networking/

Adding more access controls to a port published by Docker requires creating your own rules in the DOCKER-USER iptables chain, as documented here: https://docs.docker.com/network/iptables/
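For example, a minimal sketch that lets only a trusted private subnet reach published container ports (the interface eth0 and the subnet 10.5.96.0/24 are assumptions; adjust them to your environment):

    # drop packets arriving on eth0 for published container ports unless
    # they come from the assumed trusted subnet
    iptables -I DOCKER-USER -i eth0 ! -s 10.5.96.0/24 -j DROP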


footnote 1: Since Docker 20.10.0, Docker should integrate with firewalld, according to the documentation here: https://docs.docker.com/network/iptables/#integration-with-firewalld

Bob
  • What you mean is the `ports` in docker-compose.yml will make 9182 open to the outside world? I thought it only did the mapping, so other programs on the host machine could access the page via 9182, but not programs in the outside world. Then how can I make it impossible to access the page from HOST_PUBLIC_IP:9182 but possible from HOST_PRIVATE_IP:9182? – lily Dec 10 '19 at 15:27
  • 1
    Applications on the same host and docker network can already access the application running in your docker container by going directly to `:8080`; you do not need to publish any port for that. The `ports` in a docker-compose.yml are only for external (public) access to the application – Bob Dec 10 '19 at 15:33
  • BTW, it is a bit strange, because each docker-compose.yml creates a private network, so multiple docker-compose.yml files create multiple private networks (LANs); how can one container in private network A access a private IP in another private network B? – lily Dec 10 '19 at 16:00
  • I should probably have said *"Applications on the same host and **the same** docker network"* don't need you to publish a port to allow connections between separate docker containers (as explained in https://docs.docker.com/compose/networking/) – Bob Dec 10 '19 at 16:26
  • say the public IP of my VPS is 1.2.3.4, its private IP is 10.5.96.4, and the container IP is 172.168.7.4; now I want to access the page via 10.5.96.4:9182 instead of 172.168.7.4:8080, but I want to stop outside programs from accessing the page via 1.2.3.4:9182. How do I do that? – lily Dec 10 '19 at 18:57
  • 1
    @HermanB I am running Docker 20.10.5 and can see the interface `docker0` attached to the `firewall-cmd` zone called `docker`. However, it still allows external traffic to ports opened by Docker despite adding a set of IP addresses as source to the `docker` zone. Am I getting the documentation wrong? – Dibakar Aditya Mar 23 '21 at 04:18
  • @DibakarAditya I have the exact same issue; it seems the integration with firewalld is still not good enough. Or what is your experience now? – Mohammed Noureldin Nov 03 '21 at 20:40
  • 1
    @MohammedNoureldin We have worked around the issue by setting the `iptables` key to `false` in the Docker engine’s configuration file at `/etc/docker/daemon.json`. This prevents Docker from manipulating the iptables rules and forces it to obey the firewall-cmd rules. Though Docker documentation warns against this approach, it works for us with the slight overhead of having to find and open every single port required by our application via firewall-cmd. – Dibakar Aditya Nov 05 '21 at 08:22
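A minimal sketch of the workaround described in the comment above (this disables Docker's iptables management entirely, which Docker's documentation warns against; merge the key into any existing /etc/docker/daemon.json by hand rather than overwriting):

    # write the setting and restart the daemon (assumes systemd and no existing daemon.json)
    echo '{ "iptables": false }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker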
2

None of the answers really explain the cause.

As HermanB said, Docker creates its own rules... but not just anywhere!

There are two chains in the filter table that matter for incoming traffic: INPUT and FORWARD.

  • Firewalld puts its rules in the INPUT chain of the filter table.

  • Docker creates DNAT rules in the nat table's PREROUTING chain. As a consequence, packets for published ports are routed to the FORWARD chain, not the INPUT chain.

Therefore, all the traffic meant to be consumed by Docker containers "bypasses" the firewall.
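You can observe this on the host itself (the chain names are Docker's defaults):

    # packets for published ports are DNATed in the nat PREROUTING chain...
    sudo iptables -t nat -L PREROUTING -v -n --line-numbers
    sudo iptables -t nat -L DOCKER -v -n --line-numbers
    # ...and then traverse FORWARD, never INPUT, so firewalld's zone rules never see them
    sudo iptables -L FORWARD -v -n --line-numbers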

Searching "firewalld docker" on serverfault will give you tens of similar questions.

Solution:

Having to manually add iptables rules defeats the purpose of Firewalld. It's a shame Docker can't work with Firewalld properly. Nevertheless, I believe we can do something much cleaner by telling Firewalld to apply the same rules in the FORWARD chain AND by placing Firewalld's custom chains before Docker's chains.

Currently, the Firewalld custom chains (FORWARD_direct, FORWARD_IN_ZONES, FORWARD_OUT_ZONES) come after the Docker custom chains (DOCKER-USER, DOCKER-ISOLATION-STAGE-1, DOCKER).
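You can check that ordering on your own host; the FORWARD chain listing shows which custom chains are consulted first:

    sudo iptables -L FORWARD -v -n --line-numbers
    # with both Docker and firewalld active, DOCKER-USER and
    # DOCKER-ISOLATION-STAGE-1 appear before FORWARD_IN_ZONES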

Alexis
  • In the GitHub issue you mentioned, they stated that it has been fixed. However, I still see the problem described in the question. What is your experience here? In the release notes I saw no changes or enhancements regarding this issue. – Mohammed Noureldin Nov 03 '21 at 20:35
0

I've had a similar issue: port 8000, published from a Docker container, was accessible on the external interface eth0, but I needed it to be visible only via Nginx as a proxy.

A deep dive into iptables -L -v -n showed that the DOCKER rules chain is run from the FORWARD chain, while firewall-cmd rich rules (and direct rules) go to the INPUT chain, so they didn't work as I expected.

The following command solved the problem:

firewall-cmd --direct --add-rule ipv4 filter DOCKER 0 -i eth0 -d 172.22.0.0/24 -j DROP

where 172.22.0.0/24 is the subnet of the Docker container.
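To make the rule survive reloads and reboots, presumably you would also add it permanently (standard firewall-cmd usage, assuming the Docker subnet stays stable):

    firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER 0 -i eth0 -d 172.22.0.0/24 -j DROP
    firewall-cmd --reload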

0

Use firewall-cmd --list-all-zones and look at the "home" zone; your interface is connected to that one. Post the complete results if you have other problems.