4

I have been following the tutorials and experimenting with Docker for a couple of days, but I can't find any "real-world" usage example.

How can I communicate with my container from the outside?

All the examples I can find end up with one or more containers that can share ports with each other, but no one outside the host gets access to their exposed ports.

Isn't the whole point of having containers like this that at least one of them needs to be accessible from the outside?

I have found a tool called pipework (https://github.com/jpetazzo/pipework) which will probably help me with this. But is this the tool everyone testing Docker for production is using?

Is a "hack" necessary to get the outside to talk to my container?

xeor
  • Possible duplicate of [How to assign as static port to a container?](https://stackoverflow.com/questions/16958729/how-to-assign-as-static-port-to-a-container) – Sergiu Sep 20 '17 at 19:15
  • @Sergiu, yeah.. looks like a dupe :) Didn't find it back then, 3 1/2 years ago. Things weren't that well documented then.. – xeor Sep 20 '17 at 20:16
  • I totally agree with you. I'm glad people started using it more, so there is now a lot of documentation for it, as well as many questions and answers here :) – Sergiu Sep 20 '17 at 20:19

2 Answers

5

You can use the -p argument to expose a port of your container to the host machine.

For example:

  sudo docker run -p 80:8080 ubuntu bash

This will bind port 8080 of your container to port 80 of the host machine.

You can then access your container from the outside using the host's URL:

  http://your.domain -> host:80 -> container:8080

Is that what you wanted to do? Or maybe I missed something.

(The --expose parameter only exposes a port to other containers, not to the host.)
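
For example, a rough sketch of double-checking what is actually published (the container name web, the long-running command, and your.domain are just for illustration; docker port shows the host-to-container mapping):

  # publish container port 8080 on host port 80, on all host interfaces
  sudo docker run -d --name web -p 0.0.0.0:80:8080 ubuntu bash -c 'while true; do sleep 1; done'
  # list the port mappings of the running container
  sudo docker port web
  # e.g. 8080/tcp -> 0.0.0.0:80
  # then, from another machine on the network:
  curl http://your.domain/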

Aurélien Thieriot
  • yea, that's how I thought it should work as well. But the port on the host is only available on the host itself. You can't access it from another machine.. http://blog.codeaholics.org/2013/giving-dockerlxc-containers-a-routable-ip-address/ describes the problem in more depth. – xeor Feb 03 '14 at 10:14
  • In this article, at some point the guy is accessing the host from the outside: curl -X POST -H "Content-Type: application/json" -d '{"name":"Albert Einstein", "birthday":"14.03.1879"}' http://10.2.0.10:8080 – Aurélien Thieriot Feb 03 '14 at 10:32
  • Pipework seems to be used only to give real IPs to a container. This is a different use case. – Aurélien Thieriot Feb 03 '14 at 10:32
  • As I have already used the -p parameter successfully in the past, I have to ask: could it be your host that has issues being accessed from outside? Does it work with standard services? – Aurélien Thieriot Feb 03 '14 at 10:37
  • I was guessing it was my host having issues as well. I tried using nc to listen on the same port, and was able to connect. Netstat does show the protocol Docker is listening on as ipv6; however, I don't know if that is the problem.. Looks like others have the ipv6 problem as well (http://serverfault.com/questions/545379/docker-will-only-bind-forwarded-ports-to-ipv6-interfaces) – xeor Feb 03 '14 at 11:03
  • This is a real shame :( Until they fix the bug though, some people did find a temporary solution: https://github.com/dotcloud/docker/issues/2174 (last post). Did you try this? – Aurélien Thieriot Feb 03 '14 at 11:31
  • yea, I did try -p 0.0.0.0:8080:8080 once.. But it didn't work either. Maybe it was just my test. I will try some more later today :) – xeor Feb 03 '14 at 12:39
  • Good luck! I am curious to know if you end up fixing this. It looks like a pretty annoying bug. – Aurélien Thieriot Feb 03 '14 at 12:57
  • Thanks.. Not only is it annoying, but how come more people don't care about this? Is all the Docker hype, all the people trying it, only using it on a single host and using localhost for testing? Maybe the error only occurs once in a while on some systems.. Will update this question later with what I find :) – xeor Feb 03 '14 at 13:11
  • I've now tried different versions of -p 0.0.0.0:8888:80 and using my server IP, trying to telnet from localhost (works all the time), then from another machine on the LAN (never worked). nc on the server host and telnetting to it always works. I am also seeing packets hitting the nat table (chain DOCKER) for my container. But from there, it hits nothing.. :( – xeor Feb 03 '14 at 17:58
  • Damn, found the problem.. There were some forward iptables rules still active because of a VPN interface.. Adding -A FORWARD -i em1 -o docker0 -j ACCEPT to iptables solved my problem.. :) – xeor Feb 03 '14 at 19:23
  • Sweet! Glad to hear that – Aurélien Thieriot Feb 03 '14 at 19:29
  • @AurélienThieriot -- your ``EXPOSE`` syntax interpretation is incorrect. ``EXPOSE 80 8080`` exposes two ports, 80 and 8080, from the container. This does **not** control the ports where these are exposed on the host. Only the operator (== person who executes ``docker run``) can decide how to map ports to the host, never the developer (== who executes ``docker build``). Use ``docker port`` or ``docker ps`` to see the mapping from the host to the container ports. – Andy Feb 03 '14 at 23:52
  • Oh yes! Thank you for that :) I will edit to avoid any mistakes. – Aurélien Thieriot Feb 04 '14 at 09:03
3

This blog post (https://blog.codecentric.de/en/2014/01/docker-networking-made-simple-3-ways-connect-lxc-containers/) explains the problem and the solution.

Basically, it looks like pipework (https://github.com/jpetazzo/pipework) is the way to expose container ports to the outside as of now... Hope this gets integrated soon.
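
For completeness, a rough sketch of the kind of command pipework uses (the interface name, container id placeholder, and addresses are purely illustrative; check the pipework README for the exact syntax):

  # give a running container its own routable IP on the LAN, attached to the host's em1 interface
  sudo pipework em1 <container-id> 10.2.0.20/24@10.2.0.1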

Update: In this case, iptables was to blame, and there was a rule that blocked forwarded traffic. Adding -A FORWARD -i em1 -o docker0 -j ACCEPT solved it..
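
For anyone hitting the same thing, a minimal sketch of that fix and how to inspect the chains afterwards (assuming, as above, that em1 is the host's external interface and docker0 is Docker's bridge):

  # allow traffic arriving on em1 to be forwarded to the docker0 bridge
  sudo iptables -A FORWARD -i em1 -o docker0 -j ACCEPT
  # verify the filter FORWARD chain and Docker's DOCKER chain in the nat table
  sudo iptables -L FORWARD -v -n
  sudo iptables -t nat -L DOCKER -v -n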

xeor