
I have a Webapp running completely locally on my MacBook.

The Webapp has a Front End (Angular/JavaScript) and a Back End (Python/Django) that implements a RESTful API.

I have Dockerized the Back End so that it is completely self-contained in a Docker Container and exposes port 8000. I map this port locally to 4026.

Now I need to Dockerize the Front End. But if I have these two docker containers running on my localhost, how can I get the FE to send HTTP requests to the BE? The FE container won't know anything that exists outside of it. Right?

This is how I run the FE:

$ http-server
Starting up http-server, serving ./
Available on:
  http://127.0.0.1:8080
  http://192.168.1.16:8080
Hit CTRL-C to stop the server

Please provide references explaining how I can achieve this.

Saqib Ali
  • You can use sockets; the containers will communicate through ports just like any other server would. Otherwise you can `link` them together, which allows separate containers to act as one. – Scot Matson Dec 12 '16 at 04:58

4 Answers


The way to do this today is Docker Networking.

The short version is that you can run `docker network ls` to get a listing of your networks. By default, you should have one called `bridge`. You can either create a new network or use this one by passing `--net=bridge` (or the newer `--network=bridge`) when running your container. From there, containers launched on the same network can communicate with each other over their exposed ports.

If you use Docker Compose, as has been mentioned, it will create a bridge network for you when you run docker-compose up, named after your project folder with _default appended. Each service defined in the Compose file gets launched on this network automatically.
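As a rough sketch, a minimal docker-compose.yml for the setup in the question might look like the following (the image names `my-backend` and `my-frontend` are placeholders; the 8000/4026 and 8080 ports come from the question):

```yaml
version: "3"
services:
  backend:
    image: my-backend        # placeholder image name
    ports:
      - "4026:8000"          # host 4026 -> container 8000, as in the question
  frontend:
    image: my-frontend       # placeholder image name
    ports:
      - "8080:8080"          # the http-server port from the question
```

On the generated `<project>_default` network, the frontend container could reach the backend at `http://backend:8000`, using the service name as the hostname.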

With all that said, I'm guessing your frontend is a webserver that just serves up the HTML/JS/CSS and those pages access the backend service. If that's accurate, you don't really need container-to-container communication in this case anyway... both need to be exposed to the host since connections originate from the client system.

CashIsClay
  • Hi, what does "and `_default` appended" mean? Thank you in advance! – Yifan Ai Dec 01 '20 at 13:25
  • @YifanAi It means that if your working directory is, say, `~/myproject`, then when you run `docker-compose up`, Docker will create a bridge network called `myproject_default`. You can check it by running `docker network ls`. – Damon Hill Jan 04 '21 at 00:37
  • If the 'backend' server were a database, you wouldn't want to expose that to the public internet and would use the bridge network instead. Is that correct? – Asher Jan 27 '22 at 10:24
  • "From there, containers launched with the same network can communicate with each other over exposed ports". How? A port isn't a valid URL! You need something like `http://something/or/other:port`, and the OP explicitly stated he wanted HTTP. What is "something/or/other"? – MDickten Oct 07 '22 at 09:29

There are multiple ways to do this, and the simplest is to use Docker Compose, which lets you define and run multiple services together.

If you are not using Docker Compose and are running individual containers, publish each service's port on the host and reach the services through the host, for example:

docker run -p 3306:3306 mysql
docker run -p 8088:80 nginx 

Now you can reach the containers through the host's IP address:

http://hostip:3306
http://hostip:8088

Manoj Sahu
  • We can even use the service name from the docker-compose.yaml file instead of the host IP, if the containers are on the same network or are linked. That should work fine as well. – Aniketh Saha Apr 14 '19 at 09:36

I think the most elegant solution would be to create a software-defined network, but for this simple example it may be a bit overkill. Nevertheless, when you think about running things in production, maybe even on different servers, this is the way to go.

Until then, you may opt to link the containers. E.g., if you used to start your frontend container like this:

$ docker run -p 8080:8080 --name frontend my-frontend

You now could do it like this:

$ docker run -p 8080:8080 --name frontend --link backend:backend my-frontend

The trick here is to also start the backend container and give it a name using the --name flag. Then you can refer to this name in the --link flag and access the backend from within the frontend container using that name (--link automatically adds an entry for the linked container to the frontend container's /etc/hosts file).

This way you do not have to rely on a specific IP address, be it the host's one or whatever.
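Put together, and assuming hypothetical image names `my-backend` and `my-frontend`, the two commands might look like this (note that `--link` is nowadays considered a legacy feature, with user-defined networks being the recommended replacement):

```shell
# Start the backend first and give it a name to link against.
docker run -d --name backend my-backend

# Start the frontend and link the backend in under the hostname "backend".
docker run -d -p 8080:8080 --name frontend --link backend:backend my-frontend

# Inside the frontend container, the backend is now reachable by name,
# e.g. http://backend:8000 (8000 being the backend port from the question).
```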

Golo Roden

Copy the IP address of the Docker subnet, found under Resources > Network in Docker's Preferences on the Mac: Docker preferences screenshot

As you can see from the linked screenshot, the subnet is

192.168.65.0

You just need to replace "localhost" in your containers' config file with "192.168.65.1" (i.e., the subnet IP address picked, plus one).

You can start your containers and should be set for local development/testing.

For some more details, you can see my article: Connect Docker containers the easy way

Saif
  • What do you mean by "+1"? – Asher Jan 27 '22 at 07:28
  • To me this seems a highly individual and non-portable solution. I have my containers running on my local Docker host for testing. Then, after I've fixed all the problems, I push them to Docker Hub and deploy them to production on a remote Docker host. Surely the IP addresses there are different? And that mysterious phrase "any service can reach any other service at that service's name" in the Docker docs surely has a meaning, so why do I need those IPs? – MDickten Oct 07 '22 at 09:31