
I am running Docker containers successfully on Ubuntu machines.

But I'm having trouble running the same containers on Mac machines. I've tried on two Macs, and the error messages are identical.

> spark-worker_1  | java.net.UnknownHostException: docker-desktop: docker-desktop: Name does not resolve
> spark-worker_1  |       at java.net.InetAddress.getLocalHost(InetAddress.java:1506)
> spark-worker_1  |       at org.apache.spark.util.Utils$.findLocalInetAddress(Utils.scala:946)
> spark-worker_1  |       at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress$lzycompute(Utils.scala:939)
> spark-worker_1  |       at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress(Utils.scala:939)
> spark-worker_1  |       at org.apache.spark.util.Utils$$anonfun$localHostName$1.apply(Utils.scala:1003)
> spark-worker_1  |       at org.apache.spark.util.Utils$$anonfun$localHostName$1.apply(Utils.scala:1003)
> spark-worker_1  |       at scala.Option.getOrElse(Option.scala:121)
> spark-worker_1  |       at org.apache.spark.util.Utils$.localHostName(Utils.scala:1003)
> spark-worker_1  |       at org.apache.spark.deploy.worker.WorkerArguments.<init>(WorkerArguments.scala:31)
> spark-worker_1  |       at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:778)
> spark-worker_1  |       at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
> spark-worker_1  | Caused by: java.net.UnknownHostException: docker-desktop: Name does not resolve
> spark-worker_1  |       at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> spark-worker_1  |       at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
> spark-worker_1  |       at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
> spark-worker_1  |       at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
> spark-worker_1  |       ... 10 more
>
> docker_spark-worker_1 exited with code 51

Here is my docker-compose.yml file:

 services:

   spark-master:
     build:
       context: ../../
       dockerfile: ./danalysis/docker/spark/Dockerfile
     image: spark:latest
     container_name: spark-master
     hostname: node-master
     ports:
       - "7077:7077"
     network_mode: host
     environment:
       - "SPARK_LOCAL_IP=node-master"
       - "SPARK_MASTER_PORT=7077"
       - "SPARK_MASTER_WEBUI_PORT=10080"
     command: "/start-master.sh"
     dns:
       - 192.168.1.1  # DNS server needed to reach a database instance outside the host that runs the container

   spark-worker:
     image: spark:latest
     environment:
       - "SPARK_MASTER=spark://node-master:7077"
       - "SPARK_WORKER_WEBUI_PORT=8080"
     command: "/start-worker.sh"
     ports:
       - 8080
     network_mode: host
     depends_on:
       - spark-master
     dns:
       - 192.168.1.1  # DNS server needed to reach a database instance outside the host that runs the container
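For context, the `UnknownHostException` is thrown because `java.net.InetAddress.getLocalHost()` tries to resolve the container's hostname, which under `network_mode: host` on a Mac is `docker-desktop` (the name of Docker Desktop's Linux VM), and no hosts entry exists for it inside the container. One possible workaround, sketched here as an assumption rather than a verified fix, is to map that hostname explicitly via compose's `extra_hosts` key:

```yaml
 spark-worker:
   image: spark:latest
   # Hypothetical workaround: map the Docker Desktop VM hostname to loopback
   # so InetAddress.getLocalHost() can resolve it inside the container.
   extra_hosts:
     - "docker-desktop:127.0.0.1"
```

`extra_hosts` writes the given entries into the container's `/etc/hosts`, which is why editing the Mac's own `/etc/hosts` has no effect on the container.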

**Edit**

So I found a way to make it work by commenting a few lines out. Why do those two settings cause problems?

And even though the container now runs fine and connects to the spark-master, it registers with an internal IP. As you can see, 172.18.0.2 is not an address we normally see on our network; I think the IP comes from the Docker container network, not the host.

*(screenshot: Spark master web UI showing the worker registered at 172.18.0.2)*

 # network_mode: host
 depends_on:
   - spark-master
 # dns:
 #   - 192.168.1.1  # DNS server needed to reach a database instance outside the host that runs the container
eugene
  • Try adding an IP and host entry in the /private/etc/hosts file, e.g. "127.0.0.1 docker-desktop". – Sounak Saha Jan 10 '20 at 10:13
  • You mean on the host or in the docker container? I already added it to /etc/hosts on the host machine, and it seems to be the same file as /private/etc/hosts: `127.0.0.1 docker-desktop` – eugene Jan 10 '20 at 10:14

1 Answer


Try changing the Docker network type to macvlan in the docker-compose file. This attaches the container directly to your network (making it appear as another physical machine) with an IP on the same subnet as the host, and you can then add that IP to your /etc/hosts.
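A macvlan network in compose might look like the following sketch. The parent interface (`eth0`) and the subnet/gateway (`192.168.1.0/24` / `192.168.1.1`) are assumptions that must be adjusted to match your LAN; also note that on Docker Desktop for Mac the daemon runs inside a VM, so macvlan containers may still not be reachable from the physical network.

```yaml
services:
  spark-worker:
    image: spark:latest
    networks:
      - spark_macvlan

networks:
  spark_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0                 # host NIC to attach to (assumption)
    ipam:
      config:
        - subnet: 192.168.1.0/24   # must match your LAN (assumption)
          gateway: 192.168.1.1
```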

The proper way to run containers on different machines is to use an overlay network connecting the Docker daemons on those machines, or to create a Docker Swarm cluster out of the laptops.
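Roughly, the swarm/overlay setup could be sketched as follows; the manager IP `192.168.1.10` is a placeholder for whichever machine you pick as the manager:

```
# On the machine chosen as manager (assumed LAN IP 192.168.1.10):
docker swarm init --advertise-addr 192.168.1.10
# The init prints a `docker swarm join --token ...` command;
# run that command on each of the other machines to add them as workers.

# Then create an overlay network that services (or attachable
# standalone containers) on any node can share:
docker network create --driver overlay --attachable spark-net
```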

https://docs.docker.com/network/

shanmuga
  • Thanks for the pointers. I switched to Docker Swarm (https://docs.docker.com/engine/swarm/stack-deploy/), but I still can't get them connected. After running `docker stack deploy`, all 5 hosts start a container. The Ubuntu-based ones, which worked before, can ping each other, but the Mac-based ones (which didn't work in the OP) still can't ping the other containers. – eugene Jan 10 '20 at 16:23
  • I created a new question for this, please take a look: https://stackoverflow.com/questions/59685602/docer-overlay-network-swarm-init-cant-ping-each-other – eugene Jan 10 '20 at 16:36