
I want to test out my distributed algorithm on both my laptop (x86_64) and my cluster of Raspberry Pis (armhf) using the new docker swarm mode.

After a bunch of configuration, I can successfully create the swarm cluster, composed of one manager node (my laptop) and N+1 worker nodes (the N Raspberry Pis, plus my laptop). It looks like this:

laptop$ docker swarm init --advertise-addr 192.168.10.1
raspi1$ docker swarm join --token <TOKEN> 192.168.10.1:2377
# [...]
raspiN$ docker swarm join --token <TOKEN> 192.168.10.1:2377

Now, I have built two images for my project: an x86_64 one (my_project:x86_64) and an armhf one (my_project:armhf). I really love the nodes/services architecture of this new swarm mode, as creating M (quasi-)independent nodes is exactly what I want, but how can I give the right image to the right node using the docker service create ... command?

From what I see, docker service create only takes one image as a parameter! I saw here that I could give a label to each node and ask a service to use only nodes carrying that label, but that's not what I want: I would end up managing two pools of tasks, split by architecture, which would defeat my goal of leveraging swarm mode's scheduler and dispatcher.
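For reference, the label-based workaround described above would look roughly like this (the label key and service name are my own assumptions, not from any official recipe):

```shell
# Tag each node with its architecture, then pin a per-arch service to it.
docker node update --label-add arch=armhf raspi1
docker service create --name my_project_arm \
  --constraint 'node.labels.arch == armhf' \
  my_project:armhf
```

This works, but it requires one service (and one image) per architecture, which is exactly the split I'd like to avoid.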

I am a sad geek on his quest to portability, that's what I am!

PS: Note, this has the tag 'docker-swarm-mode', and not the 'docker-swarm' one, because docker swarm and docker swarm mode are two different things.

Adrien Luxey

2 Answers


Adrien, Docker Captain here.

At this moment you can't create a service that pulls a different image depending on the node where the container is scheduled.

However, there's a hack you could try: bundle a static binary for each architecture in your Docker image, and decide in your entrypoint.sh which binary to call depending on the underlying arch.
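A minimal sketch of such an entrypoint.sh, assuming the image bundles one static binary per architecture (the binary paths are assumptions, not part of the answer):

```shell
#!/bin/sh
# Map the machine architecture reported by `uname -m` to the bundled
# static binary for that architecture (paths are hypothetical).
pick_binary() {
  case "$1" in
    x86_64)        echo /usr/local/bin/my_project.x86_64 ;;
    armv6l|armv7l) echo /usr/local/bin/my_project.armhf ;;
    *)             return 1 ;;
  esac
}

# The real entrypoint would end with something like:
# exec "$(pick_binary "$(uname -m)")" "$@"
```

Raspberry Pis report armv6l or armv7l from `uname -m`, hence the two patterns in the armhf branch.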

On the other hand, having one service per architecture is not really bad. The ARM and x86_64 versions of the app might need to scale differently depending on the hardware specs, and as an additional bonus you can apply different memory/CPU restrictions based on the underlying hardware.

Those services will still be able to communicate with each other if you put them on the same network with the --network option, so you can control how they interact.
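Putting the two pieces together, the layout could look something like this (service and network names are assumptions; the per-service memory limits illustrate the resource-restriction bonus mentioned above):

```shell
# One shared overlay network; tasks can then reach each other by
# service name regardless of which architecture they run on.
docker network create --driver overlay my_net

docker service create --name my_project_x86 --network my_net \
  --constraint 'node.labels.arch == x86_64' \
  --limit-memory 512m \
  my_project:x86_64

docker service create --name my_project_arm --network my_net \
  --constraint 'node.labels.arch == armhf' \
  --limit-memory 256m \
  my_project:armhf
```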

Hope this helps you work around your current problem. Feel free to contact me if you're still evaluating alternatives.

marcosnils
  • Thanks for your answer @marcosnils, I've been waiting for it! :) Your hack seems legit, if I manage to statically link my project's binary. [This tutorial](https://eyskens.me/multiarch-docker-images/) seems to achieve that kind of hack, if I got it right. Apart from that, do you know about the [multiarch images](https://hub.docker.com/r/multiarch/goxc/)? I did not understand precisely what they're achieving, and I have a hard time finding examples, but maybe they could help me build per-architecture binaries inside my containers? Thanks again anyway, you deserved your bounty! – Adrien Luxey Dec 17 '16 at 10:47
  • Hey @AdrienLuxey! It seems the tutorial you shared is for a different purpose: it basically lets you build images for multiple arches locally relying on qemu, and it tags them differently so you have them all available in your registry. I'm aware of the multiarch repo. Basically they provide a simple way to build multi-arch images, but each image would be a different tag in the registry. You can of course use some of those tools to build images with multi-arch binaries inside, but it might require some work. – marcosnils Dec 18 '16 at 23:12
  • @AdrienLuxey I have good news for you: you can now build multi-arch images using third-party external tools, until the docker CLI officially supports building them. Check https://integratedcode.us/2016/04/22/a-step-towards-multi-platform-docker-images/. The docker engine and the registry currently support this format, so if you push a multi-arch image to the registry and run it in swarm, it should work. – marcosnils Dec 20 '16 at 18:01

Thinking about this, there's a way to do what you want, at some performance and overhead penalty: run armhf images everywhere, and run some of them under emulation.

Look at the image hypriot/qemu-register (source at https://github.com/hypriot/qemu-register) and read through it until you understand what it's doing. Essentially it lets you run armhf and aarch64 binaries under emulation on x86 machines. Then you could run a single image across your whole cluster.
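A rough sketch of how this would be wired up, assuming the image follows the usual pattern for binfmt-registering containers (check the repo's README for the exact invocation):

```shell
# On each x86_64 node, run the privileged registration container once
# to install qemu binfmt_misc handlers for ARM binaries (invocation is
# an assumption based on similar images -- verify against the README).
docker run --rm --privileged hypriot/qemu-register

# After registration, the armhf image runs everywhere, so one service
# with one image can be scheduled across the whole swarm.
docker service create --name my_project my_project:armhf
```

The x86_64 nodes pay the qemu emulation cost, but the swarm scheduler sees a single homogeneous service.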

I know that this is not exactly what you asked for - and that others are working on other solutions - but this may be useful nonetheless.

vielmetti