
Situation: lots of heavy Docker containers that get hit periodically for a while, then sit unused for a longer period.

Wish: start the containers on demand (like systemd starts things through socket activation) and stop them after idling for a given period. No visible downtime to the end user.

Options:

  • Kubernetes has resource controllers which can scale replicas. I suppose it would be possible to keep the number of replicas at 0 and set it to 1 when needed, but how can one achieve that? The user guide mentions something called an auto-scaling control agent, but I don't see any further information on it. Is there a pluggable, programmable agent one can use to track requests and scale based on user-defined logic?
  • I don't see any solution in Docker Swarm; correct me if I'm wrong, though.
  • Use a custom HTTP server, written in a language of choice, that has access to the Docker daemon. Before routing to the correct place, it would check for the existence of the container and ensure it is running. Downside: not a general solution, and the server either has to run outside a container or needs access to the daemon.
  • Use systemd as described here. Same downsides as above, i.e. not general, and one has to handle the networking tasks oneself (like finding the IP of the spawned container and feeding it into the server's/proxy's configuration).
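The check-and-start step of the custom-server option could be sketched like this (a minimal sketch, assuming the `docker` CLI is driven via subprocess; the decision logic is factored into a pure function so it can run without a daemon):

```python
import json
import subprocess

def needs_start(inspect_json: str) -> bool:
    """Decide from `docker inspect` output whether the container must be started."""
    try:
        state = json.loads(inspect_json)[0]["State"]
    except (ValueError, IndexError, KeyError):
        return True  # container missing or output unparsable: start (or create) it
    return not state.get("Running", False)

def ensure_running(name: str) -> None:
    """Start the named container if it is not already running."""
    proc = subprocess.run(["docker", "inspect", name],
                          capture_output=True, text=True)
    if proc.returncode != 0 or needs_start(proc.stdout):
        subprocess.run(["docker", "start", name], check=True)
```

A proxy would call `ensure_running("myapp")` before forwarding each request, paying the start-up cost only on the first hit after an idle period.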

Any ideas appreciated!

Porlune
xificurC
  • Regarding the custom HTTP server. The DockerUI container has access to the daemon. You just mount the docker socket into the container as a volume and all requests can be made to it – OneCricketeer Apr 02 '16 at 22:35
  • Thanks, I know the daemon can be used when the socket is mounted. I was just hoping there will be built-in solution for this someplace so I don't have to reinvent the wheel. Seems the wheel wasn't invented yet though. – xificurC Apr 03 '16 at 18:24
  • Hi xificurC Did you find a good solution for your problem (on-demand startup of docker container)? Regards David – David Jan 30 '20 at 17:18
  • Hi @David, no, but I'm not seeking any longer, moved on to other projects. If I were I'd look into the serverless solutions. – xificurC Feb 06 '20 at 08:34
  • 8 years later, and still no real solution? I think Kubernetes is the wrong way. Technically, it should be possible using a modified socat which runs a command (docker start) before passing through the data; the code for this is most likely somewhere in the docker code itself. This way, the first call goes through socat and the following ones go through iptables. I'm unsure how these two behave during the handover, and how to fix the socat source IP of the outgoing connection. Maybe there is also another approach, where socat terminates in a way that lets even the first connection go through iptables. – Daniel Alder Nov 16 '22 at 17:27

3 Answers


You could use Kubernetes' built-in Horizontal Pod Autoscaling (HPA) to scale up from 1 instance of each container to as many as are needed to handle the load, but there's no built-in functionality for 0-to-1 scaling on receiving a request, and I'm not aware of any widely used solution.
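If you drive it externally, the 0-to-1 flip itself is just a replica-count change; a tiny sketch of the patch body that a scale request sends (the deployment name in the comment is hypothetical, and this shows only the payload, not the watcher that decides when to scale):

```python
import json

def scale_patch(replicas: int) -> str:
    # Merge-patch body that sets the replica count on a
    # Deployment/ReplicationController spec.
    return json.dumps({"spec": {"replicas": replicas}})

# Rough CLI equivalent (deployment name is a placeholder):
#   kubectl scale --replicas=0 deployment/myapp   # park the app
#   kubectl scale --replicas=1 deployment/myapp   # wake it up
```

The hard part the question asks about remains: something must hold the idle service's endpoint and trigger the scale-up on the first request.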

Alex Robinson
  • Thank you for your reply. Do you consider this situation uncommon? I thought it would be easy to find an implementation of on-demand startup, but it seems no one is even working on it. – xificurC Apr 03 '16 at 18:26
  • There are related discussions in https://github.com/kubernetes/kubernetes/issues/484 if you're interested. – Yu-Ju Hong Apr 03 '16 at 18:58

Podman, a drop-in replacement for Docker, supports on-demand start-up of containers (a.k.a. OCI containers). For this to work, the software inside the container image needs to support socket activation.

This works by having systemd create a listening socket that will be inherited all the way down to the executed program inside the container.

Thanks to the fork/exec model of Podman, the socket-activated socket will be first inherited by conmon and then by the OCI runtime and finally by the container as can be seen in the following diagram:

[Diagram of how socket activation of containers works with systemd, podman, conmon and crun]

See also the Podman socket activation tutorial.
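For orientation, the systemd side looks roughly like the following pair of user units (a sketch based on the tutorial's pattern; the unit names, port, and image are placeholders, and the server inside the image must implement the LISTEN_FDS socket-activation protocol):

```ini
# ~/.config/systemd/user/example.socket
[Unit]
Description=Listening socket for the example container

[Socket]
ListenStream=127.0.0.1:8080

[Install]
WantedBy=sockets.target

# ~/.config/systemd/user/example.service
[Unit]
Requires=example.socket
After=example.socket

[Service]
# Podman inherits the activated socket from systemd and passes it
# down (conmon -> OCI runtime -> container).
ExecStart=/usr/bin/podman run --rm --name example --network=none example-image
ExecStop=/usr/bin/podman stop example
```

`systemctl --user enable --now example.socket` then makes systemd hold the port; the container is started on the first connection, and `--network=none` keeps it reachable only through the activated socket.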

Running rootless Podman with socket-activated containers comes with some advantages:

  • Native network speed. The communication over the socket-activated socket does not pass through slirp4netns so it has the same performance characteristics as the normal network on the host.

  • Improved security, because the container can run with --network=none if it only needs to communicate over the activated socket.

  • The source IP address is preserved. (The rootlesskit port forwarding backend for slirp4netns does not preserve the source IP; this is not a problem when using socket-activated sockets.)

I wrote two blog posts about the security advantages of using socket-activated containers with Podman:

  • https://www.redhat.com/sysadmin/socket-activation-podman
  • https://www.redhat.com/sysadmin/podman-systemd-limit-access

and two demos: nginx and mariadb.

Erik Sjölund
  1. You can use systemd to manage your docker containers. See https://developer.atlassian.com/blog/2015/03/docker-systemd-socket-activation/

  2. Some time ago I talked to an ops guy at pantheon.io about how they do this sort of thing with Docker. I guess it would have been before Kubernetes even came out. Pantheon do Drupal hosting. The way they have things set up, every server they run for clients is containerised, but as you describe, the container goes away when it's not needed. The only resource that's reserved then, other than disk storage, is a socket number on the host.

    They have a fairly simple daemon which listens on the sockets of all inactive servers. When it receives a request, the daemon stops listening for more incoming connections on that socket, starts the required container, and proxies that one request to the new container. Subsequent connections go direct to the container until it's idle for a period, and the listener daemon takes over the port again. That's about as much detail as I know about what they did, but you get the idea.

  3. I imagine that something like the daemon that Pantheon implemented could be used to send commands to Kubernetes rather than straight to the Docker daemon. Maybe a systemd-based approach to dynamically starting containers could also communicate with Kubernetes as required. Either of these might allow you to fire up pods, not just containers.
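The listener daemon described in point 2 could be sketched as follows (a rough sketch only, assuming the container publishes a known backend port and handling just the first connection; a real daemon would loop, time out idle containers, and re-bind the port afterwards):

```python
import socket
import subprocess
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until EOF, then half-close the destination."""
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.shutdown(socket.SHUT_WR)

def serve(listen_port: int, container: str, backend) -> None:
    """Hold the port of an idle container; on the first connection,
    start the container and splice that connection through to it."""
    lsock = socket.create_server(("0.0.0.0", listen_port))
    client, _ = lsock.accept()
    lsock.close()  # free the port so the container can take it over
    subprocess.run(["docker", "start", container], check=True)
    upstream = socket.create_connection(backend)  # e.g. ("127.0.0.1", 8080)
    threading.Thread(target=pump, args=(client, upstream)).start()
    pump(upstream, client)
```

Subsequent connections would then hit the container directly, exactly as in the Pantheon handover described above.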

mc0e