I have a running docker container with a base image fedora:latest.

I would like to preserve the state of my running applications, but still update a few packages that got security fixes (e.g. gnutls, openssl, and friends) since I first deployed the container.

How can I do that without interrupting service or losing the current state?

So optimally I would like to get a bash/csh/dash/sh shell on the running container, or is there any fleet magic?

drahnr

2 Answers

It's important to note that updating packages inside a running container can cause the container itself to shut down.

For example, imagine that you have a Dockerfile for an Apache container which runs Apache in the foreground. Imagine that you attach a shell to your container (via docker exec) and you start updating. You have to apply a fix to Apache and, in the process of updating, Apache restarts. The instant that Apache shuts down, the container will stop. You're going to lose the current state of the applications. This is going to require extremely careful planning and some luck, and some updates will probably not be possible.
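As a sketch of the in-place approach (the container name "web" is hypothetical, and the guard is only there so the snippet is a no-op on a host without that container), updating just the security-affected packages would look like this:

```shell
# Sketch, assuming a running container named "web" (hypothetical) based on
# fedora:latest; uses dnf (yum on older Fedora releases).
# Caveat from above: if the update restarts the container's main process
# (PID 1), the container itself stops.
if docker inspect web >/dev/null 2>&1; then
  docker exec web dnf upgrade -y gnutls openssl
  status="updated"
else
  status="skipped"  # no Docker daemon or no such container on this host
fi
```

Library packages like gnutls and openssl usually survive this, but any update that restarts the foreground service is exactly the risky case described above.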

The better way is to rebuild the image the container is based on with all the appropriate updates, then re-run the container. There will be a (brief) interruption in service. However, to be able to save the state of your applications, you would need to design the images so that any state that needs to be preserved is stored persistently: either in the host file system via a mounted directory, or in a data container.
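As a sketch of that design (the image contents and paths here are hypothetical, using the Apache example from above): bake the package updates and config into the image, and declare the mutable data as a volume so it survives re-runs.

```dockerfile
FROM fedora:latest
# Rebuilding with updated packages replaces in-place patching:
RUN dnf -y update && dnf -y install httpd
# Site content lives on a volume, so it outlives the container:
VOLUME /var/www/html
CMD ["httpd", "-DFOREGROUND"]
```

Rebuilding with `docker build` and re-running with `docker run -v /srv/www:/var/www/html ...` then picks up the security fixes while the site data stays on the host.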

In short, if you're going to lose important information when your container shuts down, then your system is fragile & you're going to run into problems sooner or later. Better to redesign it so that everything that needs to be persistent is saved outside the container.

Kryten
  • Moving storage to the host is not possible for me. And if I refactor the container in such a way that storage is shifted to another container, this merely shifts the issue from frontend upgrades to backend upgrades, correct me if I am wrong. – drahnr Jan 05 '15 at 15:58
  • 1
    Without knowing what sort of state you're worried about losing, I can't really know. What I'm imagining is a web server or database server where you have the HTML/database files in the container plus some changes to the server config done after the container was started. In this case, if you move the data files out of the container and implement any config in the image, then updating becomes as simple as rebuilding the image with any changes and re-running it. See https://docs.docker.com/userguide/dockervolumes/ for info on data management. – Kryten Jan 05 '15 at 16:35
  • Also, I should note that what I'm suggesting is a pretty radical restructuring of your images & containers to bring them more in line with best practices. – Kryten Jan 05 '15 at 16:37
  • Well, thanks for the input. My major issue is with applications like `gitolite` which mingle data and services; splitting the two apart will surely cause some churn plus additional maintenance (well, I could just map the `git` user's home dir via `NFS` from another container, but heck, this is nasty and I will end up with two containers for each service - not really what I desire). – drahnr Jan 05 '15 at 16:47
  • If it's a privately hosted GitHub-like service you're looking for, have you tried Bitbucket? For 5 or fewer users it's free, even for private repositories. We use it extensively at work for our small team, & it's great. – Kryten Jan 07 '15 at 14:54
  • This was just an example, there are 10 more containers up & running on that machine. (And I know the bitbucket and github student plans) – drahnr Jan 07 '15 at 15:07
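The data-container setup discussed in the comments above could be sketched like this (the names `gitdata` and `gitolite-img` are hypothetical, and the snippet is guarded so it is a no-op on a host without Docker):

```shell
# Guarded so nothing runs on hosts without a Docker CLI.
if command -v docker >/dev/null 2>&1; then
  # A container that only holds the git user's home directory:
  docker create -v /home/git --name gitdata fedora:latest /bin/true
  # The service container mounts that volume; rebuilding and re-running
  # it later leaves the repositories in gitdata untouched:
  docker run -d --volumes-from gitdata --name gitolite gitolite-img
  status="ran"
else
  status="skipped"
fi
```

This is the two-containers-per-service shape drahnr objects to, but it is what lets the service container be replaced freely.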

If the Docker container already has a running bash, attach to it:

docker attach <containerIdOrName>

Otherwise, execute a new process in the same container (here: bash):

docker exec -it <containerIdOrName> bash
drahnr