10

On my current server I use unattended-upgrades to handle security updates automatically. But I'm wondering what people would suggest for working inside Docker containers. I have several Docker containers running, one for each service of my app. Should I have unattended-upgrades set up in each? Or maybe upgrade the images locally and push the upgraded images up? Any other ideas?

Does anyone have any experience with this in production maybe?
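For reference, this is roughly the kind of setup I mean — a minimal sketch assuming a Debian/Ubuntu base image (the base image tag and config values here are just examples):

```dockerfile
# Sketch: bake unattended-upgrades into a service image.
# Assumes a Debian/Ubuntu base; cron must be running inside the
# container for the periodic apt job to actually fire.
FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y unattended-upgrades cron && \
    rm -rf /var/lib/apt/lists/*

# Enable the periodic package-list refresh and the upgrade job
RUN printf 'APT::Periodic::Update-Package-Lists "1";\nAPT::Periodic::Unattended-Upgrade "1";\n' \
    > /etc/apt/apt.conf.d/20auto-upgrades
```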

stilliard
  • 762
  • 8
  • 27
  • Doesn't unattended-upgrades require a cron process? Are you also spinning up cron in each container? – lid Oct 24 '14 at 17:05
  • Hi @lid, currently I am, though I guess I could have a cron container that then executes the unattended-upgrades program directly on each server, as well as any other needed crons... – stilliard Oct 24 '14 at 17:42

2 Answers

2

I would rebuild the container. Containers are usually built to run a single app, and it makes little sense to update the supporting filesystem and all the included but unused packages in place.

Keeping the data in a separate volume lets you have a script that rebuilds the container and restarts it. It has the added advantage that loading another container from that image, or pushing it through a repository to another server, would have all the fixes already applied.
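A minimal sketch of such a rebuild-and-restart script, assuming a recent Docker with named-volume support (the image, container, and volume names are placeholders):

```shell
#!/bin/sh
# Sketch: rebuild the image (picking up security updates in the
# base layers) and swap the running container, keeping data in a
# separate volume so it survives the rebuild.
set -e

docker build --pull -t myapp:latest .   # --pull refreshes the base image
docker stop myapp || true               # ignore if not running
docker rm myapp || true
docker run -d --name myapp \
    -v myapp-data:/var/lib/myapp \
    myapp:latest
```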

gmuslera
  • 415
  • 2
  • 2
  • Hi @gmuslera, this makes sense, though what if you had many containers to maintain? Would you need to manually keep on top of security patches of each package used in each container and rebuild the relevant containers each time? Or would you automate this somehow? Maybe simply having it re-build every week? – stilliard Oct 20 '14 at 13:17
  • There are updates that "matter" (by your own subjective criteria) and updates that don't, and the policy to follow depends on that. If each container targets a single app, a security problem that actually affects you should be the exception, not the norm. YMMV – gmuslera Oct 20 '14 at 13:24
  • Thanks @gmuslera, that makes a lot of sense. I think I'll follow this plan but I'll leave my question open for a bit longer if that's ok, in case there's any other ideas, but otherwise I'll mark this one as the answer as its a great plan :) – stilliard Oct 20 '14 at 13:26
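For the weekly-rebuild idea mentioned in the comments above, a host-side cron entry is enough — a sketch, where the script path is a placeholder for whatever rebuild script you use:

```shell
# Sketch: host crontab entry that rebuilds and restarts the
# containers every Sunday at 04:00 (script path is an example)
0 4 * * 0  /usr/local/bin/rebuild-containers.sh
```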
2

I do updates automatically, as you did before. I currently have Stage containers and nothing in Prod yet. But there is no harm in applying updates to each container: some redundant network activity, perhaps, if you have multiple containers based on the same image, but harmless otherwise.

Rebuilding a container strikes me as unnecessarily time-consuming, and it involves a more complex process.

WRT time: the time to rebuild is added to the time needed to update, so it is 'extra' time in that sense. And if your container has start-up processes, those have to be repeated after every rebuild.

WRT complexity: on the one hand, you are simply running updates with apt. On the other, you are basically acting as an integration server: the more steps, the more that can go wrong.

Also, updating in place does not leave you dependent on a 'golden image', since the update process is easily repeatable.

And finally, since the kernel inside a container is never actually updated (containers share the host's kernel), you would never need to restart the container for a kernel update.
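The in-place update this answer describes is just the usual apt run inside the running container — a sketch, where the container name is a placeholder:

```shell
# Sketch: apply pending security updates inside a running
# container instead of rebuilding it (container name is an example)
docker exec myapp apt-get update
docker exec myapp unattended-upgrade --dry-run --debug  # preview first
docker exec myapp unattended-upgrade                    # then apply
```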

Rondo
  • 3,458
  • 28
  • 26
  • Hi @Rondo, you're right, I'm not worried about a bit of redundant network activity, and having the containers stay live/running rather than having to rebuild and rerun each time suits me much better. Thanks, I'll start out using unattended-upgrades then. – stilliard Oct 21 '14 at 08:58
  • 3
    The container will probably need to be restarted if the update changed shared libraries, since the libraries are usually only loaded when the running process is started. – jochen Mar 12 '15 at 17:27
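One way to check for the case this comment describes — processes still mapping old, since-deleted library files after an update — is a sketch like the following (`checkrestart` comes from the `debian-goodies` package; the container name is an example):

```shell
# Sketch: detect processes inside the container that still map
# deleted (i.e. updated) shared libraries
docker exec myapp sh -c \
    "apt-get install -y debian-goodies && checkrestart"

# Alternatively, without installing extra packages, list the
# /proc maps that still reference deleted files:
docker exec myapp sh -c \
    "grep -l '(deleted)' /proc/[0-9]*/maps 2>/dev/null || true"
```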