
Seeing as there have been more and more exploits recently (Shellshock, etc.), have people's opinions changed with regard to having automatic updates on production servers?

And what would be the best way to run these? Just chucking a line into a crontab with yum -y update?

Or using yum-cron or an alternative?
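
For concreteness, the two approaches I'm thinking of would look roughly like this (the weekly schedule is just a placeholder, and the yum-cron bits assume CentOS/RHEL 7):

    # Option 1: plain root crontab entry
    # m h dom mon dow   command
    0 3 * * 6   yum -y update

    # Option 2: yum-cron
    yum -y install yum-cron
    # then set "apply_updates = yes" in /etc/yum/yum-cron.conf
    systemctl enable --now yum-cron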

Thanks!

Shiv

2 Answers


You should only do automatic updates if you have a proper test environment and a way to update test before prod, with enough time in between to catch bugs. With so many variables at play, you need a solid test system.
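
A rough sketch of what that could look like with plain cron, where the hostnames and day offsets are made up and the point is only the test-before-prod gap:

    # On the TEST server: apply updates early in the week
    0 3 * * 1   yum -y update
    # On the PROD servers: apply the same updates several days later,
    # only after the test box has been checked for regressions
    0 3 * * 5   yum -y update

Note that unless prod pulls from an internal mirror frozen at the time the test box updated, it may pick up package versions test never saw, so a snapshotted local repository makes this staging much stricter.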

Jacob Evans

My experience is that if you meet all these conditions:

  • you use CentOS or RHEL (version 6 or 7)
  • you schedule a downtime window and reboot after each update (see the sketch after this list)
  • your application/service is able to stop and start automatically, surviving a reboot unattended
  • you keep the OS clean on the server:
    • no installing off-repo RPMs
    • no deprecated repos like rpmforge
    • no manual changes to files that are not supposed to be changed manually
    • no other things the vendor is unlikely to have predicted in their test environment
    • and you are sure previous admins didn't do any of this either
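
To make the downtime-window/reboot point concrete, the scheduled job can be as small as this; the script path and the Sunday 04:00 window are my own placeholders:

    #!/bin/bash
    # Hypothetical /usr/local/sbin/patch-and-reboot.sh, run from root's crontab
    # inside the agreed downtime window, e.g.:
    #   0 4 * * 0   /usr/local/sbin/patch-and-reboot.sh
    set -e                   # don't reboot if the update itself failed
    yum -y update            # apply all pending updates
    /sbin/shutdown -r now    # reboot after each update, as required above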

With all of these in place, the risk that something goes wrong during yum update is minimized. I haven't personally seen a service malfunction yet in such a setup. So you would indeed keep your OS more secure, at an acceptable(?) risk of lower availability.

As for Ubuntu, I have had no bad experiences there either, but I only run a couple of Ubuntu machines, so I don't have enough statistical data to vouch for them.

To further mitigate the risk, you could additionally:

  • employ a monitoring tool that checks whether the service is up and actually returning the desired results
  • schedule notifications so you can start (and finish) any repairs while still inside the downtime window
  • employ clustering (like pcs) and do rolling updates (see the sketch after this list):
    • update the secondary node, reboot it, make it primary, then repeat for the other node
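
For the clustering bullet, a rough sketch of the rolling-update flow with pcs on CentOS/RHEL 7; the node names are placeholders, and newer pcs releases spell the standby commands "pcs node standby/unstandby" instead:

    # Patch the current secondary first (node2 here).
    pcs cluster standby node2                      # move resources off node2
    ssh node2 'yum -y update && shutdown -r now'   # update and reboot it
    pcs cluster unstandby node2                    # let it rejoin the cluster

    # Fail over so node2 becomes primary, then repeat on node1.
    pcs cluster standby node1
    ssh node1 'yum -y update && shutdown -r now'
    pcs cluster unstandby node1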
kubanczyk