My experience is that if you meet all these conditions:
- you use CentOS or RHEL (version 6 or 7)
- you schedule a downtime window and reboot after each update
- your application/service is able to stop and start automatically across a reboot (see the first sketch after this list)
- you keep the OS clean on the server:
  - no installing off-repo RPMs
  - no deprecated repos like rpmforge
  - no manual changes to files that are not supposed to be changed manually
  - no other customizations the vendor is unlikely to have predicted in their test environment
- you are sure previous admins didn't do any of the above either (the second sketch after this list shows a couple of quick checks)
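
For the automatic stop/start condition, here is a minimal sketch of what I mean; the service name `myapp` is a placeholder for your own unit/init script:

```
# CentOS/RHEL 7 (systemd): start the service on boot, stop it cleanly on shutdown
systemctl enable myapp.service
systemctl is-enabled myapp.service   # verify

# CentOS/RHEL 6 (SysV init): same idea with chkconfig
chkconfig myapp on
chkconfig --list myapp               # verify
```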
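And to check whether previous admins kept the OS clean, a couple of quick (imperfect) checks; they only flag suspects, you still have to interpret the output:

```
# Installed packages that don't belong to any enabled repo (off-repo RPMs)
yum list extras

# Verify installed files against the RPM database; changed files are flagged
# (config files legitimately differ, so not every hit is a problem)
rpm -Va
```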
With all these conditions met, the risk that something goes wrong during a `yum update` is minimized. I haven't personally seen a service malfunction in such a situation yet. So you would indeed keep your OS more secure, at an acceptable(?) risk of lower availability.
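The whole downtime-window routine then boils down to something like this (a sketch, not a hardened script):

```
#!/bin/bash
# Apply all pending updates, then reboot, as described above.
set -e
yum -y update
# On EL7 with yum-utils you could instead reboot only when required:
#   needs-restarting -r || shutdown -r +1 "reboot after update"
shutdown -r +1 "Rebooting after yum update (scheduled downtime window)"
```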
As for Ubuntu, I have had no bad experiences there, but I only run a couple of Ubuntu machines, so I don't have enough statistical data to vouch for it.
To further mitigate the risk, you could additionally:
- employ a monitoring tool that checks whether the service is actually working and returning the expected result (see the first sketch below)
- schedule notifications so you can start and finish repairs while still within the downtime window
- employ clustering (like `pcs`): update the secondary node, reboot it, make it the primary, then repeat on the other node (see the second sketch below)
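
For the monitoring point, even a trivial smoke test run right after the reboot is better than only checking that the process is up; `URL` and `EXPECTED` here are placeholders for your own service:

```
#!/bin/bash
# Post-update smoke test: pass only if the service returns the expected payload.
URL="http://localhost:8080/health"   # placeholder endpoint
EXPECTED="OK"                        # placeholder expected response

if curl -fsS --max-time 5 "$URL" | grep -q "$EXPECTED"; then
    echo "service healthy"
else
    echo "service NOT healthy - fix it before the downtime window closes" >&2
    exit 1
fi
```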
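And for the clustering point, a rolling update with `pcs` looks roughly like this, assuming a two-node Pacemaker cluster on EL7 (node names are placeholders):

```
# On the cluster manager: move resources off node2
pcs cluster standby node2

# On node2: update and reboot
yum -y update
reboot

# Back on the manager, once node2 is up again: let it rejoin
pcs cluster unstandby node2

# Repeat the same steps for node1; resources fail over to the updated node2
```

This way the service stays available during the whole update, at the cost of running and maintaining the cluster.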