It Depends
In my experience with CentOS, it's pretty safe since you're only using the CentOS base repositories.
Should you expect failed updates once in a while... yes... on the same level that you should expect a failed hard drive or a failed CPU once in a while. You can never have too many backups. :-)
The nice thing about automated updates is that you get patched (and therefore more secure) faster than you would doing it manually.
Manual patches always seem to get pushed off or treated as lower priority than so many other things, so if you're going to go the manual route, SCHEDULE TIME ON YOUR CALENDAR to do it.
I've configured many machines to do auto yum updates (via cron job) and have rarely had an issue. In fact, I don't recall ever having an issue with the BASE repositories. Every problem I can think of (off the top of my head, in my experience) has come from a 3rd party repository.
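The simplest version of that kind of cron job looks something like this (the schedule, log path, and unattended -y flag here are just illustrative, not my exact setup); the yum-cron package is another way to get essentially the same behavior:

    # /etc/crontab entry (illustrative sketch): apply updates nightly from the
    # configured repositories and keep a log of what happened.
    # Time, log path, and the -y flag are assumptions -- adjust to taste.
    30 3 * * * root /usr/bin/yum -y update >> /var/log/yum-auto-update.log 2>&1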
That being said... I do have several machines that I update MANUALLY. For things like database servers and other EXTREMELY critical systems I like to have a "hands on" approach.
The way I personally decided was like this... I think through the "what if" scenario: how long would it take to either rebuild or restore from a backup, and what (if anything) would be lost?
In the case of multiple web servers... or servers whose content doesn't change much... I go ahead and do auto-updates because the amount of time to rebuild/restore is minimal.
In the case of critical database servers, etc... I schedule time once a week to look them over and manually patch them... because rebuilding/restoring them would be far more time consuming.
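The weekly "hands on" pass doesn't have to be elaborate; something along these lines is usually enough (standard yum commands, but the exact steps are just a sketch):

    yum check-update              # list pending updates before touching anything
    yum update                    # review the transaction summary, then confirm
    tail -n 50 /var/log/yum.log   # double-check what actually got installed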
Depending on what servers YOU have in your network and how your backup/recovery plan is implemented, your decisions may be different.
Hope this helps.