0

Is it considered bad practice to use something like Jenkins or Gitlab CI Pipelines to replace cron jobs?

There are two downsides I can think of: giving the CI server access to all servers, and creating a single point of failure (if the CI server is down, no scheduled tasks can run).

Typically the cron jobs running on our example servers are associated with a git repository.

This is in an effort to remove the need for developers to connect to servers to check and/or manage crons and cron failures, and to manage all of these in one central place.

Would this be better placed in a configuration management tool (Puppet/Salt/Chef/ansible)?

  • Jenkins scheduled jobs are only added to the queue at the scheduled time, so execution could be delayed. You still need connectivity to the remotes, etc. cron is local, so if the server is up, it will run, but you have a harder time investigating logging, etc. – Ian W Aug 23 '20 at 11:05

1 Answer

0

Make your own design decisions about which tools solve these problems. There is no one answer, only a choice between various trade-offs.

No one should need a shell on the servers, but the cron jobs still need to be managed. So, write automation to install and verify cron jobs on each host. Maintain that automation in version control, and trigger it via CI.
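As a minimal sketch of that install-and-verify step (directory names and the job file below are illustrative assumptions, not a prescription), a CI job could sync one-file-per-job cron definitions from the repo checkout into `/etc/cron.d` and fail if the deployed files drift from the repo:

```shell
#!/bin/sh
# Sketch: sync cron definitions kept in a git repo into a cron drop-in
# directory, so the CI pipeline (not an interactive shell) manages them.
# The directory layout is an assumption for illustration.

sync_cron_dir() {
    src="$1"   # repo checkout containing one file per cron job
    dst="$2"   # e.g. /etc/cron.d (files there must be root-owned, mode 0644)
    for f in "$src"/*; do
        # install(1) sets the mode explicitly; cron ignores
        # group/world-writable files in /etc/cron.d
        install -m 0644 "$f" "$dst/$(basename "$f")"
    done
    # Verify the deployment: a non-zero diff fails the CI job.
    diff -r "$src" "$dst"
}

# Production usage (run as root from the CI runner or its agent):
# sync_cron_dir ./cron.d /etc/cron.d
```

The verify step matters as much as the install step: it is what replaces a developer logging in to eyeball the crontab.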

Availability and performance of the CI server may be a concern. Address this by implementing high availability and by scaling out.

cron is a basic scheduler, and while it reliably runs jobs on the minute, it is limited. Consider implementing a system with more features, including logging and diagnostics.
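One such system, as a sketch: a systemd timer, which records each run's output and exit status in the journal. The unit names and schedule below are assumptions for illustration:

```ini
# /etc/systemd/system/nightly-report.service  (name is an assumption)
[Unit]
Description=Nightly report job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nightly-report.sh

# /etc/systemd/system/nightly-report.timer
[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true   ; run at boot if the scheduled time was missed

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now nightly-report.timer`; logs and exit codes are then available via `journalctl -u nightly-report.service`, which addresses the "harder time investigating logging" problem noted in the comments.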

John Mahowald
  • Jenkins also only runs tasks to the minute, cron-like, and then only adds the job to the queue, so execution could be delayed. – Ian W Aug 23 '20 at 11:08