I have a cron job set up to run a task every 5 minutes. But sometimes the task takes more than 5 minutes to run, so cron starts another copy of the task concurrently. Is there a way, in whenever or in cron, to make a run wait for the previous job to finish before starting another copy?

I'd say that you'd do this in the script that cron executes rather than in cron itself, but if cron can do this it might be an easy solution (though there are too many border cases; I suppose you would want to wait, unless there's already another task waiting, in which case it is better to skip a round) – Jasper Oct 30 '12 at 00:38
5 Answers
AFAIK you can't do this using whenever itself, but you can handle it in your script. This can be done with one of the following solutions:
Handle it in the database using a flag (or some information such as start time, end time, and success status) that is set when the job starts and cleared when the job ends; check this flag every time the job starts to see whether the previous job has finished. Make sure to handle exceptions, because if the process dies before clearing the flag, no other process will be able to run. (A minimal sketch of this approach appears at the end of this answer.)
Let the OS act as the flag for you by creating a temp file and holding an exclusive lock on it for the current process, so that no other process can take an exclusive lock on the same file until the current process finishes; when the process finishes it releases the lock and lets other processes run. To do this, include the following at the top of your cron job:
file = File.new("cron.lock", "a")
can_lock = file.flock(File::LOCK_EX | File::LOCK_NB)
if can_lock == false
  exit 1
else
  # do whatever you want
end
The advantage of the second method is that even if the process is terminated unexpectedly, the lock is released automatically by the OS.
I chose the first method, as I needed to start another process if the previous one had finished or had exceeded a specific time limit.
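To make the first (database flag) approach concrete, here is a minimal sketch, assuming a Rails app with a hypothetical cron_locks table (name, started_at, finished_at) and a CronLock model; the model, columns, and stale timeout are my assumptions, not part of the original answer:

# Sketch only: CronLock, its columns, and the stale timeout are assumptions.
class CronLock < ActiveRecord::Base; end

def with_db_flag(name, stale_after: 30.minutes)
  lock = CronLock.find_or_create_by!(name: name)
  # Consider the flag "busy" only if a run started, has not finished,
  # and does not look stale (i.e. the previous run probably died).
  busy = lock.started_at.present? &&
         lock.finished_at.nil? &&
         lock.started_at > stale_after.ago
  return if busy # skip this run; another copy is still working

  # Note: this check-then-update is not race-proof; a real version would
  # add a database-level lock or unique constraint around it.
  lock.update!(started_at: Time.current, finished_at: nil)
  begin
    yield
  ensure
    # Clear the flag even if the task raises, so later runs are not blocked.
    lock.update!(finished_at: Time.current)
  end
end

with_db_flag('service_cleaning') do
  # your task code
end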

Use Filesystem or Database Locks
You can't prevent overlap using cron or similar--at least, not directly--but you have a number of choices. You can check the process list for a running task before spawning a new one, but this is still susceptible to race conditions. Some better choices are:
- Use semaphores or file locks in your shell script. flock and lockfile are great shell utilities for this purpose; see the crontab sketch below.
- If your cron job involves changes to the database, use a table with row-level locking or a semaphore column to prevent changes while another process is running.
- Increase your interval between cron jobs such that your process has time to finish before the next run. Even if you use one of the other options, this is probably a good idea.
- Make your script idempotent, so that concurrent operations do not impact one another.
- See if a queue or singleton process is a better option for you than a cron job.
There's no perfect answer for this sort of issue. A lot depends on what your script is doing, and the overall architecture of your system. Your mileage will vary.
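For example, a crontab entry wrapped in flock can refuse to start a second copy while the first is still running (a minimal sketch; the lock path and command are placeholders, not from the original answer):

# -n (non-blocking): if the lock is already held, skip this run instead of queuing up.
*/5 * * * * /usr/bin/flock -n /var/lock/my_task.lock /path/to/my_task.sh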

Here's my variant with a file lock for Rails rake tasks.
Put this in your rake task file (inside a namespace, so it won't clash with other rake tasks):
def cron_lock(name)
  path = Rails.root.join('tmp', 'cron', "#{name}.lock")
  mkdir_p path.dirname unless path.dirname.directory?
  file = path.open('w')
  return if file.flock(File::LOCK_EX | File::LOCK_NB) == false
  yield
end
usage:
cron_lock 'namespace_task_name' do
  # your code
end
full example:
namespace :service do
  def cron_lock(name)
    path = Rails.root.join('tmp', 'cron', "#{name}.lock")
    mkdir_p path.dirname unless path.dirname.directory?
    file = path.open('w')
    return if file.flock(File::LOCK_EX | File::LOCK_NB) == false
    yield
  end

  desc 'description'
  task cleaning: :environment do
    cron_lock 'service_cleaning' do
      # your code
    end
  end
end

I think the best option is some kind of lock (using a file, the database, etc.), but when you use locks you need to implement error handling in your process very carefully; otherwise, if your lock is never released, cron will never run the process again.
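As a minimal sketch of that error handling, assuming a hand-rolled lock file that you must delete yourself (the path and structure are my illustration, not the answerer's code):

require 'fileutils'

LOCK_PATH = '/tmp/my_task.lock' # placeholder path

# Skip this run if a previous run still holds the lock file.
exit 1 if File.exist?(LOCK_PATH)

FileUtils.touch(LOCK_PATH)
begin
  # ... the actual task ...
ensure
  # ensure runs even when the task raises, so the lock file is removed
  # and later cron runs are not blocked forever. It will NOT run if the
  # process is killed outright, which is why flock-style OS locks are safer.
  FileUtils.rm_f(LOCK_PATH)
end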

With whenever you can define custom job types that wrap the command in flock. In your schedule.rb:

# usage: script_with_lock 'script_name', lock: 'lock_name'
job_type :script_with_lock, "cd :path && :environment_variable=:environment flock -n /var/lock/:lock.lock bundle exec script/:task :output"

# usage: runner_with_lock 'ruby code', lock: 'lock_name'
job_type :runner_with_lock, "cd :path && flock -n /var/lock/:lock.lock script/rails runner -e :environment ':task' :output"
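For completeness, a sketch of how such a job type might be used inside the whenever schedule; the task and lock names are placeholders, not from the original answer:

# config/schedule.rb
every 5.minutes do
  # flock -n makes an overlapping run exit immediately instead of piling up.
  script_with_lock 'cleanup', lock: 'cleanup'
end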
