
Is there a way to get the retry count for the current job?

I want the job to stop, not crash, after x retries. I would like to check the retry count in the perform method so I can simply return if the retry count equals x.

def perform(args)
  return if retry_count > 5
  ...
end

Using Sidekiq 2.12.

Edit

I (not the OP) have the same question but for a different reason. If the job is being retried I want to do additional sanity checking to make sure the job is needed and to quit retrying if it is no longer expected to succeed because something external changed since it was queued.

So, is there a way to get the retry count for the current job? The current answers only suggest ways you can get around needing it or can get it from outside the job.

Old Pro
Cimm

5 Answers


This can be accomplished by adding a Sidekiq server middleware that sets the job's `retry_count` as an instance variable on the worker instance.

Add a middleware (in Rails this usually goes in a file in the `/config/initializers/` folder) like so:

class SidekiqMiddleware
  def call(worker, job, queue)
    worker.retry_count = job['retry_count'] if worker.respond_to?(:retry_count=)
    yield
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add SidekiqMiddleware
  end
end

In your job:

include Sidekiq::Worker
attr_writer :retry_count

def retry_count
  # falls back to 0 when the middleware has not set anything (i.e. on the first run)
  @retry_count || 0
end

def perform(args)
  return if retry_count > 5
  ...
end
Brian Underwood
Derek
  • Which folder do I put the middleware class in? Can I put it in the same place where I put "Sidekiq.configure_server"? – Henley Oct 06 '16 at 16:45
  • Why would I be getting this error `NoMethodError: undefined method `retry_count=' for #` – Max Rose-Collins Dec 15 '16 at 19:28
  • Actually, while this approach is valid, the code for the `retry_count` reader is not correct because it will return, successively, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. The reason for the double 0 is that, at middleware level, the `retry_count` key is not present upon the first execution, and when the key is added upon the first retry (aka the second execution of the job) its value is `0` (not `1` as this code assumes). – Dorian Jan 10 '17 at 08:58
  • It returns `nil`, 0, 1, 2, 3, 4, ... So `if msg['retry_count'].nil? then retry_count = 0 else retry_count = msg['retry_count'] + 1 end` – Vikrant Chaudhary Apr 19 '17 at 14:54
  • This answer originally checked `respond_to?(:retry_count)`, but I changed it to `respond_to?(:retry_count=)` because that is the method it's actually using. That could be part of why @MaxRose-Collins was getting a `NoMethodError` – Brian Underwood Jan 02 '20 at 10:32

You don't need to deal with this logic directly to accomplish what you want. Simply add some options to your worker, as shown below (note the `sidekiq_options`), based on your comment below about preventing Sidekiq from moving the job to the dead jobs queue:

class MyWorker
  include Sidekiq::Worker
  sidekiq_options :retry => 5, :dead => false

  def perform
    # do some stuff
  end
end

The job should then just retry 5 times and fail gracefully. Also, if you want to execute a code block once the 5 retries are spent, the worker has a hook called `sidekiq_retries_exhausted` where you could do some custom logging, etc.
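
For reference, here is a minimal sketch of that hook on the same worker; the block receives the job hash (newer Sidekiq versions also pass the exception as a second argument):

class MyWorker
  include Sidekiq::Worker
  sidekiq_options :retry => 5, :dead => false

  # called once, after the final retry has failed; msg is the job hash
  sidekiq_retries_exhausted do |msg|
    Sidekiq.logger.warn("Giving up on #{msg['class']} #{msg['jid']} with args #{msg['args']}")
  end

  def perform
    # do some stuff
  end
end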

blotto
  • Thanks, but this will end up as a *failed* job after the 5 tries, and I want to try 5 times and, if it didn't work, just stop without raising errors. That's not the same. `sidekiq_retries_exhausted` is the same thing: it's already too late, the job has *failed*. I want to stop it *before* it fails. – Cimm Apr 16 '14 at 10:14
  • Maybe I can use `sidekiq_retries_exhausted` to prevent Sidekiq from moving the job to the dead jobs queue? – Cimm Apr 16 '14 at 10:16
  • oh , turns out this is really simple in Sidekiq 3.0, answer updated – blotto Apr 17 '14 at 18:49
  • Great, thanks for that. Will upgrade Sidekiq next week and report back! – Cimm Apr 18 '14 at 07:54
  • documentation link: https://github.com/mperham/sidekiq/wiki/Error-Handling – Constantin De La Roche Oct 06 '21 at 08:23

You can access the retries with the Sidekiq API:

https://github.com/mperham/sidekiq/wiki/API#retries

Find the job you need and use `job['retry_count']` to get the number of retries.
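
A rough sketch of that approach, run from outside the job (`MyWorker` here is a placeholder for your own worker class):

require 'sidekiq/api'

Sidekiq::RetrySet.new.each do |job|
  next unless job.klass == 'MyWorker'
  puts "#{job.jid} has been retried #{job['retry_count']} time(s)"
end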

fuzzyalej
aledalgrande
  • Thanks, but I need access to the retry count from *within* the job *while* it's running. The job is no longer in the RetrySet at that time as it's the active, running one. – Cimm Apr 15 '14 at 09:28

My use case was to avoid scheduling multiple jobs in case of an exception or downtime during deployment. For this I needed the retry_count. The above solutions didn't work for Sidekiq ~> 5.0.4; here is my tested solution:

# config/initializers/sidekiq.rb

# define your middleware
module Sidekiq::Middleware::Server
  class SetRetryCountMiddleware
    def call(worker, job_params, _queue)
      retry_count = job_params["retry_count"]
      worker.instance_variable_set(:@retry_count, retry_count)
      yield
    end
  end
end

# add your defined middleware
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add Sidekiq::Middleware::Server::SetRetryCountMiddleware
  end
  config.redis = {url: "redis://sidekiq:6379/0"}
  config.logger.level = Logger::INFO
end

And in your worker:

class YetAnotherWorker < Base
  sidekiq_options :queue => :critical, :retry => true

  def perform(args)
    begin
      # lines that might result in exception
    rescue => exception
      logger.warn("#{exception.class}")
      raise(exception)
    ensure
      # the line below ensures the job is scheduled only once, avoiding duplicate jobs if the lines above raise an error
      schedule_next_run({my_key: "my_value"})
    end
  end

  def schedule_next_run(args)
    YetAnotherWorker.perform_at(Time.now + 7.days, args) if first_run
  end

  def first_run
    @retry_count.nil?
  end

end

Also, the `retry_count` key isn't available in `job_params` on the first run, so the count will look like `nil`, 0, 1, 2, ...
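
If you prefer the familiar 0, 1, 2, ... sequence, a small illustrative helper (the name is illustrative, not part of the middleware above) can normalize the raw value:

def normalized_retry_count
  # nil on the first run, 0 on the first retry, so shift the raw value by one
  @retry_count.nil? ? 0 : @retry_count.to_i + 1
end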

Lubdhak
  • if you don't like the retry count being off by 1 or starting with `nil` you can poach the correction factor from my answer: `(count.nil? ? 0 : 1).then { |correction_factor| count.to_i + correction_factor }` – SMAG Sep 02 '23 at 04:41

I believe this can be done without middleware, though I'm not sure there is any benefit to this method over the middleware approaches. However, here is what I did:

NOTE: This approach assumes you are using Redis to queue your jobs

I give my jobs access to Redis via:

def redis
  # You may need to disable SSL verification, depending on your environment; if so, use:
  # @redis ||= Redis.new(ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE })
  @redis ||= Redis.new
end

# You can do this directly on your job, in your BaseJob or in a module / concern, reader's choice on implementation.
def retry_count
  # retry_count is not present on the first attempt
  redis_work_json.dig("payload","retry_count").then do |count|
    # the correction factor will give us the retry counts that you would expect, since the actual counts lag by 1, as described in the other answers
    # NOTE: nil.to_i => 0
    (count.nil? ? 0 : 1).then { |correction_factor| count.to_i + correction_factor }
  end
end

Helper Methods:

# convert the Redis data for the current job to JSON
def redis_work_json
  # we may be in a race condition with Redis to save the key we are looking for,
  # so poll briefly until it shows up (sleep takes seconds)
  sleep(0.1) until redis.keys.any? { |key| key.include? ":work" }

  redis.keys.each do |key|
    next unless key.include? ":work"

    return nested_redis_value_with_jid(key).then do |value|
      next if value.nil?

      json_from(value)
    end
  end
end

# find the data stored in Redis for the current Job
def nested_redis_value_with_jid(key)
  # the work key will have a hash value so it needs to be fetched via Redis::Commands::Hashes
  # hvals will skip the random key that Redis nested this hash in
  # find the hash value that matches this job's jid
  redis.hvals(key).find { |v| v.include?(jid) }
end

def flatten_json_str(str)
  # This may seem gnarly but it allows `JSON.parse` to work at its full potential
  # instead of manually using JSON.parse on nested JSON strings in a loop
  str.gsub(":\"{", ":{").gsub("}\",", "},").delete("\\")
end

def json_from(value)
  JSON.parse(flatten_json_str(value))
end
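
With those helpers available on the job, the guard from the question becomes a sketch like this (using the limit of 5 from the question):

def perform(args)
  # stop quietly once the job has been retried more than 5 times
  return if retry_count > 5
  # ... do the actual work ...
end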
SMAG