Now if your ExampleRetryJob fails, it will be retried 3 times, with a 60 second
delay between attempts.
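A job configured that way might look like the following sketch (the queue name
and `perform` body are placeholders):

```ruby
require 'resque-retry'

class ExampleRetryJob
  extend Resque::Plugins::Retry
  @queue = :example_queue

  @retry_limit = 3  # retry up to 3 times...
  @retry_delay = 60 # ...waiting 60 seconds between attempts

  def self.perform(*args)
    # do the actual work here
  end
end
```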

For more explanation and examples, please see the remaining documentation.

Failure Backend & Resque Web Additions

Let's say you're using the Redis failure backend of Resque (the default).
Every time a job fails, the failure queue is populated with the job and
exception details.

Normally this is useful, but if your jobs retry... it can cause a bit of a mess.

For example: given a job that retried 4 times before completing successfully,
you'll have a lot of failures for the same job, and you won't be sure whether it
actually completed successfully just by using the resque-web interface.

Failure Backend

MultipleWithRetrySuppression is a multiple failure backend, with retry
suppression.
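Switching to it typically happens in your Resque initializer; a sketch
(assuming you still want the standard Redis backend recording failures
underneath):

```ruby
require 'resque-retry'
require 'resque/failure/redis'

# Record failures in Redis as usual, but suppress failure entries
# for attempts that are about to be retried.
Resque::Failure::MultipleWithRetrySuppression.classes = [Resque::Failure::Redis]
Resque::Failure.backend = Resque::Failure::MultipleWithRetrySuppression
```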

Using the 'resque-web' command with a configuration file:

Another alternative is to use resque's built-in 'resque-web' command with the
additional resque-retry tabs. In order to do this, you must first create a
configuration file. For the sake of this example we'll create the configuration
file in a 'config' directory, and name it 'resque_web_config.rb'. In practice
you could rename this configuration file to anything you like and place in your
project in a directory of your choosing. The contents of the configuration file
would look like this:
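For example, the file might contain little more than the requires that pull in
the extra resque-retry tabs (a sketch; adjust to your own setup):

```ruby
# config/resque_web_config.rb
require 'resque-retry'
require 'resque-retry/server'
```

You would then start the dashboard with something like
`resque-web config/resque_web_config.rb`.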

This retries the job once and causes the worker that failed to sleep for 5
seconds after requeuing the job. If there are multiple workers in the system,
this allows the job to be retried immediately while the original worker heals
itself. For example, failed jobs may cause other (non-worker) OS processes to
die; a system monitor such as monit or god can fix the server
while the job is being retried on a different worker.
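The settings being described above would look something like this (the class
and queue names are illustrative):

```ruby
class ExampleSleepJob
  extend Resque::Plugins::Retry
  @queue = :example_queue

  @retry_limit = 1         # retry the job once
  @sleep_after_requeue = 5 # the failing worker sleeps 5s after requeuing

  def self.perform(*args)
    heavy_lifting
  end
end
```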

@sleep_after_requeue is independent of @retry_delay. If you set both, they
both take effect.

You can override the sleep_after_requeue method to set the sleep value
dynamically.
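For example, to back off a little longer after each failed attempt (a sketch;
`retry_attempt` is the plugin's counter of attempts so far):

```ruby
class DynamicSleepJob
  extend Resque::Plugins::Retry
  @queue = :example_queue

  # sleep longer as the attempt count grows
  def self.sleep_after_requeue
    retry_attempt * 2
  end

  def self.perform(*args)
    heavy_lifting
  end
end
```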

The first delay will be 0 seconds, the second will be 60 seconds, and so on.
Again, tweak to your own needs.
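A backoff strategy matching the delays described above might look like this
(the later values are just a starting point to tune):

```ruby
class ExampleBackoffJob
  extend Resque::Plugins::Retry
  @queue = :example_queue

  # delays between attempts: 0s, 1m, 10m, 1h, 3h, 6h
  @backoff_strategy = [0, 60, 600, 3600, 10800, 21600]

  def self.perform(*args)
    heavy_lifting
  end
end
```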

The number of retries is equal to the size of the backoff_strategy array,
unless you set retry_limit yourself.

The delay values will be multiplied by a random Float value between
retry_delay_multiplicand_min and retry_delay_multiplicand_max (both have a
default of 1.0). The product (delay_multiplicand) is recalculated on every
attempt. This feature can be useful if you have a lot of jobs fail at the same
time (e.g. rate-limiting/throttling or connectivity issues) and you don't want
them all retried on the same schedule.
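To spread retries out, you might configure something like this (a sketch; with
these values each delay is multiplied by a random factor between 1.0 and 3.0):

```ruby
class ExampleJitterJob
  extend Resque::Plugins::Retry
  @queue = :example_queue

  @retry_delay = 60
  # each attempt waits between 60 * 1.0 and 60 * 3.0 seconds
  @retry_delay_multiplicand_min = 1.0
  @retry_delay_multiplicand_max = 3.0

  def self.perform(*args)
    heavy_lifting
  end
end
```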

Retry Specific Exceptions

The default will allow a retry for any type of exception. You may change it so
only specific exceptions are retried using retry_exceptions:
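For example (a sketch; `NetworkError` stands in for an exception class defined
in your application):

```ruby
class DeliverWebHook
  extend Resque::Plugins::Retry
  @queue = :web_hooks

  @retry_limit = 10
  @retry_exceptions = [NetworkError] # only retry on NetworkError (or subclasses)

  def self.perform(url, hook_id, hmac_key)
    heavy_lifting
  end
end
```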

The above modification will only retry if a NetworkError (or subclass)
exception is thrown.

You may also want to specify different retry delays for different exception
types. You may optionally set @retry_exceptions to a hash where the keys are
your specific exception classes to retry on, and the values are your retry
delays in seconds or an array of retry delays to be used similar to exponential
backoff.
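Such a hash could look like this (a sketch; `NetworkError` is an
application-defined exception class):

```ruby
class DeliverSMS
  extend Resque::Plugins::Retry
  @queue = :mt_messages

  # NetworkError: retry 30s later.
  # SystemCallError: retry 120s later, then 240s later.
  @retry_exceptions = { NetworkError => 30, SystemCallError => [120, 240] }

  def self.perform(mt_id, mobile_number, message)
    heavy_lifting
  end
end
```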

In the above example, Resque would retry any DeliverSMS jobs which throw a
NetworkError or SystemCallError. If the job throws a NetworkError it
will be retried 30 seconds later, if it throws SystemCallError it will first
retry 120 seconds later then subsequent retry attempts 240 seconds later.

Fail Fast For Specific Exceptions

The default will allow a retry for any type of exception. You may change
it so specific exceptions fail immediately by using fatal_exceptions:
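For example (a sketch; `NetworkError` again stands in for one of your own
exception classes):

```ruby
class DeliverSMS
  extend Resque::Plugins::Retry
  @queue = :mt_messages

  @fatal_exceptions = [NetworkError] # fail immediately on NetworkError

  def self.perform(mt_id, mobile_number, message)
    heavy_lifting
  end
end
```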

In the above example, Resque would retry any DeliverSMS jobs that throw any
type of error other than NetworkError. If the job throws a NetworkError it
will be marked as "failed" immediately.

Custom Retry Criteria Check Callbacks

You may define custom retry criteria callbacks:

```ruby
class TurkWorker
  extend Resque::Plugins::Retry
  @queue = :turk_job_processor
  @retry_exceptions = [NetworkError]

  retry_criteria_check do |exception, *args|
    if exception.message =~ /InvalidJobId/
      false # don't retry if we got passed an invalid job id.
    else
      true # it's okay for a retry attempt to continue.
    end
  end

  def self.perform(job_id)
    heavy_lifting
  end
end
```

Similar to the previous example, this job will retry if either a
NetworkError (or subclass) exception is thrown or any of the callbacks
return true.

You can also register a retry criteria check with a Symbol if the method is
already defined on the job class:
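A sketch of the Symbol form (class and method names are illustrative; note the
method signature, which matters as explained below):

```ruby
class AlwaysRetryJob
  extend Resque::Plugins::Retry
  @queue = :always_retry

  retry_criteria_check :yes

  # must accept the exception and the job arguments
  def self.yes(exception, *args)
    true
  end

  def self.perform(*args)
    heavy_lifting
  end
end
```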

Use @retry_exceptions = [] to only use your custom retry criteria checks
to determine if the job should retry.

NB: Your callback must be able to accept the exception and job arguments as
passed parameters, or else it cannot be called. e.g., in the example above,
defining def self.yes; true; end would not work.

Retry Arguments

You may override retry_args, which is passed the current job arguments, to
modify the arguments for the next retry attempt.

```ruby
class DeliverViaSMSC
  extend Resque::Plugins::Retry
  @queue = :mt_smsc_messages

  # retry using the emergency SMSC.
  def self.retry_args(smsc_id, mt_message)
    [999, mt_message]
  end

  def self.perform(smsc_id, mt_message)
    heavy_lifting
  end
end
```

Alternatively, if you require finer control of the args based on the exception
thrown, you may override retry_args_for_exception, which is passed the
exception and the current job arguments, to modify the arguments for the next
retry attempt.

```ruby
class DeliverViaSMSC
  extend Resque::Plugins::Retry
  @queue = :mt_smsc_messages

  # retry using the emergency SMSC.
  def self.retry_args_for_exception(exception, smsc_id, mt_message)
    [999, mt_message + exception.message]
  end

  def self.perform(smsc_id, mt_message)
    heavy_lifting
  end
end
```

Job Retry Identifier/Key

The retry attempt is incremented and stored in a Redis key. The key is built
using the retry_identifier. If you have a lot of arguments or really long
ones, you should consider overriding retry_identifier to define a more precise
or loose custom retry identifier.

The default identifier is just your job arguments joined with a dash '-'.
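For instance, a job whose second argument is a large payload could key retries
off the first argument only (a sketch; the names here are hypothetical):

```ruby
class WebHookJob
  extend Resque::Plugins::Retry
  @queue = :web_hooks

  # identify retries by the hook id alone, ignoring the (long) payload
  def self.retry_identifier(hook_id, payload)
    hook_id.to_s
  end

  def self.perform(hook_id, payload)
    heavy_lifting
  end
end
```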

By default the key uses this format:
'resque-retry:<job class name>:<retry_identifier>'.

The expiry timeout is "pushed forward" or "touched" after each failure to
ensure it doesn't expire too soon.

Try Again and Give Up Callbacks

Resque's on_failure callbacks are always called, regardless of whether the
job is going to be retried or not. If you want to run a callback only when the
job is being retried, you can add a try_again_callback:
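A sketch showing a try_again_callback alongside its counterpart,
give_up_callback (the log messages and class names are illustrative):

```ruby
class WebHookJob
  extend Resque::Plugins::Retry
  @queue = :web_hooks

  # runs only when the job is about to be retried
  try_again_callback do |exception, *args|
    Resque.logger.warn("Retrying after #{exception.class}: #{args.inspect}")
  end

  # runs only when the job has exhausted its retries
  give_up_callback do |exception, *args|
    Resque.logger.error("Giving up after #{exception.class}: #{args.inspect}")
  end

  def self.perform(*args)
    heavy_lifting
  end
end
```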

You can register multiple callbacks, and they will be called in the order that
they were registered. You can also set callbacks by setting
@try_again_callbacks or @give_up_callbacks to an array where each element
is a Proc or Symbol.

Warning: Make sure your callbacks do not throw any exceptions. If they do,
subsequent callbacks will not be triggered, and the job will not be retried
(if it was trying again). The retry counter also will not be reset.

Ignored Exceptions

If there is an exception for which you want to retry, but you don't want it to
increment your retry counter, you can add it to @ignore_exceptions.

One use case: Restarting your workers triggers a Resque::TermException. You
may want your workers to retry the job that they were working on, but without
incrementing the retry counter.
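That use case could be sketched like this (assuming `Resque::TermException` is
also listed in `@retry_exceptions` so it is retryable at all):

```ruby
class ExampleJob
  extend Resque::Plugins::Retry
  @queue = :example_queue

  @retry_exceptions = [Resque::TermException]
  # retries triggered by TermException won't count against the retry limit
  @ignore_exceptions = [Resque::TermException]

  def self.perform(*args)
    heavy_lifting
  end
end
```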