priority (int) – The actor’s global priority. If two tasks have
been pulled by a worker concurrently, the one with the higher
priority is processed first. Lower numbers represent higher
priorities.
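The ordering rule (lower number wins) can be sketched with a plain min-heap; the tuples and task names below are illustrative, not Dramatiq’s internal representation:

```python
import heapq

# Sketch only: a min-heap orders pending tasks so that the task
# with the LOWER priority number is processed first.
pending = []
heapq.heappush(pending, (10, "low-priority-task"))
heapq.heappush(pending, (0, "high-priority-task"))

# Tuples compare by their first element, so priority 0 comes out first.
first = heapq.heappop(pending)
print(first)  # (0, 'high-priority-task')
```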

middleware (list[Middleware]) – The set of middleware that apply
to this broker. If you supply this parameter, you are
expected to declare all middleware. Most of the time,
you’ll want to use add_middleware() instead.

Middleware that cancels actors that run for too long.
Currently, this is only available on CPython.

Note

This works by setting an async exception in the worker thread
that runs the actor. The exception will only be raised the next
time that thread acquires the GIL. Concretely, this means that
this middleware can’t cancel system calls.
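The mechanism described in the note can be sketched with CPython’s `PyThreadState_SetAsyncExc` API, which is one way to set an async exception in another thread. The `actor` function and `TimeLimitExceeded` class below are illustrative stand-ins, not Dramatiq’s own names:

```python
import ctypes
import threading
import time

class TimeLimitExceeded(Exception):
    """Illustrative stand-in for a time-limit exception."""

result = {}

def actor():
    try:
        while True:
            # The loop returns to Python bytecode frequently, so the
            # async exception is raised soon after it is set. A thread
            # stuck in a long-blocking system call would not see it
            # until the call returned.
            time.sleep(0.01)
    except TimeLimitExceeded:
        result["status"] = "cancelled"

worker = threading.Thread(target=actor)
worker.start()
time.sleep(0.1)

# Set an async exception in the worker thread (CPython only).
affected = ctypes.pythonapi.PyThreadState_SetAsyncExc(
    ctypes.c_long(worker.ident),
    ctypes.py_object(TimeLimitExceeded),
)
worker.join(timeout=2)
print(result)  # {'status': 'cancelled'}
```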

Parameters:

time_limit (int) – The maximum number of milliseconds actors may
run for.

interval (int) – The interval (in milliseconds) with which to
check for actors that have exceeded the limit.

raise_on_failure (bool) – Whether failures should raise an
exception. If this is false, the context manager will instead
return a boolean value indicating whether or not the rate
limit slot was acquired.
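The two failure modes can be sketched with a minimal context manager; `SketchLimiter` and `RateLimitExceeded` below are illustrative stand-ins, not Dramatiq’s RateLimiter API:

```python
from contextlib import contextmanager

class RateLimitExceeded(Exception):
    """Illustrative stand-in for a rate-limit failure."""

class SketchLimiter:
    """Sketch only: holds up to `limit` slots at a time."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    @contextmanager
    def acquire(self, *, raise_on_failure=True):
        # Try to take a slot; on failure either raise or yield False.
        acquired = self.used < self.limit
        if acquired:
            self.used += 1
        elif raise_on_failure:
            raise RateLimitExceeded()
        try:
            yield acquired
        finally:
            if acquired:
                self.used -= 1

limiter = SketchLimiter(limit=1)
with limiter.acquire(raise_on_failure=False) as first:
    with limiter.acquire(raise_on_failure=False) as second:
        print(first, second)  # True False
```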

A rate limiter that ensures that only up to limit operations
may happen over some time interval.

Examples

Up to 10 operations every second:

>>> BucketRateLimiter(backend, "some-key", limit=10, bucket=1_000)

Up to 1 operation every minute:

>>> BucketRateLimiter(backend, "some-key", limit=1, bucket=60_000)

Warning

Bucket rate limits are cheap to maintain but are susceptible to
burst “attacks”. Given a bucket rate limit of 100 per minute,
an attacker could make a burst of 100 calls in the last second
of a minute and then another 100 calls in the first second of
the subsequent minute.

For a rate limiter that doesn’t have this problem (but is more
expensive to maintain), see WindowRateLimiter.
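The burst scenario in the warning can be reproduced with a minimal fixed-bucket counter; `BucketCounter` and its `try_acquire` method are an illustrative sketch, not Dramatiq’s implementation:

```python
# Sketch only: calls are counted per fixed bucket, and the counter
# resets when time moves into a new bucket. That reset is what
# permits bursts across bucket boundaries.
class BucketCounter:
    def __init__(self, limit, bucket_ms):
        self.limit = limit
        self.bucket_ms = bucket_ms
        self.current_bucket = None
        self.count = 0

    def try_acquire(self, now_ms):
        bucket = now_ms // self.bucket_ms
        if bucket != self.current_bucket:
            self.current_bucket = bucket
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False

counter = BucketCounter(limit=100, bucket_ms=60_000)

# 100 calls succeed in the last second of one minute...
late_burst = sum(counter.try_acquire(59_500) for _ in range(150))
# ...and 100 more in the first second of the next minute.
early_burst = sum(counter.try_acquire(60_500) for _ in range(150))
print(late_burst, early_burst)  # 100 100 -> 200 calls in ~1 second
```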

A rate limiter that ensures that only limit operations may
happen over some sliding window.

Note

Windows are in seconds rather than milliseconds. This is
different from most durations and intervals used in Dramatiq,
because keeping metadata at the millisecond level is far too
expensive for most use cases.
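A minimal sliding-window sketch (again illustrative, not Dramatiq’s implementation) shows why this scheme blocks the boundary burst that the bucket limiter permits:

```python
from collections import deque

# Sketch only: track event timestamps (in seconds, matching the
# note above) and count only those inside the trailing window.
class SlidingWindowCounter:
    def __init__(self, limit, window_secs):
        self.limit = limit
        self.window_secs = window_secs
        self.events = deque()  # timestamps, oldest first

    def try_acquire(self, now_secs):
        # Drop events that have fallen out of the window.
        while self.events and self.events[0] <= now_secs - self.window_secs:
            self.events.popleft()
        if len(self.events) < self.limit:
            self.events.append(now_secs)
            return True
        return False

limiter = SlidingWindowCounter(limit=100, window_secs=60)

# 100 calls succeed near the end of a minute...
late = sum(limiter.try_acquire(59) for _ in range(150))
# ...but two seconds later the window is still full, so a
# boundary burst is rejected instead of being allowed through.
early = sum(limiter.try_acquire(61) for _ in range(150))
print(late, early)  # 100 0
```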

Workers consume messages from all declared queues and
distribute those messages to individual worker threads for
processing. Workers don’t block the current thread, so it’s
up to the caller to keep it alive.
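The keep-alive responsibility can be sketched with a plain background thread; the queue-based `worker_loop` below is an illustrative stand-in for a Worker, not Dramatiq’s own code:

```python
import queue
import threading

# Sketch only: like a Worker, the consuming thread runs in the
# background and does not block the caller.
tasks = queue.Queue()
results = []

def worker_loop():
    while True:
        message = tasks.get()
        if message is None:  # shutdown sentinel
            break
        results.append(message * 2)
        tasks.task_done()

worker = threading.Thread(target=worker_loop, daemon=True)
worker.start()  # returns immediately; the caller's thread stays free

for n in range(3):
    tasks.put(n)

# The caller must keep the process alive until the work is done;
# a daemon thread dies as soon as the main thread exits.
tasks.join()
tasks.put(None)
worker.join()
print(results)  # [0, 2, 4]
```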