Sometimes large bursts of messages rush into our Sentry error
logger. Given that Sentry generates a couple of Celery tasks for every
incoming message, this periodically overflows our RabbitMQ queue.

We don't mind losing some messages as long as the logger stays responsive
and keeps handling the rest of our tasks.

The solution is quite simple, but it took me some time to arrive at it.
I added the "x-message-ttl" argument to the queue to ensure it doesn't get stuck.
My settings.py file now looks like this:
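A sketch of what such a configuration could look like, assuming Celery with kombu-declared queues (the queue names below are illustrative, not Sentry's actual queue list):

```python
# settings.py -- a sketch; queue names are illustrative
from kombu import Exchange, Queue

# RabbitMQ expects the TTL in milliseconds
TEN_MINUTES = 10 * 60 * 1000

CELERY_QUEUES = [
    Queue('default', Exchange('default'), routing_key='default',
          queue_arguments={'x-message-ttl': TEN_MINUTES}),
    Queue('alerts', Exchange('alerts'), routing_key='alerts',
          queue_arguments={'x-message-ttl': TEN_MINUTES}),
    # ...repeat for every queue Sentry uses...
]
```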

In short, this means that no message will stay in a queue longer than
10 minutes. Stale messages are quietly removed.

Some notes on top of that:

Adding the arguments alone isn't enough if your queues have already been
created. With every change of queue_arguments you have to re-create your
queues. Actually, all you have to do is remove them, and Celery creates
new ones on the next startup. I use the RabbitMQ Management Plugin
for this. By the way, it works with RabbitMQ 3.x only, and if you install it,
don't forget to remove the default "guest" RabbitMQ user!

I got the list of Celery queues used by Sentry from the sentry.conf.server file
(this is its latest version).
Make sure you don't forget any queues.

Try the undocumented settings parameter SENTRY_USE_SEARCH = False to reduce
the number of tasks in your queue. Sentry does nasty things when this option is
turned on (proof).

This is a short introduction to a project we recently made here at
Doist Inc to improve our test codebase. The project is
named resources; it's not available on PyPI, but you
can install it right from GitHub anyway.

The idea is to provide yet another way to manage your test fixtures (that is, the objects and
other resources you usually create before you start verifying an assertion).

There are two popular ways to initialize fixtures in Python that I'm aware of:

xUnit-style, with setup/teardown methods. It is supported by the majority of
frameworks and looks like the most universal way to initialize a testing
environment. Yet it's somewhat verbose and makes you either repeat yourself,
develop an extensive set of helper functions, build a hierarchy of test
classes, or invent something else to keep things DRY and your tests readable
and manageable.

py.test-style, where you inject dependencies into a test function by declaring
them as parameters. The py.test magic instantiates the objects for you and calls
the test function, passing them in as arguments. It's good because it's reusable
and granular, but it's not very flexible: there is no easy way to pass
parameters to a py.test fixture function.

Now there is another way to do the same. The approach we propose is
to create fixtures roughly the way py.test does it -- a function per
fixture -- and to use them the way Michael Foord's mock manages
the lifespan of the patches it applies: with context managers, function
decorators, or start/stop methods.

So, without further ado, here is a short usage example that explains it better
than a thousand words. The library should work with py.test, nose, or unittest.

# import global instance
from resources import resources

# register a resource named "user" by defining a function with the same
# name. The function must have exactly one "yield" construction and is
# used both to set up and tear down the fixture
@resources.register_func
def user(email='joe@example.com', password='password', username='Joe'):
    user = User.objects.create(email=email, password=password,
                               username=username)
    try:
        yield user
    finally:
        user.delete()

# use the resource with an automatically created "user_ctx" context manager
def test_user_properties():
    with resources.user_ctx() as user:
        # the resource is available as the assignment target
        # of the "with"-construction
        assert user.username == 'Joe'
        # it's also stored in the "resources" global object
        assert resources.user.username == 'Joe'
    # the instance doesn't exist and isn't accessible anymore
    assert not hasattr(resources, 'user')

It's a very basic example, though. If you feel like this may be useful to you,
feel free to visit the GitHub page and read the README we created especially
for this purpose.
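To make the mechanism behind the example more concrete, here is a minimal sketch of how such a registry could be built on top of contextlib. This is an illustration of the idea only, not the actual resources implementation; all names in it are made up:

```python
import contextlib

class Registry:
    def register_func(self, func):
        gen_ctx = contextlib.contextmanager(func)
        name = func.__name__

        def ctx(*args, **kwargs):
            return self._tracked(name, gen_ctx, *args, **kwargs)

        # expose an automatically created "<name>_ctx" context manager
        setattr(self, name + '_ctx', ctx)
        return func

    @contextlib.contextmanager
    def _tracked(self, name, gen_ctx, *args, **kwargs):
        with gen_ctx(*args, **kwargs) as obj:
            setattr(self, name, obj)    # resource is visible on the registry
            try:
                yield obj
            finally:
                delattr(self, name)     # and disappears on exit

resources = Registry()

@resources.register_func
def greeting(text='hello'):
    # exactly one yield: setup above it, teardown below it
    yield text.upper()

with resources.greeting_ctx() as g:
    assert g == 'HELLO'
    assert resources.greeting == 'HELLO'
assert not hasattr(resources, 'greeting')
```

The single-yield convention lets one function carry both halves of the fixture's lifecycle, which is exactly what contextlib.contextmanager formalizes.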

This is the second and last part of the series "Fresh soft for your Amazon AMI".
In this post I aim to explain how you can rebuild an srpm package to a new
version using mock and git, and how you can publish your own yum repository.

I recently started using Amazon Linux AMI as the main platform for deployment.
Before that I had used Debian and Ubuntu distributions for a long time, and I
was quickly disappointed by how outdated some of the software provided by the
standard Amazon Linux AMI and EPEL repositories is.

Then I decided to find out how easy it is to
build your own software for Amazon Linux AMI. Luckily, it turned out to be
easier when you don't start from scratch but use existing packages as
leverage. Here and in Part 2, "Publishing your own work",
I share my experience.

These instructions should work for all RHEL-based distributions, and in
particular for building rpm packages for CentOS.