PLEASE NOTE: I'm basically doing README-driven development here, writing documentation for how this code should work before actually implementing it. This notice will go away when django-smoketest is actually implemented and remotely suitable for real-world use. Until then, feel free to offer ideas on the interface, but don't expect to be able to use it (you can look in the "Progress" section to see exactly where I'm at).

Motivation
----------

Smoke test framework for Django.

Smoke tests are tests that are run on a production environment to quickly detect major systemic problems. E.g., after you run a deploy, you want to quickly check that everything is running properly so you can roll back immediately if there are problems. Too often, this just means visiting the site and manually clicking around through a few links (at best).

You probably already have unit tests verifying the correctness of low-level parts of your code, and integration and acceptance tests running on a staging server or CI system. Maybe you've even got automatic configuration management ensuring that your staging server is configured as an exact replica of production. So logically, if your code passes all the tests on the staging server and the production server is configured the same, everything *must* work right in production. Right? Wouldn't it be wonderful if the world were so simple? Of course we know that it's not. That's why we want smoke tests: to actually verify that at least the major components of the system are basically functional and able to talk to each other, and that we didn't do something stupid like writing code that depends on a new environment variable that hasn't been set to the correct value on production yet.

You probably don't want to run your unit tests or integration tests in production with production settings in effect. Who knows what kind of insanity would result? Test data sprayed all through your production database, deleting user data from the file system, the sun rising in the west and setting in the east?

This is what smoke tests are for. Smoke tests should be *safe* to run in production. Verify that the application can connect to the database, that whatever filesystem mounts are expected are in place, etc., bridging that last gap between existing test coverage and the wilderness of production. But all while stepping carefully around the production data.

I also find myself frequently writing small views to support ad-hoc monitoring. For example, if an application relies on an NFS mount for some infrequent operation and that mount has a tendency to go stale, a cron job (or nagios or some other monitoring application) that runs every few minutes and has the application try to read a file off the mount can help ensure that we are alerted to the stale mount before users encounter it.

Getting Started
---------------

Install django-smoketest

    $ pip install django-smoketest

Add `smoketest` to your `INSTALLED_APPS`.
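
In your settings module, that would look something like this:

    INSTALLED_APPS = [
        # ... your existing apps ...
        "smoketest",
    ]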

In each application of yours that you want to define smoke tests for, make a `smoke.py` file or a `smoke` directory with an `__init__.py` and one or more Python files with your tests. For example, a `smoke.py` might look something like this (the imports, model, and class name below are illustrative):

    from smoketest import SmokeTest
    # assumption: the @rolled_back and @slow decorators live alongside SmokeTest
    from smoketest import rolled_back, slow

    from myapp.models import FooModel  # whichever model your tests exercise


    class DemoSmokeTest(SmokeTest):

        @rolled_back
        def test_foomodel_writes(self):
            """
            make sure we can also write to the database but do not
            leave any test detritus around.
            """
            f = FooModel.objects.create()

        @slow
        def test_something_slow(self):
            """
            this test will not be run in "fast" mode because it uses a
            lot of resources or otherwise could bog down the production
            server in bad ways
            """
            # do a bunch of slow stuff
            # ...
            self.assertEqual(foo, bar)
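
You will presumably also need to route requests to the smoke test view in your project's `urls.py`. A minimal sketch, assuming the app ships a `smoketest.urls` URLconf (that module name is a guess until the implementation exists):

    from django.urls import include, path

    urlpatterns = [
        # ... your existing URL patterns ...
        path("smoketest/", include("smoketest.urls")),  # assumed URLconf module
    ]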

Now, if you make a `GET` to `http://yourapp/smoketest/`, django-smoketest will go through your code, finding any `smoke` modules, and run the tests you have defined (if you've used unittest or nose, you get the idea), excluding any marked with the `@slow` decorator. `GET`ing `http://yourapp/smoketest/slow/` will include those tests as well. All tests passing will result in a response like:
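
Purely as an illustration of the "easy to parse" direction, a passing run might return a plain-text body something like this (this format is invented here and is not final; see the question below):

    PASS
    tests run: 2
    failed: 0
    errored: 0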

QUESTION: I'm thinking about keeping the output simple to parse automatically, but maybe we ought to just stick with unittest's existing output format instead?

API
---

The main class is `smoketest.SmokeTest`, which should be thought of as equivalent to `unittest.TestCase`. It will do basically the usual stuff there, running `setUp` and `tearDown` methods, and supporting the usual array of `assertEqual`, `assertRaises`, `assertTrue` methods.
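
As a rough sketch of that shape (the class name and mount path are made up for illustration):

    import os

    from smoketest import SmokeTest


    class MountSmokeTest(SmokeTest):
        def setUp(self):
            # hypothetical dependency: a shared mount the app relies on
            self.mount_path = "/mnt/shared"

        def test_mount_readable(self):
            self.assertTrue(os.path.isdir(self.mount_path))

        def tearDown(self):
            # nothing to clean up here; shown only to illustrate the hook
            pass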

There is the `@slow` decorator which marks a test as potentially slow, or utilizing a lot of resources. Either way, it lets you have two different levels of smoke tests. Fast tests can be run frequently, e.g., from a monitoring script that hits the endpoint every five minutes so you can quickly be alerted if something changes in the production environment. The `@slow` tests can then be reserved for only running after a new deploy to check things a little more deeply and have more confidence that everything is functional.
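
For example, a monitoring check run from cron every five minutes might just fetch the fast endpoint and exit non-zero on trouble (the URL and the check on the response body are assumptions, since the output format is still undecided):

    import sys
    from urllib.error import URLError
    from urllib.request import urlopen

    URL = "https://yourapp.example.com/smoketest/"  # hypothetical deployment URL

    try:
        body = urlopen(URL, timeout=30).read().decode("utf-8")
    except URLError:
        sys.exit(1)  # unreachable or erroring: let cron/nagios raise the alert

    # assumes a passing run reports something recognizable as a pass
    sys.exit(0 if body.startswith("PASS") else 1)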

The `@rolled_back` decorator will make sure that the test gets wrapped in a database transaction which is then rolled back after running. This frees you up to do potentially destructive things and just let the DB clean up for you. The usual caveats apply: make sure you are using a database that supports transactions, and remember that it can only roll back database operations, not other side effects. I'm also on the fence about whether this decorator should even exist or if that should be the default behavior for all smoke tests. Should a smoke test ever actually commit a transaction?

In your settings, you may define a `SMOKETEST_APPS` variable that lists the applications you want to run smoke tests from (instead of looking through all your applications). (Do we want a `SMOKETEST_SKIP_APPS` as well/instead?)
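
Something along these lines, with placeholder app names:

    # only these applications will be searched for smoke modules
    SMOKETEST_APPS = [
        "myapp",
        "anotherapp",
    ]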

Every assert method accepts a custom message as its last parameter (`msg`), just like the assert methods in unittest.
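
For instance (a trivial, made-up assertion just to show the keyword):

    from smoketest import SmokeTest


    class MessageExample(SmokeTest):
        def test_with_message(self):
            # msg behaves as in unittest: it is reported when the assertion fails
            self.assertEqual(1 + 1, 2, msg="basic arithmetic check failed")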

Open Questions
--------------

What other unittest/nose flags, conventions, etc. should we support? `--failfast`? Output verbosity? The ability to target or skip specific tests in certain cases? Automatic timeouts (a lot of smoke tests involve trying to connect to an external service and failing if it takes more than a specified period of time)?