Excellent!
I wrote a few blog posts recently, mostly based on my experience with OpenStack
automated tests:

http://www.sandywalsh.com/2011/06/effective-units-tests-and-integration.html
http://www.sandywalsh.com/2011/08/pain-of-unit-tests-and-dynamically.html
Would love to see some of those changes make it in.
-Sandy
________________________________________
From: openstack-bounces+sandy.walsh=rackspace....@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace....@lists.launchpad.net] on behalf of
Soren Hansen [so...@linux2go.dk]
Sent: Monday, November 21, 2011 8:24 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] [nova-testing] Efforts for Essex
Hi, guys.
We're scattered across enough different timezones to make real-time
communication really awkward, so let's see if we can get by using e-mail
instead.
A good test suite will let you keep up the pace of development. It will
offer confidence that your change didn't break any expectations, and
will help you understand how things are supposed to work, etc. A bad
test suite, on the other hand, will actually do the opposite, so the
quality of the unit test suite is incredibly important to the overall
health of the project. Integration tests are important as well, but unit
tests are all we can expect people to run on a regular basis.
I'd like to start a bit of discussion around efforts around unit testing
for Essex. Think of it as brainstorming. Input can be anything from
small, actionable items, to broad ideas, to measurable goals, random
thoughts etc. Anything goes. At some point, we can distill this input to
a set of common themes, set goals, define action items, etc.
A few things from the back of my mind to get the discussion going:
= Speed up the test suite =
A slow test suite gets run much less frequently than a fast one.
Currently, when wrapped in eatmydata, a complete test run takes more
than 6 minutes.
Goal: We should get that down to less than one minute.
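Much of the slowness tends to come from tests that touch real resources: disk, databases, or the clock. As a minimal sketch (not actual nova code; the polling helper and its names are made up for illustration), here is one common pattern: a test that would otherwise sit in real sleep() calls can stub out the clock and run instantly, while still verifying the same behaviour.

```python
import time
import unittest
from unittest import mock


def wait_for_instance(poll, timeout=5, interval=1):
    """Poll until poll() returns true or the timeout elapses."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if poll():
            return True
        time.sleep(interval)
    return False


class TestWaitForInstance(unittest.TestCase):
    def test_returns_true_when_poll_eventually_succeeds(self):
        # Succeed on the third poll, so the loop would normally sleep twice.
        results = iter([False, False, True])
        # Patch out the real sleep; the loop now runs at full speed.
        with mock.patch.object(time, "sleep") as fake_sleep:
            self.assertTrue(wait_for_instance(lambda: next(results)))
        fake_sleep.assert_called()
```

Run without the patch, that single test takes a couple of seconds; with it, effectively zero. A few hundred tests like this is the difference between a six-minute run and a one-minute one.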
= Review of existing tests =
Our current tests have a lot of problems:
* They overlap (multiple tests effectively testing the same code),
* They're hard to understand. Not only is their intent not always
clear, but it's often hard to tell how they're testing it.
* They're slooooow.
* They're interdependent. The failure of one test often cascades and
makes others fail, too.
* They're riddled with duplicated code.
I think it would be great if we could come up with some guidelines for
good tests and then go through the existing tests and highlight where
they violate these guidelines.
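To make the interdependence point concrete, here is a contrived sketch (the class and test names are invented, not taken from nova): the "bad" pair of tests share module-level state, so the second only passes if the first ran before it, and one failure cascades; the "good" version rebuilds its fixture in setUp(), so each test stands alone.

```python
import unittest


class FakeInstanceStore:
    """Stand-in for some shared fixture (illustrative only)."""

    def __init__(self):
        self.instances = []

    def add(self, name):
        self.instances.append(name)


# Bad: shared mutable state at module scope.
_store = FakeInstanceStore()


class InterdependentTests(unittest.TestCase):
    def test_a_add(self):
        _store.add("vm1")
        self.assertEqual(_store.instances, ["vm1"])

    def test_b_count(self):
        # Silently depends on test_a having run first; fails in isolation.
        self.assertEqual(len(_store.instances), 1)


# Good: each test builds its own fixture, so order doesn't matter.
class IsolatedTests(unittest.TestCase):
    def setUp(self):
        self.store = FakeInstanceStore()

    def test_add(self):
        self.store.add("vm1")
        self.assertEqual(self.store.instances, ["vm1"])

    def test_count_starts_empty(self):
        self.assertEqual(len(self.store.instances), 0)
```

"Each test can run alone, in any order" would be a good candidate for the first guideline on the list.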
= Test coverage =
We should increase test coverage.
Adding tests to legacy code is hard. Generally, it's much easier to
write tests for new production code as you go along. The two primary
reasons are:
* If you're writing tests and production code at the same time (or
perhaps even writing the tests first), the code will almost
automatically be designed to be easily tested.
* You (hopefully!) know how the code is supposed to work. This is not
always obvious for existing code (often written by someone else).
Therefore, the most approachable strategy for increasing test coverage
is simply to ensure that any new code added is accompanied by tests, but
of course new tests for currently untested code are fantastically
welcome.
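The "almost automatically designed to be easily tested" point is worth a tiny sketch (hypothetical names, not nova code): when you write the test first, you end up passing collaborators in rather than constructing them inside, so the unit test can hand over a fake instead of a real hypervisor connection.

```python
class FakeConnection:
    """Test double standing in for a real driver connection."""

    def list_instances(self):
        return ["vm1", "vm2"]


def instance_count(conn):
    """Depends only on the interface, not on a concrete driver."""
    return len(conn.list_instances())


# Production code passes the real driver; the unit test passes a fake.
assert instance_count(FakeConnection()) == 2
```

Code that instead did its own connection lookup internally would force every test to stub out globals, which is exactly the kind of legacy-code friction described above.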
--
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp