Hi,
As we settle into the new workflow (code reviews, etc.), one of the
biggest things that keeps tripping us up is that our testing system
seems very easy to break in really weird ways. This is different from
not having tests, or from having tests that simply fail. In our case,
we are seeing a number of things that actually break the test suite
itself:
* Both Ville and Jorgen have seen really odd things in the last few days.
* I am seeing a nasty memory leak that is intermittent on OS X.
* Twisted and nose are somehow very unhappy with each other.
* Sometimes tests rely on optional dependencies (wx, twisted, etc.).
Those tests get run on systems that don't have the dependencies and
fail with ImportError. I have tried to add code that checks for the
dependencies, but unguarded imports keep creeping in. The problem with
this is that the person who finds the breakage is not the person who
wrote the code or the tests.
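One uniform pattern we could adopt is to guard every optional dependency with a skip rather than letting the import raise. A minimal sketch, assuming a modern unittest-based runner (the test class and module names here are hypothetical, purely for illustration):

```python
import importlib.util
import unittest

def module_available(name):
    # True if the optional dependency can be imported on this system,
    # without actually importing it (so no side effects).
    return importlib.util.find_spec(name) is not None

# Hypothetical example: a test that needs wx gets skipped cleanly
# instead of blowing up with ImportError on machines without wx.
class TestGuiIntegration(unittest.TestCase):
    @unittest.skipUnless(module_available("wx"), "wx not installed")
    def test_wx_frontend(self):
        import wx  # safe: only runs when wx is importable
        self.assertTrue(hasattr(wx, "App"))
```

The runner then reports these as skips rather than errors, so a missing dependency is visible but never breaks the suite for whoever happens to run it next.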
Part of the difficulty with all of these things is that debugging them
is HELL. Often it is difficult to even begin to see where the problem
is coming from.
I am wondering what we can do to make our testing framework more
robust. We need an approach that can run tests in a more isolated
manner so we immediately know where such problems are coming from.
Also, we need to come up with a uniform and consistent way of handling
dependencies in tests.
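For the isolation part, one approach worth considering is to run each test module in its own fresh interpreter, so a crash, leak, or import-time side effect in one module cannot poison the rest of the suite, and a failure points straight at the module that caused it. A rough sketch (the module paths in the usage note are made up):

```python
import subprocess
import sys

def run_isolated(test_module):
    # Run one test module in a separate Python process. Whatever it
    # breaks, it breaks only its own interpreter; the parent runner
    # just collects the exit code and output.
    proc = subprocess.run(
        [sys.executable, "-m", "unittest", test_module],
        capture_output=True,
        text=True,
    )
    return proc.returncode, proc.stdout + proc.stderr
```

The top-level runner would then just loop over module names (e.g. hypothetical "pkg.tests.test_a", "pkg.tests.test_b"), call run_isolated on each, and report any nonzero exit codes, which immediately names the offending module.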
Thoughts?
Brian