river-dev mailing list archives

On 8/22/2010 4:57 PM, Peter Firmstone wrote:
...
> Thanks Patricia, that's very helpful, I'll figure it out where I went
> wrong this week, it really shows the importance of full test coverage.
...
I strongly agree that test coverage is important. Accordingly, I've done
some analysis of the "ant qa.run" output.
There are 1059 test description (*.td) files that are loaded at the
start of "ant qa.run" but that do not seem to be run. I've extracted
the top-level categories from those files:
constraint
discoveryproviders_impl
discoveryservice
end2end
eventmailbox
export_spec
io
javaspace
jeri
joinmanager
jrmp
loader
locatordiscovery
lookupdiscovery
lookupservice
proxytrust
reliability
renewalmanager
renewalservice
scalability
security
start
txnmanager
I'm sure some of these tests are obsolete, duplicates of tests in
categories that are being run, or otherwise inappropriate, but there
does seem to be a rich vein of tests we could mine.
Part of the problem may be the time it takes to run the tests. I'd like
to propose splitting the tests into two sets:
1. A small set that one would run, in addition to the relevant tests,
whenever making a small change. It should *not* be based on skipping
complete categories, but on selecting those tests from each category
that are most likely to detect regressions, especially regressions due
to changes in other areas.
2. A full test set that may take a lot longer. In many projects, there
is a "nightly build" and a test sequence that is run against that build.
That test sequence can take up to 24 hours to run, and should be as
complete as possible. Does Apache have infrastructure to support this
sort of operation?
Are there any tests that people *know* should not run? I'm thinking of
running the lot just to see what happens, but knowing which ones are
not expected to work would help with interpreting the results.
Patricia