Jonathan Griffin <jgriffin@mozilla.com>, 2009-11-30 11:58 -0800:
> It's hard for me to wrap my head around what kinds of metadata we'll need
> and how it should be stored without seeing the larger picture. We know
> there are existing tests for many parts of HTML5 in various places; I think
> it would be helpful to collect these on a wiki somewhere for review:
>
> - how suitable are they for inclusion into an official HTML5 test suite?
> - how much work will they take to clean up and extend?
> - if they're not automated, will we automate them, and how?
> - will we try to tie them all together with a common test runner (e.g.,
> http://omocha.w3.org/wiki/)?
> - do we want to manage storage of test results, e.g., in a SQL or CouchDB
> server?
>
> Having the answers to these questions might take care of some of these
> metadata questions, or at least provide some framework for answering them.
To give my own feedback on some of those questions:

I think we should make it a goal to automate all tests. Or at
least make it a goal to separate tests into sets that are
automated and sets that aren't, and to keep the set of
non-automated tests to an absolute minimum.

I think we should try to tie tests together with a common test
runner, or a common group of test runners. I can imagine that we
might want to have, say, Sylvain Pasche's browsertests
cross-browser test runner be among them, as well as the
cross-browser test-runner system that the Microsoft testing team
has put together. What would unite them is that they would each
implement a common client API for submitting test results back to
our central test-results repository.

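To make the idea concrete, here is a minimal sketch of what that
client API might look like. Everything here is an assumption for
illustration, not a proposal for the actual spec: the endpoint URL,
the field names, and the status vocabulary are all placeholders.

```python
# Hypothetical client for submitting results to a central
# test-results repository. Endpoint and schema are assumptions.
import json
from urllib import request

RESULTS_ENDPOINT = "http://example.org/results"  # placeholder URL

def build_result(test_id, status, browser, notes=""):
    """Build one result record; test_id is the test's unique identifier."""
    assert status in ("pass", "fail", "timeout", "skip")
    return {"test_id": test_id, "status": status,
            "browser": browser, "notes": notes}

def submit_results(results, endpoint=RESULTS_ENDPOINT):
    """POST a batch of result records as JSON to the central repository."""
    body = json.dumps({"results": results}).encode("utf-8")
    req = request.Request(endpoint, data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)  # error handling omitted in this sketch
```

The point is only that any runner, whether browsertests or the
Microsoft system, could speak this same small protocol.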
On that note, I also think that yes, we do want to manage storage
of test results in a central database of some kind. And to be able
to do that, the bare minimum piece of metadata we need is a unique
identifier for each test case.

--Mike
--
Michael(tm) Smith
http://people.w3.org/mike/