if you look at the old n3 examples, they really liked the simple [:MyPage earl:passes :WCAG10P1] thing...

09:02:16 [nadia]

which looks neat in n3... but doesn't work as well in practice
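To make the "neat in N3, weak in practice" point concrete, here is a small Python sketch modelling the bare triple from the example above against a richer assertion record. The dict shape and the `assertedBy`/`date` fields are invented for illustration, not the actual EARL schema:

```python
# The bare N3-style statement modelled as a 3-tuple:
# subject, predicate, object (names taken from the example above).
simple = ("MyPage", "earl:passes", "WCAG10P1")

# The bare triple has no slot for who asserted it or when.
# A richer record (a hypothetical shape, not the real EARL
# vocabulary) would carry that provenance explicitly:
assertion = {
    "subject": "MyPage",
    "predicate": "earl:passes",
    "object": "WCAG10P1",
    "assertedBy": "checker-tool-1.0",  # hypothetical assertor id
    "date": "2001-01-01",              # hypothetical date
}

# The simple form, viewed as a record, simply lacks the field.
as_record = dict(zip(("subject", "predicate", "object"), simple))
print("assertedBy" in as_record, "assertedBy" in assertion)
```

This is the gap the reification discussion below is trying to close: attaching "who said so" to a statement without losing the simple triple form.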

09:04:05 [libby]

[we wonder whether to slip into groups for a while]

09:04:14 [libby]

slip?

09:04:20 [libby]

er split

09:04:26 [nadia]

pop!

09:05:19 [libby]

danbri/nadia/nmg could go argue about reification and danbri's new proposal

09:05:36 [libby]

wendy/chaals could talk about usecases

09:05:48 [chaalsBRS]

chaals is going to look over the use cases and see what we need EARL to provide.

09:06:38 [chaalsBRS]

(In particular, is there anything that requires it to be RDF? Is there any need for the reification? What kind of queries do we want to support easily? Can we deal with some of those through an XML representation (isomorphism)?)

rdf is cool because there are growing databases and query systems, and we can contribute to the semantic web!

09:08:22 [danb_lap]

this tried to break out the quoted document into separate docs. I'm thinking the opposite now, sort of...

09:36:59 [chaalsBRS]

A use case: have we seen a contract?

09:37:06 [chaalsBRS]

(oops)

09:37:19 [chaalsBRS]

Augmenting automatic test results with information from people.

09:37:37 [chaalsBRS]

Allowing a tool to ask for human evaluation of tests, and then moderating a future test over the same material with the results provided by human evaluation. This uses the fact that EARL assertions are made by a named person/tool, so it is possible to specify whose results (for a given set of tests, or in general) are to be more trusted. HiSoft's Interview wizard and TAW do something like this.
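A minimal sketch of the moderation idea just described, assuming each result records who asserted it (the field names, assertor ids, and trust weights are invented for illustration, not the actual EARL vocabulary):

```python
# Each result records the test, the verdict, and who asserted it --
# the EARL-style provenance that lets us weight results by assertor.
results = [
    {"test": "WCAG10P1", "verdict": "pass", "assertedBy": "auto-tool"},
    {"test": "WCAG10P1", "verdict": "fail", "assertedBy": "human-reviewer"},
]

# Hypothetical trust levels: human evaluation outranks the automatic tool.
trust = {"auto-tool": 1, "human-reviewer": 2}

def moderate(results, trust):
    """Return one verdict per test, keeping the most-trusted assertion."""
    best = {}
    for r in results:
        t = r["test"]
        if t not in best or trust[r["assertedBy"]] > trust[best[t]["assertedBy"]]:
            best[t] = r
    return {t: r["verdict"] for t, r in best.items()}

print(moderate(results, trust))  # the human's "fail" outweighs the tool's "pass"
```

The design point is that the merge policy lives outside the data: the assertions themselves stay simple, and different consumers can apply different trust orderings.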

09:37:50 [chaalsBRS]

another use case

09:38:05 [chaalsBRS]

Finding out a test is no good.

09:38:19 [chaalsBRS]

Using a handful of test results to generate a higher level result, we suddenly discover that one of the tests is unreliable. We need to change the test suite, and to be able to identify results based on the bad test and therefore know not to trust those.
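The "bad test" use case above can be sketched as a dependency check: if a higher-level result was derived from several low-level tests, and one of those tests turns out to be unreliable, the derived result is suspect. All names here (test ids, the `conformance-badge` result) are hypothetical:

```python
# A derived, higher-level result and the low-level tests it was
# generated from (names invented for illustration).
derived_from = {"conformance-badge": ["test-A", "test-B", "test-C"]}

def untrusted_derivations(derived_from, bad_tests):
    """Identify derived results that rest on an unreliable test."""
    return [d for d, deps in derived_from.items()
            if any(t in bad_tests for t in deps)]

# We discover test-B is unreliable: anything built on it is suspect.
print(untrusted_derivations(derived_from, {"test-B"}))
```

This only works if each result keeps a traceable link back to the test that produced it, which is exactly what the use case asks EARL to provide.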

... controversial are things like "this tool doesn't work" - the assertion carries information about who made the claim

12:56:52 [chaalsBRS]

uncontroversial stuff is things like "some page exists".

12:57:49 [chaalsBRS]

...problem is when it is possible to merge uncontroversial things in troublesome ways. For example, a page is tested. In one test it has a size of 444 bytes, and in another test it has 555 bytes, but in both cases it has the same URI.
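The troublesome-merge problem above can be sketched as a conflict check over merged assertions: each one is individually uncontroversial, but grouping them by subject and property exposes the contradiction (the URI and byte sizes come from the example; the record shape is invented):

```python
# Two independent, individually "uncontroversial" assertions about
# the same URI, as might arrive from two separate test runs.
assertions = [
    {"uri": "http://example.org/page", "property": "size", "value": 444},
    {"uri": "http://example.org/page", "property": "size", "value": 555},
]

def conflicts(assertions):
    """Group assertions by (uri, property) and flag disagreeing values."""
    seen = {}
    clashes = []
    for a in assertions:
        key = (a["uri"], a["property"])
        if key in seen and seen[key] != a["value"]:
            clashes.append(key)
        seen.setdefault(key, a["value"])
    return clashes

print(conflicts(assertions))  # the merged data contradicts itself on size
```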