* Steve Harris <steve.harris@garlik.com> [2012-09-19 10:30+0100]
> On 18 Sep 2012, at 22:40, David Wood wrote:
> >>
> >> I would like some guidance on how far we are supposed to dumb down for the "casual" user, which I take it means the user who can't be bothered (or maybe hasn't the capacity) to actually read the specs.
> >
> > There is clearly no reason to dumb down the specs for someone who won't read them.
> >
> > Writing specs for those who don't have the capacity is a (much) trickier problem, though. Interop was necessary to ensure that very smart people who had TCP/IP implementations could work with each other. A difference in interpretation is not necessarily the same as a lack of capacity.
> >
> > It is best to be clear in prose, brief in math and provide some non-normative examples. *shrug*
>
> TCP/IP is a very good example. I've used, and implemented things on top of TCP/IP many times, and even read a book on the subject, but never read any of the specs. If it's a requirement for users of RDF to read the model theory (I don't believe it is) then this WG is a waste of time.
>
> If it's a requirement on developers of RDF storage engines, parsers etc, then that's fine, though I suspect authors of many systems in use today haven't read it. It's not written in especially engineer-friendly language*.
>
> * I don't even think that's necessarily a problem - the SQL specs as a counter example are more engineer friendly, but consequently are extremely verbose, and it's hard to pin down the precise intention in some places.
I think SQL provides some good cautionary tales. There's a spec called the Direct Mapping which maps SQL data to RDF, constructing IRIs to identify rows by concatenating the ordered values of the primary key. A recent comment on that spec was "primary keys have order?" There was no abstract syntax for us to go back to in order to justify our spec, only some distant artifacts which imply that they must have order.
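To make the order-dependence concrete, here's a rough sketch of that kind of row-IRI construction (illustrative only, not the normative Direct Mapping algorithm; the function and parameter names are mine):

```python
from urllib.parse import quote

def row_iri(base, table, pk_columns, row):
    """Build a row-identifying IRI by concatenating the primary-key
    column/value pairs in their declared order.  Illustrative sketch,
    not the normative Direct Mapping algorithm."""
    pairs = ";".join(
        f"{quote(col)}={quote(str(row[col]))}" for col in pk_columns
    )
    return f"{base}{quote(table)}/{pairs}"

# The declared order of pk_columns matters: reversing it yields a
# different IRI for the same row, hence the "primary keys have order?"
# question.
print(row_iri("http://example.com/base/", "People",
              ["fname", "lname"],
              {"fname": "Bob", "lname": "Smith"}))
# → http://example.com/base/People/fname=Bob;lname=Smith
```

Without an abstract syntax saying the key columns form a sequence rather than a set, nothing pins down which of the two possible IRIs is *the* identifier for the row.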
If we provide a sufficiently helpful foundation, then e.g. the SPARQL or Provenance WG could say "the interpretation of a <cough>named graph</cough> is conveyed by the predicates between it and the database root of authority." As it stands, RDF graphs are sort of self-describing in that they are all predicated on "if you believe this graph", and the consumer is on their own to decide that. Providing a consistent place from which consumers could follow their noses would allow one to push that condition up a level to "if you believe this network of graphs" (where the network may span datasets).
I'm not sure we can provide such a foundation without committing robots.txt-y crimes like prescribing what goes into the default graph, but it would at least be nice if the Provenance WG didn't say "the root is in the default graph" while SPARQL said "the root is in the service description graph." Maybe the best we can do is define the concept and let time work out where to implement it.
> - Steve
>
> --
> Steve Harris, CTO
> Garlik, a part of Experian
> +44 7854 417 874 http://www.garlik.com/
> Registered in England and Wales 653331 VAT # 887 1335 93
> Registered office: Landmark House, Experian Way, Nottingham, Notts, NG80 1ZZ
>
>
--
-ericP