Alberto Reggiori wrote:
>
> Didier wrote:
>
> > Jonathan Borden proposed a solution to layer something like OWL on RDF
> > with unasserted triples. The problem is always the same: what we
> > introduce as new stuff in the semweb is 'context is different for
> > everybody'. This stands also for wondering what semantic level (thinking
> > of layering) a statement refers to (or its property refers to).
>
> too right Didier! :-) Let me present my own practical experience of using
> 'statement groups' (or 'contexts' or whatever jargon you want to use to
> name that!) in a real-world RDF application I have been writing for the
> last two years. In very simple terms, I used the concept of a group to
> 'scope' triples when they are inserted, retracted or retrieved from a
> store (where by scoping I mean something like the 'variable scope' of a
> usual programming language).
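A toy sketch in Python of what such group scoping might look like -- the
names and API here are my own invention for illustration, not Alberto's
actual code:

```python
class ScopedStore:
    """A triple store where every triple lives in a named group ('scope')."""

    def __init__(self):
        # map: group name -> set of (subject, predicate, object) triples
        self._groups = {}

    def insert(self, triple, group="default"):
        self._groups.setdefault(group, set()).add(triple)

    def retract(self, triple, group="default"):
        # retraction only touches the named group, like a local variable
        self._groups.get(group, set()).discard(triple)

    def retrieve(self, group=None):
        # group=None looks across all groups, like a global lookup
        if group is not None:
            return set(self._groups.get(group, set()))
        result = set()
        for triples in self._groups.values():
            result |= triples
        return result


store = ScopedStore()
store.insert(("#doc", "dc:creator", "Alberto"), group="provenance")
store.insert(("#doc", "rdf:type", "foaf:Document"))
# Retracting in one group leaves the other groups untouched:
store.retract(("#doc", "dc:creator", "Alberto"), group="provenance")
```

The point of the sketch is only that insert/retract/retrieve all take a
group argument, so the same triple can be managed independently in
different scopes.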
The concept of "dark triples" as a layering option seems to be getting a bit
misunderstood. The essence of "dark" or "unasserted" triples is simply this:
from a technical perspective, it is difficult (some would indeed say
impossible) to define a language such as OWL in RDF (given the constraints
placed on that language by the WebOnt charter etc.) if OWL is to have the
characteristics we desire while all RDF triples count as "truths".
So "unasserted triples" are a technical device that might, for example, be
used to implement "contexts" within the constraints placed on RDFCore's
ability to modify _this version_ of RDF. I am sure that if RDF had a native
concept of contexts, there would be no need for _another_ discussion of
"dark triples", as any such triples would simply be placed in their own
context. But that isn't an option for RDF as of June 2002, and WebOnt has a
requirement to proceed with RDF _as it is today_.
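As a rough illustration (my own sketch, not anything RDFCore or WebOnt has
specified), one way to picture dark triples is a store that keeps
unasserted triples apart from the asserted graph -- the RDF layer sees only
the asserted triples as truths, while a higher layer such as OWL may read
the dark ones as syntax for its own constructs:

```python
class Graph:
    """A graph that separates asserted triples from 'dark' ones."""

    def __init__(self):
        self._asserted = set()  # triples RDF treats as truths
        self._dark = set()      # unasserted triples, e.g. OWL syntax

    def add(self, triple, dark=False):
        (self._dark if dark else self._asserted).add(triple)

    def asserted_triples(self):
        # an RDF reasoner sees only these
        return set(self._asserted)

    def all_triples(self):
        # the OWL layer may additionally interpret the dark triples
        return self._asserted | self._dark


g = Graph()
# OWL machinery stored darkly: present, but not an RDF "truth"
g.add(("ex:Person", "rdf:type", "owl:Class"), dark=True)
# ordinary RDF data, asserted as usual
g.add(("ex:alice", "rdf:type", "ex:Person"))
```

Here the class-definition triple never enters the asserted graph, which is
exactly the escape hatch the "dark triples" device is meant to provide.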
The problem "unasserted triples" is trying to solve is simply: How do we
devise a model theory for OWL that is compatible with the model theory for
RDF, possibly even an extension of the model theory for RDF in some sense of
the word, and avoids the introduction of paradoxes and other such problems?

Now, will this same problem exist for every piece of "new stuff" on the
semantic web? I can't say; the hope is that, if we do this correctly, it
will not.
Jonathan