Phil
I agree with your reasoning, but consider that context does not necessarily
require human interrogation to establish credibility. My feeling is that
rich stochastic algorithms could be employed over semantic data layers to
establish 'degrees' of 'relevance' and 'authority' around any contextual
information found (i.e. to what degree can I automatically derive context
from the URI under investigation and to what degree can I believe the
information that has been presented?). Hence, I personally consider that
the trick to success here is establishing and standardising on the
algorithms used to automatically derive context and accepting that the
answers they return may well not be black or white.
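To make the idea concrete, here is a minimal sketch of what a graded, non-boolean answer might look like. Everything in it is an illustrative assumption of mine (the function name, the linear weighting, the [0, 1] scales), not a proposed standard:

```python
# Hypothetical sketch: scoring a metadata assertion's credibility as a
# continuous degree rather than a true/false verdict. The weighting scheme
# below is an assumption for illustration only.

def credibility(relevance: float, authority: float,
                relevance_weight: float = 0.5) -> float:
    """Combine a relevance score (to what degree can context be derived
    from the URI under investigation?) with an authority score (to what
    degree can the presented information be believed?) into a single
    degree of belief in [0, 1]."""
    if not (0.0 <= relevance <= 1.0 and 0.0 <= authority <= 1.0):
        raise ValueError("scores must lie in [0, 1]")
    w = relevance_weight
    return w * relevance + (1.0 - w) * authority

# A consuming system then gets a shade of grey, not black or white:
score = credibility(relevance=0.8, authority=0.6)  # 0.7
```

The point is only the shape of the interface: whatever stochastic algorithms were standardised, their output would be a degree like this rather than a boolean.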
This may well have a profound effect on the way that the Semantic Web is
perceived and/or used. Rather than using metadata as an ultimate
declarative source of descriptive annotation, it may well only represent
one piece of a much richer view of the world.
For me it is important to remember that all Information Systems, be they
automated or not, have their roots in the real world and merely abstract
aspects of its infinite complexity and beauty. Relying on metadata
alone to describe such an incalculably encompassing problem space has to be
a folly. ‘The answer is that within many applications we should (control
capability), but in the Web as a whole we should not. Why? Because when you
look at the complexity of the world that the Semantic Web must be able to
describe, you realise that it must be possible to use any amount of power
as needed. The success of the (current) Web is that hypertext is so
flexible a medium that the Web does not constrain the knowledge it tries to
represent. The same must be true of the web of meaning. In fact, the web of
everything we know and use from day to day is complex; we need a strong
language to represent it’ – TBL’s words not mine.
Regards
Phil Tetlow
Senior Consultant
IBM Business Consulting Services
Mobile. (+44) 7740 923328
From: "Phil Dawes" <pdawes@users.sourceforge.net>
Sent by: www-rdf-interest-request@w3.org
Date: 08/10/2004 18:59
To: Patrick Stickler <patrick.stickler@nokia.com>
Cc: www-rdf-interest@w3.org
Subject: URIQA thwarted by context problems?
Hi Patrick,
I'm afraid that the more work I do with RDF, the more I'm having
problems seeing URIQA working as a mechanism for bootstrapping the
Semantic Web.
The main problem I think is that when discovering new information,
people are always required to sort out context (a point made by Uche
Ogbuji on the rdf-interest list recently).
When identifying new terms, some mechanism has to exist to decide
whether the author's definition of the term fits with its use in the
instance data, and that that tallies with the context in which the
system is attempting to use the data. To my mind this prohibits a
system 'discovering' a new term without a human vetting and managing
its use.
Of course this doesn't prohibit the decentralisation of such
context-management work - e.g. a third party could recommend a
particular ontological mapping of terms based on an agreed context. I
just don't see machines being able to do this work on an ad-hoc basis
any time soon.
You've been doing a lot of work on trust/context etc.. in addition to
URIQA, so I'd be interested to hear your views on this.
Many thanks,
Phil