At 22:14 +0300 5/5/03, Jeremy Carroll wrote:
>> If your contention is that there are some things that current
>> systems can handle, but optimizations will be needed for larger
>> community acceptance, I probably wouldn't disagree, but also wouldn't
>> think these hold up the move to PR - optimizations generally come
>> later in the day for these sorts of specs - what's more, there are
>> already a couple of implementations where it appears to me that OWL
>> reasoners can call out to other processes for doing arithmetic (or
>> anything else for which there is a well-known process that is believed
>> to be logically consistent) and thus it is a question of these being
>> cited and more integrated.
>
>One of my contentions is that there are *easy* OWL DL tests that no currently
>implemented system can solve.
>This points to one of two things:
>- work that needs to be done by implementors to get their systems to do more
>- work needed by the editors to make it clear that just because you can do
>some reasoning in your head that does not mean that you should expect an OWL
>reasoner to do it. (OWL Full has some health warnings about no complete
>reasoners but I am not aware of any for OWL DL).
Adding the health warnings makes some sense to me.
>
>*can* means before the heat death of the universe, rather than in some longer
>time frame.
>
>The tests I have been working on today fit that pattern.
>
>These can be done in my head, because I can reason about the cardinalities of
>classes. An OWL DL reasoner which does not have this ability is severely
>challenged. Either we should make it clear that users should not expect
>reasoners to have this ability, or we should have tests that exercise these
>abilities, and wait in CR until reasoners can pass them.
I don't like the idea of waiting in CR forever, so I agree we need to
discuss these issues more, but I am still confused about why you
think sitting in CR would make sense for problems that are decidable
but very large.
Just as an example, I can write a trivial prolog tree search program
for chess, it requires about 200 lines of Prolog if I recall
correctly (I did this for a Prolog class once) - it will eventually
terminate, and tell me whether there is a forced win in chess. The
problem is that chess has something like 10^120 states in the
standard game tree (assuming I break loops, which can be done in
chess), so I need both a huge amount of memory (beyond the total
number of bits in the world today) and a lot of time (heat death of
the universe). Would you argue there isn't enough evidence for
Prolog to move to PR if we were the Prolog WG?
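Just to make the point concrete, here is a toy version of that kind of
exhaustive game-tree searcher (a sketch in Python for the game of Nim,
not the Prolog chess program - the names and the game are mine, chosen
only because the tree is small enough to actually finish):

```python
from functools import lru_cache

# Decides "does the player to move have a forced win?" by exhaustive
# game-tree search, in the same spirit as the chess searcher: the
# algorithm is complete and always terminates, but on chess the very
# same search would have to visit on the order of 10^120 positions.
# Here the game is normal-play Nim (last player to take a stone wins),
# so the tree is tiny and the answer comes back instantly.

@lru_cache(maxsize=None)
def forced_win(piles):
    """True iff the player to move can force a win from these pile sizes."""
    if all(n == 0 for n in piles):
        return False  # no stones left: the previous player took the last one and won
    for i, n in enumerate(piles):
        for take in range(1, n + 1):
            child = list(piles)
            child[i] -= take
            # A move is winning if it leaves the opponent in a losing position.
            if not forced_win(tuple(sorted(child))):
                return True
    return False
```

For example, `forced_win((1, 2))` is a win for the mover, while
`forced_win((1, 2, 3))` is a forced loss - and nothing in the code
would need to change to "solve" chess, only the memory and the time.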
Several implemented reasoners can solve any decidable problem, but
have either physical restrictions (like running out of memory) or
heuristic limits (like Jos's restrictions in Euler as to when to cut
off search) - I guess I don't understand why this wouldn't count as
implementing our system, rather than requiring an infinitely long CR.
-JH
p.s. btw, we don't even need to go into DL for this -- OWL Lite
algorithms are exponential, so when things get large they will take a
very long time to terminate -- I guess I expect this of all
implemented systems that don't explicitly promise polynomial time or
better (since I personally don't believe that P = NP in non-quantum
computing)
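The flavor of that exponential worst case can be seen in a brute-force
satisfiability check (a sketch of mine, not any actual OWL Lite
reasoner): it always terminates, but it may examine 2**n assignments,
and at n = 300 that already dwarfs the number of atoms in the
observable universe.

```python
from itertools import product

def brute_force_sat(n_atoms, clauses):
    """Exhaustive CNF satisfiability check over n_atoms boolean atoms.

    clauses is a list of clauses, each a list of signed 1-based
    literals (e.g. [1, -2] means "atom1 or not atom2").  Returns a
    satisfying assignment as a tuple of booleans, or None.  The loop
    is guaranteed to terminate - after as many as 2**n_atoms tries.
    """
    for bits in product([False, True], repeat=n_atoms):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None
```

So `brute_force_sat(2, [[1, 2], [-1]])` comes back right away, but
nothing short of P = NP promises that the same loop stays feasible as
n grows.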
>
>I take the well known pattern of 1-to-N relationships from UML, and encode
>them in OWL, as relationships between finite sets.
>
>An easy (soluble) example is:
> I take a singleton class. I take a 1-to-2 relationship between that and a
>second class. I take a 1-to-3 relationship between that and a third class;
>now I add a 1-to-6 relationship between the first class and the third class.
>This is consistent.
>
>If I increase the numbers to beyond what one can count on one's fingers, I
>will challenge most systems severely: 200, 300, 60000 - not in themselves
>unreasonable numbers. For me, doing it in my head, the larger numbers do not
>make the problem much harder.
>
>If I replace the 6 with a 5 then the file is inconsistent.
>Now, I can replace the original singleton set with an infinite set, and
>because infinity*2*3 = infinity*5 the ontology is now consistent.
>Using a maxCardinality constraint with some large number I can instead insist
>that the first class is finite, though I don't really know what size to pick.
>Since the size of the original class has no bearing on the arithmetic, it
>does not matter.
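Jeremy's counting argument can be written out as a quick check (a
sketch in Python with his numbers - no OWL reasoner involved, and the
function names are mine): a 1-to-n relationship between finite classes
multiplies the class size by n, so two chains of such relationships
between the same pair of classes are consistent only if the products
agree.

```python
def path_multiplier(ns):
    """Composite size multiplier of a chain of 1-to-n relationships.

    A 1-to-n relationship forces |target| == n * |source| when the
    classes are finite, so a chain multiplies the factors together.
    """
    m = 1
    for n in ns:
        m *= n
    return m

def consistent(path_a, path_b):
    """Can both chains hold between the same finite end classes?"""
    return path_multiplier(path_a) == path_multiplier(path_b)
```

So `consistent([2, 3], [6])` holds (2*3 == 6, the soluble example)
while `consistent([2, 3], [5])` does not - and the check silently
stops applying once the first class is allowed to be infinite, since
infinity*2*3 = infinity*5.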
>
>The tests I have been working on are found in
>
>http://www.w3.org/2002/03owlt/editors-draft/draft/proposed-dl-900-arith
>
>In the abstract syntax these are:
[snip]
--
Professor James Hendler hendler@cs.umd.edu
Director, Semantic Web and Agent Technologies 301-405-2696
Maryland Information and Network Dynamics Lab. 301-405-6707 (Fax)
Univ of Maryland, College Park, MD 20742 240-731-3822 (Cell)
http://www.cs.umd.edu/users/hendler