I'm guessing that Carl Hewitt is mindful of the problem of modeling inconsistent assertions of facts in database and other information-handling systems.
It's not clear to me that the problem is one that applies to logic systems. I see it as a computer-science problem involving representation of assertions that are not mutually consistent and that are of a contingent/empirical nature.
Training some form of machine intelligence to reason over such a corpus would require a robust means of dealing with such conditions, including, first of all, a means of distinguishing them.
Accepting all of the observations as asserted propositions in the usual manner (i.e., as accepted hypotheses) is not going to work, not least because temporality and belief come into it along with plain inaccuracy. Having some more applicable formalized/heuristic inference system would certainly be valuable in such an undertaking. That's probably not going to be a logic in the sense logic is ordinarily understood. Mistaken conclusions will be an issue. To make this the duty of a system of logic strikes me as some sort of category mistake.
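To make the distinction concrete, here is a minimal sketch in Python (all names are illustrative, not from any particular system): rather than admitting each observation as an accepted proposition, one records it with its source and time, so that mutually inconsistent assertions can at least be identified before any inference is attempted.

```python
# Hypothetical sketch: tag assertions with provenance so mutually
# inconsistent ones can be distinguished rather than naively accepted.
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    claim: str       # propositional content, e.g. "bridge_open"
    polarity: bool   # asserted true or asserted false
    source: str      # who asserted it
    timestamp: int   # when it was asserted

def conflicts(corpus):
    """Return pairs of assertions that affirm and deny the same claim."""
    found = []
    for i, a in enumerate(corpus):
        for b in corpus[i + 1:]:
            if a.claim == b.claim and a.polarity != b.polarity:
                found.append((a, b))
    return found

corpus = [
    Assertion("bridge_open", True, "sensor_A", 100),
    Assertion("bridge_open", False, "sensor_B", 105),
    Assertion("toll_paid", True, "gate", 101),
]

for a, b in conflicts(corpus):
    print(f"conflict on {a.claim!r}: "
          f"{a.source}@{a.timestamp} vs {b.source}@{b.timestamp}")
```

This does nothing to resolve the conflict; it only makes it visible as data, which is the precondition for whatever belief revision or weighting one then applies.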
I don't think any of this is news. It doesn't seem to me that it's logic's fault that these situations fail the core requirement of conventional logic (and mathematics) for well-definedness.
I'll go farther and say that "inconsistency" is being abused when the inconsistency of a formal logical system is confused with inconsistency of actions, processes, and statements/justifications in the world.
It is a marvel of human ingenuity that bridges do not, in fact, fall down with alarming regularity, though they do fall down, and sometimes a human failing is at the root of it. Having been raised in Tacoma, Washington, I lived near a famous example of that, along with floating bridges that sometimes sink and sometimes float away.
I am also mindful that cathedrals and other edifices have stood for centuries without the benefit of our contemporary capabilities. I suppose the first thing that wonderful machine intelligence is going to have to deal with is the human affection for hyperbole.
- Dennis
-----Original Message-----
From: fom-bounces at cs.nyu.edu [mailto:fom-bounces at cs.nyu.edu] On Behalf Of Timothy Y. Chow
Sent: Saturday, August 24, 2013 18:58
To: fom at cs.nyu.edu
Subject: Re: [FOM] "Hidden" contradictions
Carl Hewitt wrote:
> Inconsistencies are pervasive in large software systems. Unfortunately,
> these inconsistencies cause "bridges to fall down" with alarming
> regularity. In some cases, it has been impossible to trace back which
> inconsistencies caused a disaster. See the ACM Risks Forum newsgroup
> moderated by Peter Neumann for an ongoing saga. Some contradictions have
> been discovered using subtle reasoning.
Could you be more specific? I skimmed the file
http://www.csl.sri.com/users/risko/risks.txt
but was not able to identify which "ongoing saga" in particular you were
referring to.
Also, what exactly do you mean by an "inconsistency" in a large software
system? I'm assuming you're not using the word "inconsistency"
interchangeably with the word "bug." I'm guessing that you're using
"inconsistency" to refer to formal software *specifications* rather than
to software itself?
Tim
_______________________________________________
FOM mailing list
FOM at cs.nyu.edu
http://www.cs.nyu.edu/mailman/listinfo/fom