My question about the distinction between "deduction" and other forms of
inference was posed to help me better understand the points you have been
making about the utility of non-deductive inferences.
From your response, I take it that "deduction" is the process of finding
a proof in some theory. Thus, "deductions" (deduced results) are
precisely those results that are provable in some (accepted) proof
theory. (And, maybe, for a proof theory to be acceptable, it must be
sound with respect to some accepted model theory.)
On this basis, I understand the further points you are making to be that
there may be useful results (inferences) that cannot be proven. That, I
guess, takes us into the question of how dependable a result needs to be
in order to be useful.
Am I following your key points?
(This leaves me wondering whether it is generally possible to turn any
non-deduction into a deduction by strengthening the accompanying proof
theory. Picking an example from another thread here: based on a given
knowledge of airports, I might usefully infer, via negation as failure
(NAF), that the airport closest to my current location is LHR, because I
don't know of a closer one (and there's a general presumption that I
know about airports close to my current location). This is not a
provable deduction, but maybe it becomes one if we add to the proof
theory concerned an axiom to the effect that a given list of airports is
complete.)
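
A minimal sketch of that NAF step in Python, to make the inference
concrete (the airports and distances are invented for illustration):

   # Everything I happen to know about nearby airports (distance in km).
   known_airports = {"LHR": 35, "LGW": 60, "STN": 75}

   def nearest_airport(known):
       # Negation as failure: no closer airport can be shown to exist
       # from what I know, so treat the list as complete and take the
       # minimum.
       return min(known, key=known.get)

   print(nearest_airport(known_airports))   # -> LHR

The step "treat the list as complete" is the completeness axiom in
disguise; stating it explicitly is what would turn this into a
deduction.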
#g
--
At 15:58 01/12/03 -0500, Drew McDermott wrote:
> [Graham Klyne]
> Can you please point me at a resource that explains the precise
> distinction between "deduction" and other forms of inference?
>
>Consulting my ancient undergraduate logic textbook (by Angelo Margaris,
>published 1967), under "deduction" in the index we find a definition
>of "a" deduction, namely, a series of formulas each of which is either
>an axiom or results from applying an inference rule to previous
>formulas. Then one could say that "deduction" (the technique) is
>whatever produces the formula at the end of "a deduction" (the series
>of formulas).
>But that's not terribly enlightening.
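>
>(A toy rendering of that definition in Python, assuming modus ponens is
>the only inference rule and writing "A implies B" as the tuple
>("->", A, B); just an illustration, nothing from Margaris:
>
>    def is_deduction(seq, axioms):
>        # Check the series: every formula is an axiom or follows from
>        # two earlier formulas by modus ponens.
>        for i, f in enumerate(seq):
>            if f in axioms:
>                continue
>            if not any(seq[j] == ("->", seq[k], f)
>                       for j in range(i) for k in range(i)):
>                return False
>        return True
>
>    axioms = {"p", ("->", "p", "q")}
>    print(is_deduction(["p", ("->", "p", "q"), "q"], axioms))  # True
>
>The "technique" is then whatever search finds such a series ending in
>the formula you want.)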
>
>A better definition comes by taking into account the semantics of
>logical languages (found in another chapter). Anything that can be
>deduced is true in all models of a theory (and, if the theory is
>complete, vice versa). This is the reason that deduction is
>conservative: if you can think of any interpretation of the given
>facts, no matter how wild, in which the statements you start with are
>true, then if P is false in that interpretation it cannot be deduced.
>(Unless the statements you start with are inconsistent, in which case
>there _are_ no interpretations that make them all true.)
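>
>(The same point as a brute-force Python sketch over propositional
>interpretations, with formulas encoded as functions from an
>interpretation to a truth value; a toy check, not a theorem prover:
>
>    from itertools import product
>
>    def entailed(premises, p, atoms):
>        # p follows just in case no interpretation, however wild,
>        # makes every premise true while making p false.
>        for bits in product([False, True], repeat=len(atoms)):
>            world = dict(zip(atoms, bits))
>            if all(prem(world) for prem in premises) and not p(world):
>                return False
>        return True
>
>    # premises: a, a -> b; candidate conclusion: b
>    premises = [lambda w: w["a"], lambda w: (not w["a"]) or w["b"]]
>    print(entailed(premises, lambda w: w["b"], ["a", "b"]))  # True
>
>With inconsistent premises the loop never finds a world where they all
>hold, so everything counts as entailed, as noted above.)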
>
>When one philosopher says "P is possible," and the other retorts that
>it's "only logically possible," it's exactly this sense of possibility
>they have in mind. Those who expect great things from deduction hope
>to make many commonsense inferences logically necessary by supplying
>the appropriate axioms. For instance, we'd like to infer that you
>know your name. It may be physically impossible, or incredibly
>unlikely, that you have forgotten your name, but it's not logically
>impossible unless we supply an axiom that says "Everybody knows their
>own name." Then we think of the possibility of Alzheimer's, and
>realize that this is trickier than we thought.
>
>Techniques like probabilistic reasoning with Bayes nets can be thought
>of as deductive or nondeductive, and it is easy to slip from one mode
>to the other without realizing it. Let's assume that there is a
>deductive theory in which a Bayes net and its boundary conditions can
>be described, and the conclusions you arrive at are precisely those
>licensed by the usual algorithms. (Actually expressing this theory is
>probably harder than you think, but let that pass.) Now we will have
>a theorem such as P("Klyne knows his name", 0.9999976). So far,
>deduction. But if we slip to "Therefore, Klyne knows his name," we
>have interpreted the conclusion nondeductively.
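>
>(The slip in miniature, with invented numbers standing in for the real
>net:
>
>    # One-parent net: Impaired -> KnowsOwnName; all numbers made up.
>    p_impaired = 0.0001
>    p_knows = (1 - p_impaired) * 0.9999999 + p_impaired * 0.98
>
>    print(p_knows)             # a theorem-like number; still deduction
>    believe = p_knows > 0.999  # the nondeductive step: number to belief
>
>Computing p_knows is deduction in the assumed theory; treating the
>threshold test as "therefore, he knows his name" is not.)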
>
>Decision theorists can postpone the inevitable one step further by
>having all _behavior_ depend only on expected utilities rather than
>beliefs. I don't need to actually _believe_ that Klyne knows his
>name; I just have to realize that if I want to answer the question
>"Does Klyne have a middle name?" the action with the highest expected
>utility is to send him an e-mail message with the question. One
>problem is that to prove that an action has the highest expected
>utility I have to be able to reason about all possible actions, not by
>running through an explicit list, but somehow. Another problem is
>that it is much more efficient to reason in terms of possibly wrong
>beliefs than in terms of certain probabilities. In the present
>example, I'd like to believe that after asking Klyne the question and
>getting the answer I will then know whether he has a middle name. But
>all I can conclude is that the conditional probability of "Klyne has a
>middle name" given that he replies "No" is 0.001495. (It's much
>higher than you'd expect because of the chance that he may conceal the
>truth, not out of malice, but in order to spoil the example.)
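>
>(A toy version of the expected-utility comparison, cheating by running
>through an explicit list of actions, which is exactly the first problem
>above; utilities and probabilities invented:
>
>    actions = {
>        "email Klyne": {"p_informative": 0.95, "value": 10, "cost": 1},
>        "just guess":  {"p_informative": 0.50, "value": 10, "cost": 0},
>    }
>
>    def expected_utility(a):
>        # Value of an informative outcome, discounted by its
>        # probability, minus the cost of acting.
>        return a["p_informative"] * a["value"] - a["cost"]
>
>    best = max(actions, key=lambda n: expected_utility(actions[n]))
>    print(best)   # -> email Klyne
>
>Note that no belief about Klyne's name or middle name appears anywhere;
>only the numbers do.)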
>
> -- Drew
>
>
>P.S. One might object that I can't really be certain about the
>probabilities, not to very many significant digits. No, but you'll
>almost certainly never be contradicted if you act as though these
>numbers really are completely accurate.
>
>
>--
> -- Drew McDermott
> Yale University CS Dept.
------------
Graham Klyne
For email:
http://www.ninebynine.org/#Contact