“Insofar as we share a common notion of the things in the world, we
do have a common semantics, i.e., via a common reference.”

It seems to me that some attempts at
models (languages, concepts, etc.) try to map the “things in the world”
directly via reference, while others do not.

Language is a case where this does not
hold: most language use aims not at describing the world but at
persuading, cajoling, and so on. I recall a particularly good discussion
of this in George Lakoff’s Women, Fire, and Dangerous Things.

Even when language is used to describe a
situation, it is not always clear how reference works. David Armstrong
gives as an example the statement that “there are at least two people in
the room” when there are in fact many more. To what does the statement
refer (e.g., which two people)? You have to go through quite a few
contortions to rescue reference.
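
To make the difficulty concrete, here is a minimal sketch (my own
rendering, not Armstrong's): in first-order terms the statement is purely
quantificational, and nothing in its truth conditions singles out a
particular pair.

```python
# Hypothetical room of five people (the names are mine).
room = {"alice", "bob", "carol", "dan", "eve"}

# "There are at least two people in the room":
#   Ex Ey ( InRoom(x) & InRoom(y) & x != y )
at_least_two = any(x != y for x in room for y in room)
print(at_least_two)  # True

# No particular pair is "the" referent: every distinct ordered
# pair is an equally good witness for the quantifiers.
witnesses = [(x, y) for x in room for y in room if x != y]
print(len(witnesses))  # 20
```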

To give a more practical example: in a
recent ontology mining exercise on an operational system, the code included
the notion of a node, a point in space described by the traditional three
co-ordinates. The problem was that, because the system needed to be
distributed, the representation of the same node could (and did) appear a
number of times. This is a very simple example of a common feature of
currently implemented systems: the requirements of system performance lead
to a downgrading of the importance of maintaining a direct relationship
with the “things in the world” via reference, and hence (legitimately)
reference becomes indirect.
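
A hedged sketch of the kind of thing described (the class and field names
are mine, not the system's): once the same point in space can be
represented on several hosts, identity of representation no longer tracks
identity of referent.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A point in space, as held by one host of a distributed system."""
    x: float
    y: float
    z: float
    host: str  # which replica holds this representation

# One "thing in the world" (a single point), represented twice:
n1 = Node(1.0, 2.0, 3.0, host="server-a")
n2 = Node(1.0, 2.0, 3.0, host="server-b")

print(n1 is n2)                                  # False: two representations
print((n1.x, n1.y, n1.z) == (n2.x, n2.y, n2.z))  # True: one referent
```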

So what seems to me to characterise an
ontology, as a kind of model, is the desire to map the “things in the
world” directly via reference, and language, concepts, etc., do not
necessarily share that desire.

I am not sure that this desire has been
made explicit in the current Ontology Framework Draft Statement for the
Ontology Summit, and I think it might usefully be.

This deserves a longer discussion, because it's a good answer.
However, in short, I think you're making my point, not refuting it. My
point was why use the term "concept" to describe that which we're
quantifying over (topic maps, thesauri, et al., are not exempt) in our
ontologies *unless* we're willing to go through something like the exercise you
have below? The point is that the things we're quantifying over are
(hopefully) not concepts at all but rather the things we wish to talk
about. We may also wish to talk about tree-concepts as well as trees,
precisely as you did below (BTW, thanks for previewing to anyone interested
what such a theory might look like). A second and more pressing problem
is that, while computer science ontology-talk blithely discards any kind of
realism by the extensive (and as Barry points out, slipshod) use of the term
"concept", there seems to be a hasty implicit willingness to at least
regard concepts as real. One finds a contradiction here -- concept talk lends
reality to at least one ontological category - the concept - or at least
accepts reliable epistemic access to such things. Why grant this, yet have
no confidence in our epistemic access to trees and dogs? Concepts seem
much more difficult to get a handle on, especially if they're private.
Dogs and trees are public; we share our access to them.

.bill

On Apr 21, 2007, at 16:26, Obrst, Leo J. wrote:

Bill,

Although I think concepts are internal
(call them instead ideas or semantic senses, if you wish), I think they
appropriately point to things in the world, so the shared semantics has two
aspects: sense and reference. The latter is the thing in the world; the
former is the placeholder for that, and indexed by our language constructs.
Insofar as we share a common notion of the things in the world, we do have a
common semantics, i.e., via a common reference. One might say in fact that the
degree to which our thoughts match or map to the things in the world is the
degree to which we have a common way of thinking about the things of the world.
And then, finally, the degree to which our terminology and compositions of our
terminology align with those common thoughts is the degree to which we can
communicate with reasonably shared semantics.

What I don't understand is how my term can
map directly to a thing in the world and bypass my thoughts.

Let's do an experiment:

Refer directly to a specific tree without
1) using language, 2) pointing, or 3) thinking about that tree. I'd say you can
probably do without (1) or (2) (e.g., most animals), but how can you do without
(3)?

Can the tree be physically in my head? If
you say that the term-reference relation is in my head, then I would say that
term-reference relation is a concept (yes, an entity, class, relation,
property, instance, logical operator, if you will -- as a way of
characterizing the kinds of concepts), a reified representation in my
head, and that that term-reference relation in fact can dispense with the
term, since we think that animals can know the world without necessarily having
language. Is the tree in the head of my dog? I don't think so: I think the dog
has an idea about the tree.

I still think that one of the causes of
our dissonance is that we are talking about 1) ontology, 2) logic, and 3)
semantics, and not keeping these things straight. I would say that we build
engineering models (call them engineering ontologies) which try to represent the
real world. However, those engineering models consist of two items: 1) labels
and 2) the representation of the meaning of those labels where the meaning is
expressed as formal classes, entities, relations, properties, instances, rules,
etc., that are supposed to align with what we think is the way the real
world is and "means". Now, labels are terms, i.e., the names we give
to these representations. As such they are elements of our language, abstracted
or idealized. We have other terms we use in ordinary communication that index
those labels. Both the terms and the labels are vocabulary; their
interpretations, i.e., the actual formal models (stand-ins or representations
for the real world things) they map to, and the mappings, are their
semantics. Because a logic itself is a language, we have another filigree
of potential dissonance.
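
As a toy illustration of those two items (the vocabulary and model here
are entirely my own, much simplified): labels on one side, an
interpretation mapping them into a formal model that stands in for the
world on the other, with ordinary terms indexing the labels.

```python
# Toy engineering ontology: (1) labels, and (2) an interpretation
# mapping those labels to a formal model (a stand-in for the world).
domain = {"oak-17", "fido"}  # stand-ins for real-world things

interpretation = {
    "Tree":   {"oak-17"},            # a class: a set of domain elements
    "Dog":    {"fido"},
    "nextTo": {("fido", "oak-17")},  # a relation: a set of pairs
}

# Ordinary-language terms index the ontology's labels:
lexicon = {"tree": "Tree", "dog": "Dog", "hound": "Dog"}

# "a hound is next to a tree", checked against the formal model:
holds = any(
    (d, t) in interpretation["nextTo"]
    for d in interpretation[lexicon["hound"]]
    for t in interpretation[lexicon["tree"]]
)
print(holds)  # True
```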

I use "concept" not because I am
a conceptualist (I'm not), but because I think that that notion abstracts over
stuff like entity, class, property, relation, attribute, instance, rule, etc.

Your note hits precisely at the issue that I think plagues concept
talk.

(1) If concepts are private to bearers, then why pretend to talk about
"shared semantics", "shared meanings" and so on, as is
common in the literature on ontology in computer science? For this to
work, there would have to be something (anything!) by virtue of which such
concepts could be shared. There are, as you know, theories about how that
may come about (I'm thinking of Wittgenstein's private language argument), but
nobody's talking about that in computer science ontology.

(2) If concepts are not private, then there must be some nexus that
supports the non-private component of them that is shared. Generally, we
of a realist bent take that to be *reality*. Barry's comment concerning
the bio-ontologist who thinks of their computational bio-ontology as
representing not concepts but biological reality comes to mind. If
we still want concepts (say, to talk about someone's personal concepts),
some form of conceptual realism can be employed to relate the two.

In either case, I don't think any useful work is done whatsoever by
calling the things denoted by linguistic terms in *computational* ontologies
"concepts" with no further comment. We humans (at least those
of us who are not concept theorists) seem to resort to to the use of the term
"concept" for the same reason that we call something we can't
remember the name of a "thingy" or "whatchamacallit" - in
this form it's a kind of forgivable intellectual laziness. That, or we're
*really* talking about concepts in which case we have lots of work to do.
Rather, wouldn't it be better - especially if one doesn't care about
(philosophically-motivated) ontology - simply to use the more neutral terms of
"property", "relation", and "object" that can be
taken to correspond to the denotation of relation- (unary and
greater-than-unary) and constant-terms in mathematical logic. Nicola
Guarino, and later with Chris Welty, went this direction. This relates to
Welty's comment of yesterday about there being nothing new in computer science
ontology -- it's almost as if computer scientists engaged in the "semantic
technology" field are afraid to use terms that might make their enterprise
seem less sexy and "semantic", so they stick with "concept"

On Apr 20, 2007, at 21:27, Obrst, Leo J. wrote:


Everything is a concept: entities,
relations among them, properties, attributes, even many
instances/individuals (days of the week, Joe Montana, etc.), especially
when you think of a concept in the animal mental apparatus as a
placeholder for something real in the real world (I am a realist). Sure, I
have a concept for 'Joe Montana'. Is that concept a general notion, i.e.,
a class of something? No.

The general problem (from my perspective)
is that we are typically addressing two perspectives at once: 1) ontology,
i.e., what exists in the world? and 2) semantics, i.e., what is the
relationship between our ways of talking/thinking and those things in the
world? To me it's
clear that we are talking about (1) things of the world, but our language (and
our thought, I would say) interposes another layer or two. I would say there
are minimally 3 things: 1) our language (terms and compositions of terms),
2) the senses of terms (and their compositions) which we might characterize as
concepts, and 3) real world referents that those senses or concepts somehow
point to. In formal semantics, a good theory of reference (i.e., (3)) is hard
to come by.
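
A minimal sketch of the three layers (the encoding and examples are mine):
two speakers can differ at layers (1) and (2) and still converge at layer
(3), which is what a common reference buys.

```python
# Toy encoding of the three layers for two speakers.
referent = "oak-17"  # (3) a stand-in for one real-world tree

speaker_a = {"term": "tree", "sense": ("woody", "tall plant"), "ref": referent}
speaker_b = {"term": "Baum", "sense": ("gives shade", "has leaves"), "ref": referent}

# Different language (1) and different senses (2) ...
print(speaker_a["term"] == speaker_b["term"])    # False
print(speaker_a["sense"] == speaker_b["sense"])  # False

# ... but a common referent (3), hence a shared semantics:
print(speaker_a["ref"] == speaker_b["ref"])      # True
```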

me-thinks this is a leftover from DL-speak
in which 'concept' refers to the classes, not the relationships.
I prefer the broader use of 'concept' whereby one speaks of the concept of
having a brother, or of being a mentor (which of course are relationships).

How come "relations" are a separate category from
"concepts"? Are relations not "conceptual" in the way
that "conceptual" are? If it is the case that 'concept' is just
parlor speak for those things that we typically represent with nodes in a
taxonomy or unary predicates in a logic, and if 'relation' is used to talk
about those things that are not "concepts" (i.e. the things we like
to represent with predicate terms of arity greater than one), then the
distinction seems artificial. Should there not be just
"concepts" divided into the 1-, 2- ... n-ary cases?