Concepts

Concepts are the constituents of thoughts. Consequently, they are crucial
to such psychological processes as categorization, inference, memory,
learning, and decision-making. This much is relatively
uncontroversial. But the nature of concepts—the kind of things
concepts are—and the constraints that govern a theory of
concepts have been the subject of much debate. This is due, at least
in part, to the fact that disputes about concepts often reflect deeply
opposing approaches to the study of the mind, to language, and even to
philosophy itself. In this entry, we provide an overview of theories
of concepts, and outline some of the disputes that have shaped debates
surrounding the nature of concepts. The entry is organized around
five significant issues that are focal points for many theories of
concepts. Not every theory of concepts takes a stand on each of the
five, but viewed collectively these issues show why the theory of
concepts has been such a rich and lively topic in recent years. The
five issues are: (1) the ontology of concepts, (2) the structure of
concepts, (3) empiricism and nativism about concepts, (4) concepts and
natural language, and (5) concepts and conceptual analysis.

On the first of these issues, the ontology of concepts, there are three
main views: concepts as mental representations, concepts as abilities,
and concepts as Fregean senses. The first of these maintains that
concepts are psychological entities, taking as its starting point the
representational theory of the mind (RTM). According to RTM, thinking
occurs in an internal
system of representation. Beliefs and desires and other propositional
attitudes enter into mental processes as internal symbols. For
example, Sue might believe that Dave is taller than Cathy, and also
believe that Cathy is taller than Ben, and together these may cause
Sue to believe that Dave is taller than Ben. Her beliefs would be
constituted by mental representations that are about Dave, Cathy and
Ben and their relative heights. What makes these beliefs, as opposed
to desires or other psychological states, is that the symbols have the
characteristic causal-functional role of beliefs. (RTM is usually
presented as taking beliefs and other propositional attitudes to be
relations between an agent and a mental representation (e.g., Fodor
1987). But given that the relation in question is a matter of having
a representation with a particular type of functional role tokened in
one's mind, it is simpler to say that occurrent beliefs just are
mental representations with a characteristic type of functional
role.)

Many advocates of RTM take the mental representations involved in
beliefs and other propositional attitudes to have internal structure.
Accordingly, the representations that figure in Sue's beliefs
would be composed of more basic representations. For theorists who
adopt the mental representation view of concepts, concepts are
identified with these more basic representations.

Early advocates of RTM (e.g., Locke (1690/1975) and Hume (1739/1978))
called these more basic representations ideas, and took them
to be mental images. But modern versions of RTM assume that much
thought is not grounded in mental images. The classic contemporary
treatment maintains, instead, that the internal system of
representation has a language-like syntax and a compositional
semantics. According to this view, much of thought is grounded in
word-like mental representations. This view is often referred
to as the language of thought hypothesis (Fodor 1975).
However, the analogy with language isn't perfect; obviously, the
internal symbol system must lack many of the properties associated
with a natural language. Nonetheless, like a natural language, the
internal system's formulae are taken to have subject/predicate form
and include logical devices, such as quantifiers and variables. In
addition, the content of a complex symbol is supposed to be a function
of its syntactic structure and the contents of its constituents.
Returning to Sue's beliefs, the supposition is that they are composed
of such symbols as DAVE, CATHY and TALLER and that her beliefs
represent what they do in virtue of the contents of these symbols and
how they are arranged.
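Although no proponent of the language of thought hypothesis offers
anything like program code, the compositional idea can be pictured with
a minimal sketch (in Python; the toy domain of three people and their
heights is a purely illustrative assumption, not part of any formalism
in the literature):

```python
# A minimal sketch of a language-of-thought-style system: complex symbols
# have syntactic structure, and their content is computed from the contents
# of their constituents plus how those constituents are arranged. The
# domain (three people and their heights) is an illustrative assumption.

heights = {"DAVE": 72, "CATHY": 66, "BEN": 64}  # hypothetical referents (inches)

def content(symbol):
    """Atomic symbols denote individuals; a complex symbol's content is a
    function of its predicate and its constituents' contents."""
    if isinstance(symbol, str):          # atomic symbol, e.g. "DAVE"
        return heights[symbol]
    predicate, *constituents = symbol    # complex symbol, e.g. ("TALLER", "DAVE", "CATHY")
    if predicate == "TALLER":
        x, y = (content(c) for c in constituents)
        return x > y
    raise ValueError("unknown predicate: " + predicate)

# Sue's beliefs as structured mental representations:
belief_1 = ("TALLER", "DAVE", "CATHY")
belief_2 = ("TALLER", "CATHY", "BEN")

# A structure-sensitive transition: given the two beliefs, token a third.
if content(belief_1) and content(belief_2):
    belief_3 = ("TALLER", "DAVE", "BEN")
    print(content(belief_3))  # True
```

The point of the sketch is only that the transition from the first two
beliefs to the third is sensitive to how the constituent symbols are
arranged, which is the kind of structure-sensitive process that
advocates of the language of thought hypothesis appeal to.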

The mental representation view of concepts is the default position in
cognitive science (Pinker 1994) and enjoys widespread support in the
philosophy of mind, particularly among philosophers who view their
work as being aligned with research in cognitive science (e.g.,
Carruthers 2000, Millikan 2000, Fodor 2003, Harman 1987, Margolis
& Laurence
2007).
Supporters of this view argue for it on explanatory
grounds. They maintain that concepts and structured mental
representations play a crucial role in accounting for the productivity
of thought (i.e., the fact that human beings can entertain an
unbounded number of thoughts), in explaining how mental processes
can be both rational and implemented in the brain, and in accommodating
the need for structure-sensitive mental processes (Fodor 1987; see also the entry
language of thought hypothesis).

Critics of this view argue that it is possible to have propositional
attitudes without having the relevant mental representations tokened in
one's head. Daniel Dennett (1977), for example, argues that most
people believe zebras don't wear overcoats in the
wild—and a million other similar facts—even though
they have never stopped to consider such matters. Dennett also notes
that computing systems can lack representations corresponding to the
explanations we cite in characterizing and predicting their behavior.
For example, it may make perfect sense to say of a chess-playing
computer that it thinks that it is good to get one's queen out
early, even though we know from how the computer is programmed that it
has no representation with that very content (see Dennett 1978, 1987
for these and related criticisms and Fodor 1987 for a response).

Other critics claim that RTM is too closely associated with
commonsense psychology, which they argue should be abandoned as a
stagnant and degenerate research program (Churchland 1981; see Horgan
& Woodward 1985 for a reply), or that developments in
computational modeling (esp. connectionism and dynamic systems theory)
offer alternatives, particularly to the language of thought version of
RTM (e.g., see Van Gelder 1995, Elman et al. 1996, McClelland et
al. 2010; see Fodor & Pylyshyn 1988, Marcus 2001, and Gallistel
& King 2009 for critical discussion of theories that don't employ
combinatorial structure.)

According to the abilities view, it's wrong to maintain that concepts
are mental particulars—concepts are neither mental images nor
word-like entities in a language of thought. Rather, concepts are
abilities that are peculiar to cognitive agents (e.g., Dummett 1993,
Bennett & Hacker 2008, Kenny 2010). The concept CAT, for example,
might amount to the ability to discriminate cats from non-cats and to
draw certain inferences about cats.

While the abilities view is maintained by a diverse group of
philosophers, the most prominent reason for adopting the view is a deep
skepticism about the existence and utility of mental representations,
skepticism that traces back to Ludwig Wittgenstein (1953/1958). One of the
most influential arguments along these lines claims that mental
representations are explanatorily idle because they reintroduce the
very sorts of problems they are supposed to explain. For example,
Michael Dummett cautions against trying to explain knowledge of a first
language on the model of knowledge of a second language. In the case of
a second language, it is reasonable to suppose that understanding the
language involves translating its words and sentences into words and
sentences of one's first language. But according to Dummett, one
can't go on to translate words and sentences of one's first
language into a prior mental language. “[T]here is really no
sense to speaking of a concept's coming into someone's
mind. All we can think of is some image coming to mind which we take as
in some way representing the concept, and this gets us no further
forward, since we still have to ask in what his associating that
concept with that image consists” (Dummett 1993, p. 98). In other
words, the mental representation itself is just another item whose
significance bears explaining. Either we are involved in a vicious
regress, having to invoke yet another layer of representation (and so
on indefinitely) or we might as well stop with the external language
and explain its significance directly. (For critical discussion of this
type of regress argument, see Fodor 1975, Crane 1995, Laurence &
Margolis 1997).

Not surprisingly, critics of the abilities view argue in the other
direction. They note difficulties that the abilities view inherits by
its rejection of mental representations. One is that the view is
ill-equipped to explain the productivity of thought; another is that it
can say little about mental processes. And if proponents of the
abilities view remain neutral about the existence of mental
representations, they open themselves to the criticism that explication
of these abilities is best given in terms of underlying mental
representations and processes (see Fodor 1968 and Chomsky 1980 for
general discussion of the anti-intellectualist tradition in the
philosophy of mind).

The view that concepts are Fregean senses identifies concepts with
abstract objects, as opposed to mental objects and mental
states (e.g., Peacocke 1992, Zalta 2001). Concepts are said to be the
constituents of propositions. For proponents of this view, concepts
mediate between thought and language, on the one hand, and referents,
on the other. An expression without a referent (“Pegasus”)
needn't lack a meaning, since it still has a sense. Similarly, the
same referent can be associated with different expressions (e.g.,
“Eric Blair” and “George Orwell”) because they
convey different senses. Senses are more discriminating than
referents. Each sense has a unique perspective on its
referent—a unique mode of presentation. Differences in cognitive
content trace back to differences in modes of presentation. It's for
this reason that the thought that George Orwell is Eric Blair lacks
the triviality of the thought that George Orwell is George
Orwell. Philosophers who take concepts to be senses particularly
emphasize this feature of senses. Christopher Peacocke, for example,
locates the subject matter of a theory of concepts as follows:
“Concepts C and D are distinct if and only if
there are two complete propositional contents that differ at most in
that one contains C substituted in one or more places for
D, and one of which is potentially informative while the
other is not” (Peacocke 1992, p. 2). In other words, C
and D embody differing modes of presentation. (See the entry
Frege
for discussion of the sense/reference distinction and for more on the
explanatory functions associated with senses. To avoid terminological
confusion, we should note that Frege himself did not use the term
“concept” for senses, but rather for the referents of predicates.
Similarly, it is worth noting that Frege uses the term “thought” to
stand for propositions, so for Frege thoughts are not psychological
states at all.)

The view that concepts are Fregean senses, like the abilities view,
is generally held by philosophers who are opposed to identifying
concepts with mental representations. Peacocke himself doesn't go
so far as to argue that mental representations are explanatorily idle,
but he does think that mental representations are too fine-grained for
philosophical purposes. “It is possible for one and the same
concept to receive different mental representations in different
individuals” (Peacocke 1992, p. 3). He is also concerned that
identifying concepts with mental representations rules out the
possibility of there being concepts that human beings have never
entertained, or couldn't ever entertain:

If we accept that a thinker's possession of a concept must be
realized by some subpersonal state involving a mental representation,
why not say simply that the concept is the mental representation? Just
this proposal is made by Margolis and Laurence (1999, 77). Mental
representations that are concepts could even be typed by the
corresponding possession condition of the sort I favour. This seems to
me an entirely legitimate notion of a kind of mental representation;
but it is not quite the notion of a concept. It can, for instance, be
true that there are concepts human beings may never acquire, because of
their intellectual limitations, or because the sun will expand to
eradicate human life before humans reach a stage at which they can
acquire these concepts. ‘There are concepts that will never be
acquired’ cannot mean or imply ‘There are mental
representations which are not mental representations in anyone's
mind’. If concepts are individuated by their possession
conditions, on the other hand, there is no problem about the existence
of concepts that will never be acquired. They are simply concepts whose
possession conditions will never be satisfied by any thinkers.
(Peacocke 2005, p. 169).

Advocates of the mental representation view would respond to these
arguments by invoking the type/token distinction with respect to mental
representations. According to the mental representation view, concepts
that haven't been acquired are just representations of a type
that have never been tokened (Margolis & Laurence 2007).

Critics of the sense-based view have questioned the utility of
appealing to such abstract objects (Quine 1960). One difficulty stems
from the fact that senses, as abstract entities, stand outside of the
causal realm. The question then is how we can access these objects.
Advocates of the Fregean sense view describe our access to senses by
means of the metaphor of “grasping”—we are said to
grasp the sense of an expression. But grasping here is just a metaphor
for a cognitive relation that needs to be explicated. Moreover, though
senses are hypothesized as providing different modes of presentation
for referents, it is not clear why senses themselves do not generate
the mode of presentation problem (Fodor 1998). Since they are external
to our minds, just as referents typically are, it isn't clear why
we can't stand in different epistemic relations towards them
just as we can to referents. In the same way that we can have different
modes of presentation for a number (the only even prime, the sum of one
and one, Tim's favorite number, etc.), we ought to be able to
have different modes of presentation for a given sense.

Stepping back from the details of these three views, there is no
reason, in principle, why the different views of concepts
couldn't be combined in various ways. For instance, one might
maintain that concepts are mental representations that are typed in
terms of the Fregean senses they express. For this reason alone,
it's fair to wonder whether the dispute about ontology is a
substantive dispute. Perhaps there is only a terminological issue about
which things ought to be granted the label “concepts”. If
so, why not just call mental representations
“concepts1”, the relevant abilities
“concepts2”, senses
“concepts3”, and leave it at that?

However, the participants in the dispute don't generally view
it as a terminological one. Perhaps this is because they associate
their own theories of concepts with large-scale commitments about the
way that philosophers should approach the study of mind and language.
Undoubtedly, from Dummett's perspective, philosophers who embrace
the mental representation view also embrace RTM, and RTM, as he sees
it, is fundamentally misguided. Likewise, from Fodor's
perspective, RTM is critical to the study of the mind, so an approach
like Dummett's, which disallows RTM, places inappropriate a
priori constraints on the study of the mind.

These differences in perspective remain present once a more
fine-grained terminology is adopted. For example, it would still be a
matter of dispute whether there are mental representations and whether
they can do the sorts of explanatory work that proponents of RTM
require of them or whether these explanatory roles provide the most
important or coherent cluster of roles associated with the term
“concept”. Previously, these issues would have found
expression by posing the question of whether concepts are mental
representations. However, if we adopt the proposed new terminology,
much the same set of issues would arise concerning the nature and
existence of the various more fine-grained
categories—concepts1, concepts2, and
concepts3.

Just as thoughts are composed of more basic, word-sized concepts, so
these word-sized concepts—known as lexical
concepts—are generally thought to be composed of even more
basic concepts. In this section, we look at different proposals about
the structure of lexical concepts (see Margolis & Laurence 1999 for
different approaches to the issue of conceptual structure).

In one way or another, all theories regarding the structure of
concepts are developments of, or reactions to, the classical theory
of concepts. According to the classical theory, a lexical concept
C has definitional structure in that it is composed of simpler
concepts that express necessary and sufficient conditions for falling
under C. The stock example is the concept BACHELOR, which is
traditionally said to have the constituents UNMARRIED and MAN. If the
example is taken at face value, the idea is that something falls under
BACHELOR if it is an unmarried man and only if it is an unmarried man.
According to the classical theory, lexical concepts generally will
exhibit this same sort of definitional structure. This includes such
philosophically interesting concepts as TRUTH, GOODNESS, FREEDOM,
and JUSTICE.

Before turning to other theories of conceptual structure, it's
worth pausing to see what's so appealing about classical or
definitional structure. Much of its appeal comes from the way it offers
unified treatments of concept acquisition, categorization, and
reference determination. In each case, the crucial work is being done
by the very same components. Concept acquisition can be understood as a
process in which new complex concepts are created by assembling their
definitional constituents. Categorization can be understood as a
psychological process in which a complex concept is matched to a target
item by checking to see if each and every one of its definitional
constituents applies to the target. And reference determination,
we've already seen, is a matter of whether the definitional
constituents do apply to the target.
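The unified classical treatment can be made concrete with a minimal
sketch (in Python; the definitions and feature names are the stock
example rendered as illustrative assumptions, not defended analyses).
Categorization is modeled as checking whether each and every
definitional constituent applies to the target:

```python
# A minimal sketch of the classical theory's unified treatment: a concept's
# definitional constituents do triple duty in acquisition, categorization,
# and reference determination. The definitions are illustrative only.

definitions = {
    "BACHELOR": {"UNMARRIED", "MAN"},  # the stock example
    "MAN": {"MALE", "ADULT"},
}

def falls_under(item_features, concept):
    """Categorization: check that each and every definitional constituent
    applies to the target, recursing into constituents that are themselves
    defined."""
    for constituent in definitions[concept]:
        if constituent in definitions:            # complex constituent: recurse
            if not falls_under(item_features, constituent):
                return False
        elif constituent not in item_features:    # primitive constituent: check directly
            return False
    return True

alfred = {"MALE", "ADULT", "UNMARRIED"}
bob = {"MALE", "ADULT"}
print(falls_under(alfred, "BACHELOR"))  # True: every constituent applies
print(falls_under(bob, "BACHELOR"))     # False: UNMARRIED fails
```

On this picture, acquiring BACHELOR would amount to assembling its entry
from UNMARRIED and MAN, and reference determination falls out of the
very same constituent check.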

These considerations alone would be enough to show why the classical
theory has been held in such high regard. But the classical theory
receives further motivation through its connection with a philosophical
method that goes back to antiquity and that continues to exert its
influence over contemporary thought. This is the method of
conceptual analysis. Paradigmatic conceptual analyses offer
definitions of concepts that are to be tested against potential
counterexamples that are identified via thought experiments. Conceptual
analysis is supposed to be a distinctively a priori activity that many
take to be the essence of philosophy. To the extent that paradigmatic
conceptual analyses are available and successful, this will convey
support for the classical theory. Conversely, if the definitions
aren't there to be discovered, this would seem to put in jeopardy
a venerable view of what philosophy is and how philosophical
investigations ought to proceed (see
section 5 below).

The classical theory has come under considerable pressure in the
last thirty years or so, not just in philosophy but in psychology and
other fields as well. For psychologists, the main problem has been that
the classical theory has difficulty explaining a
robust set of empirical findings. At the center of this work is the
discovery that certain items are taken to be more representative or
typical of a category and that typicality scores correlate with a wide variety of
psychological data (for reviews, see Smith & Medin 1981, Murphy
2002). For instance, apples are judged to be more typical than plums
with respect to the category of fruit, and correspondingly apples are
judged to have more features in common with fruit. There are many other
findings of this kind. One other is that more typical items are
categorized more efficiently. For example, subjects are quicker to
judge that apples are a kind of fruit than to judge that plums are. The
problem isn't that the classical theory is inconsistent with
results like these but that it does nothing to explain them.

In philosophy, the classical theory has been subjected to a number of
criticisms but perhaps the most fundamental is that attempts to
specify definitions for concepts have a poor track record. Quite
simply, there are too few examples of successful definitional
analyses, and certainly none that are uncontroversial (Wittgenstein
1953/1958, Fodor 1981). The huge literature on the analysis of
knowledge is representative of the state of things. Since Edmund
Gettier (1963) first challenged the traditional definition of
KNOWLEDGE (as JUSTIFIED TRUE BELIEF), there has been widespread
agreement among philosophers that the traditional definition is
incorrect or at least incomplete (e.g., Dancy 1985). But no one can
seem to agree on what the correct definition is. Despite the enormous
amount of effort that has gone into the matter, and the dozens of
papers written on the issue, we are still lacking a satisfactory and
complete definition. It could be that the problem is that definitions
are hard to come by. But another possibility—one that many
philosophers are now taking seriously—is that our concepts lack
definitional structure.

What other type of structure could they have? A non-classical
alternative that emerged in the 1970s is the prototype theory.
According to this theory, a lexical concept C doesn't
have definitional structure but has probabilistic structure in that
something falls under C just in case it satisfies a sufficient
number of properties encoded by C's constituents. The
prototype theory has its philosophical roots in Wittgenstein's
(1953/1958) famous remark that the things covered by a term often share
a family resemblance, and it has its psychological roots in Eleanor
Rosch's experimental treatment of much the same idea (Rosch &
Mervis 1975, Rosch 1978). The prototype theory is especially at home in
dealing with the typicality effects that were left unexplained by the
classical theory. One standard strategy is to maintain that, on the
prototype theory, categorization is to be understood as a similarity
comparison process, where similarity is computed as a function of the
number of constituents that two concepts hold in common. On this model,
the reason apples are judged to be more typical than plums is that the
concept APPLE shares more of its constituents with FRUIT. Likewise,
this is why apples are judged to be a kind of fruit faster than plums
are.
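This similarity-comparison model admits of a simple illustration (a
Python sketch; the feature sets are invented for the example and make
no empirical claim about actual prototypes):

```python
# A minimal sketch of prototype-based categorization as similarity
# comparison, with similarity computed from shared constituents.
# The feature sets are illustrative assumptions.

FRUIT = {"sweet", "has seeds", "grows on trees", "eaten raw", "round"}
APPLE = {"sweet", "has seeds", "grows on trees", "eaten raw", "round", "crisp"}
PLUM  = {"sweet", "has seeds", "grows on trees", "soft"}

def similarity(concept, prototype):
    """Similarity as the number of constituents held in common (real
    models typically also weight features and penalize mismatches)."""
    return len(concept & prototype)

# APPLE shares more constituents with FRUIT than PLUM does, modeling why
# apples are judged more typical and are categorized as fruit faster:
print(similarity(APPLE, FRUIT))  # 5
print(similarity(PLUM, FRUIT))   # 3
```

The same overlap score does double duty in the toy computation,
standing in for both the typicality judgment and the speed of
categorization.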

The prototype theory does well in accounting for a variety of
psychological phenomena and it helps to explain why definitions may be
so hard to produce. But the prototype theory has its own problems and
limitations. One is that its treatment of categorization works best
for quick and unreflective judgments. Yet when it comes to more
reflective judgments, people go beyond the outcome of a similarity
comparison. If asked whether a dog that is surgically altered to look
like a raccoon is a dog or a raccoon, the answer for most of us, and
even for children, is that it remains a dog (see Keil 1989, Gelman
2003 for discussion). Another criticism that has been raised against
taking concepts to have prototype structure concerns
compositionality. When a patently complex concept has a prototype
structure, it often has emergent properties, ones that don't derive
from the prototypes of its constituents (e.g., PET FISH encodes
properties such as brightly colored, which have no basis in the
prototype structure for either PET or FISH). Further, many patently
complex concepts don't even have a prototype structure (e.g., CHAIRS
THAT WERE PURCHASED ON A WEDNESDAY) (Fodor & Lepore 1996, Fodor
1998; for responses to the arguments from compositionality, see Prinz
2002, Robbins 2002, Hampton & Jönsson 2011).

One general solution that addresses all of these problems is to hold that a prototype
constitutes just part of the structure of a concept. In addition,
concepts have conceptual cores, which specify the information
relevant to more considered judgments and which underwrite
compositional processes. Of course, this just raises the question of
what sort of structure conceptual cores have. One common suggestion is
that conceptual cores have classical structure (Osherson & Smith
1981, Landau 1982). This won't do, however, since it just raises
once again most of the problems associated with the classical theory
(Laurence & Margolis 1999).

Another and currently more popular suggestion is that cores are best
understood in terms of the theory theory of concepts. This is
the view that concepts stand in relation to one another in the same way
as the terms of a scientific theory and that categorization is a
process that strongly resembles scientific theorizing (see, e.g., Carey
1985, 2009, Gopnik & Meltzoff 1997, Keil 1989). It's generally
assumed, as well, that the terms of a scientific theory are
interdefined so that a theoretical term's content is determined
by its unique role in the theory in which it occurs.

The theory theory is especially well-suited to explaining the sorts of
reflective categorization judgments that proved to be difficult for
the prototype theory. For example, theory theorists maintain that
children override perceptual similarity in assessing the situation
where the dog is made to look like a raccoon, claiming that even
children are in possession of a rudimentary biological theory. This
theory, an early form of folk biology, tells them that being a dog
isn't just a matter of looking like a dog. More important is having
the appropriate hidden properties of dogs—the dog essence (see
Atran & Medin 2008 on folkbiology). Another advantage of the
theory theory is that it is supposed to help explain important aspects
of conceptual development. Conceptual change in childhood is said to
follow the same pattern as theory change in science.

One problem that has been raised against the theory theory is that it
has difficulty in allowing for different people to possess the same
concepts (or even for the same person to have the same concept over
time). The reason is that the theory theory is holistic. A
concept's content is determined by its role in a theory, not by its
being composed of just a handful of constituents. Since beliefs that
enter people's mental theories are likely to be different from one
another (and are likely to change), there may be no principled basis
for comparison (Fodor & Lepore 1992). Another problem with the
theory theory concerns the analogy to theory change in science. The
analogy suggests that children undergo radical conceptual
reorganization in development, but many of the central case studies
have proved to be controversial on empirical grounds, with evidence
that the relevant concepts are implicated in core knowledge systems
that are enriched in development but not fundamentally altered (see
Spelke 1994 on core knowledge). However, there are certain specific
examples where radical conceptual reorganization is plausible, for
instance, when children eventually develop a theory of matter that
allows them to differentiate weight from density, and air from nothing
(Carey 2009).

A radical alternative to all of the theories we've mentioned
so far is conceptual atomism, the view that lexical concepts
have no semantic structure (Fodor 1998, Millikan 2000). According to
conceptual atomism, the content of a concept isn't determined by
its relation to other concepts but by its relation to the world.

Conceptual atomism follows in the anti-descriptivist tradition that
traces back to Saul Kripke, Hilary Putnam, and others working in the
philosophy of language (see Kripke 1972/80, Putnam 1975, Devitt 1981).
Kripke, for example, argues that proper names function like mere tags
in that they have no descriptive content (Kripke 1972/80). On a
description theory one might suppose that “Gödel”
means something like the discoverer of the incompleteness of
arithmetic. But Kripke points out that we could discover that Schmidt
really discovered the incompleteness of arithmetic and that Gödel
could have killed Schmidt and passed the work off as his own. The
point is that if the description theory were correct, we would be
referring to Schmidt when we say “Gödel”. But
intuitively that's not the case at all. In the imagined scenario, the
sentence “Gödel discovered the incompleteness of
arithmetic” is saying something false about Gödel, not
something trivially true about the discoverer of the incompleteness of
arithmetic, whoever that might be (though see Machery et al. 2004 on
whether this intuition is universal). Kripke's alternative account of
names is that they achieve their reference by standing in a causal
relation to their referents. Conceptual atomism employs a similar
strategy while extending the model to all sorts of concepts, not just
ones for proper names.

At present, the nature of conceptual structure remains unsettled.
Perhaps part of the problem is that more attention needs to be given
to the question of what explanatory work conceptual structure is
supposed to do and the possibility that there are different types of
structure associated with different explanatory functions. We've seen
that conceptual structure is invoked to explain, among other things,
typicality effects, reflective categorization, cognitive development,
reference determination, and compositionality. But there is no reason
to assume that a single type of structure can explain all of these
things. As a result, there is no reason why philosophers shouldn't
maintain that concepts have different types of structure. For example,
notice that atomism is largely motivated by anti-descriptivism. In
effect, the atomist maintains that considerable psychological
variability is consistent with concepts entering into the same
mind-world causal relations, and that it's the latter that determines
a concept's reference. But just because the mechanisms of reference
determination permit considerable psychological variability doesn't
mean that there aren't, in fact, significant patterns for
psychologists to uncover. On the contrary, the evidence for typicality
effects is impressive by any measure. For this reason, it isn't
unreasonable to claim that concepts do have prototype structure even
if that structure has nothing to do with the determination of a
concept's referent. Similar considerations suggest that concepts may
have theory-structure and perhaps other types of structure as well
(see Laurence & Margolis 1999 on different types of conceptual
structure).

One way of responding to the plurality of conceptual structures is to
suppose that concepts have multiple types of structure. This is the
central idea behind conceptual pluralism. According to one
version of conceptual pluralism, suggested by Laurence & Margolis
(1999), a given concept will have a variety of different types of
structure associated with it as components of the concept in question.
For example, concepts may have atomic cores that are linked to
prototypes, internalized theories, and so on. On this approach, the
different types of structure that are components of a given concept
play different explanatory roles. Reference determination and
compositionality have more to do with the atomic cores themselves and
how they are causally related to things outside of the mind, while
rapid categorization and certain inferences depend on prototype
structure, and more considered inferences and reasoning depend upon
theory structure. Many variants on this general proposal are
possible, but the basic idea is that, while concepts have a plurality
of different types of structure with different explanatory roles, this
differing structure remains unified through the links to an atomic
representation that provides a concept's reference. One challenge for
this type of account is to delineate which of the cognitive resources
that are associated with a concept should be counted as part of its
structure and which should not. As a general framework, the account
is neutral regarding this question, but as the framework is filled in,
clarification will be needed regarding the status of potential types
of structure.
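As a rough illustration of how this first form of pluralism might
organize a concept's components, consider the following sketch (in
Python; the field names and their contents are illustrative assumptions
about what the linked structures might contain, not a proposal from the
literature):

```python
# A rough sketch of the first form of conceptual pluralism: one concept
# whose atomic core is linked to further structures with different
# explanatory roles. All fields and contents are illustrative.

from dataclasses import dataclass, field

@dataclass
class Concept:
    core: str                  # atomic core: reference fixed by mind-world causal relations
    prototype: set = field(default_factory=set)   # rapid categorization, quick inference
    theory: dict = field(default_factory=dict)    # more considered inference and reasoning

CAT = Concept(
    core="CAT",  # compositional processes and reference go through the core
    prototype={"furry", "four-legged", "meows", "kept as a pet"},
    theory={"kind": "animal", "deeper nature": "cat-specific biology, not appearance"},
)
```

The sketch also makes the delineation challenge vivid: nothing in the
framework itself says which further fields (images, skills, and so on)
belong inside the concept and which are merely associated with it.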

A different form of pluralism about conceptual structure doesn't
employ atomic cores but simply says that the prototype, theory,
etc. are all themselves concepts (Weiskopf 2009). Rather than holding
that a single concept (e.g., the concept CAT) has multiple types of
structure as components, as in the first form of pluralism, this form
takes each type of structure to be a concept on its own,
resulting in a plurality of concepts (CAT1,
CAT2, CAT3, etc.). On this view, it is wrong to
suppose that there is such a thing as the concept CAT.
Instead, there are many cat-concepts, each with a different type of
structure, where each is involved in just a subset of the high-level
psychological processes associated with cats. CAT1, for
example, might explain some instances of categorization and some
inferences, while CAT2, CAT3, etc. explain
others. What's more, on this form of pluralism, people might also
differ with respect to which kinds of cat-concepts they possess. And
even if two people have a cat-concept with the same general type of
structure (e.g., prototype structure), the concepts might still be
rather different (treating prototypical cats as having rather
different sorts of properties). One challenge facing this version of
pluralism is to explain why all of the different cat-concepts count as
cat-concepts—that is, to explain what unifies the plurality of
cat-concepts. A natural answer to this challenge is that what unifies
them is that they all refer to the same category, the category of
cats. But it is not so clear that they can all refer to the same
category given the differences between the different cat-concepts and
the way that they function in cognition. For example, a standard
prototype structure would capture prototypical cats and exclude the
highly unusual, atypical cats that a theory structure would cover, and
consequently the two concepts would refer to distinct (though related)
categories.

In all of its forms, pluralism about conceptual structure recognizes
that concepts have diverse functions and that a corresponding variety
of types of representations are needed to fulfill these functions.
These same considerations have led some theorists to
advocate concept eliminativism—the view that there are no
concepts (Machery 2009). The reasoning behind concept eliminativism
is that concept should be understood to be a natural kind if concepts
exist at all, and that natural kinds ought to have significant
commonalities that can be discovered using empirical methods,
including commonalities that go well beyond the criteria that are
initially used to characterize them. But according to concept
eliminativists, there are no such commonalities that hold among the
types of representations that pluralists embrace. Perhaps we need
prototypes and theories and other types of representations for
distinct higher-level cognitive processes, but they are too diverse to
warrant the claim that they constitute a single kind. On this view,
then, we should simply abandon the theoretical construct of a concept
and refer only to more fine-grained types of representations, such as
prototypes and theories. Opponents of concept eliminativism have
responded to the eliminativist's challenge in a number of ways. Some have argued that Machery's criteria for elimination are simply too strong and that concept, understood as a higher-level kind or perhaps a functional kind, has great utility in psychological models of cognitive processes (e.g., Hampton 2010, Lalumera 2010, Strohminger & Moore 2010). Others have argued that Machery's criteria for something's being a natural kind are too restrictive and that his view would have the consequence of ruling out clear cases of legitimate higher-level kinds in science generally (e.g., Gonnerman & Weinberg 2010, Margolis & Laurence 2010). And others have argued that even if we grant Machery's stringent criteria for being a natural kind, elimination wouldn't follow, as concepts are natural kinds according to his criteria (Samuels & Ferreira 2010, Weiskopf 2010). (For further critical discussion of eliminativism, see the peer commentary that appears with Machery 2010 and the author's
response.)

One of the oldest questions about concepts concerns whether there
are any innate concepts and, if so, how much of the conceptual system
is innate. Empiricists maintain that there are few if any innate
concepts and that most cognitive capacities are acquired on the basis
of a few relatively simple general-purpose cognitive mechanisms.
Nativists, on the other hand, maintain that there may be many innate
concepts and that the mind has a great deal of innate differentiation
into complex domain-specific subsystems.

In recent years, the debate over innate concepts has been
reinvigorated as advances in cognitive science have provided
philosophers with new tools for revisiting and refining the
traditional dispute (see, e.g., Pinker 1994, Elman et al. 1996,
Carruthers, Laurence, & Stich 2005, 2006, 2007). Philosophers have
greatly benefited from empirical studies in such diverse fields as
developmental psychology, evolutionary psychology, cognitive
anthropology, neuroscience, linguistics, and ethology. Part of the
philosophical interest of this work is that, while the scientists
themselves take sides on the empiricist-nativist dispute, their
theories and data are often open to interpretation.

As an example, one of the earliest lines of investigation that
appeared to support traditional nativist conceptions of the mind was
the study of language (Pinker 1994). Noam Chomsky and his followers
argued that language acquisition succeeds even though children are
only exposed to severely limited evidence about the structure of their
language (Chomsky 1967, 1975, 1988; see also Laurence & Margolis
2001). Given the way that the final state (e.g., knowledge of English)
outstrips the data that are available to children, we can only
postulate that the human mind brings to language acquisition a complex
set of language-specific dispositions. For Chomsky, these dispositions
are grounded in a set of innate principles that constrain all possible
human natural languages, viz., universal grammar (see Baker 2001 on
universal grammar).

Not surprisingly, many philosophers have questioned Chomsky's
position. The ensuing debate has helped to sharpen the crucial
arguments and the extent to which nativist models should continue to
command their central place in linguistic theory. (On the empiricist
side, see Cowie 1999, Prinz 2002, and Sampson 2005; on the
nativist side, see Laurence & Margolis 2001 and Crain &
Pietroski 2001; see also the entry
innateness and language.) For
instance, one of Fiona Cowie's criticisms of Chomsky's
poverty of the stimulus argument is that any induction establishes a
conclusion that outstrips the available data; hence, going beyond the
data in the case of language acquisition doesn't argue for innate
language-specific dispositions—or else there would have
to be a specific innate disposition for every induction we make (for an
earlier version of this argument, see Putnam 1967, Goodman 1969). Both
Laurence & Margolis and Crain & Pietroski respond by teasing
out the various ways in which the problem of language acquisition goes
beyond general problems about induction.

Traditionally, empiricists have argued that all concepts derive from
sensations. Concepts were understood to be formed from copies of
sensory representations and assembled in accordance with a set of
general-purpose learning rules, e.g., Hume's principles of
association (Hume 1739/1978). On this view, the content of any concept
must be analyzable in terms of its perceptual basis. Any purported
concept that fails this test embodies a confusion. Thus David Hume ends
his Enquiry with the famous remark:

When we run over libraries, persuaded of these principles, what
havoc must we make? If we take in our hand any volume; of divinity or
school metaphysics, for instance; let us ask, Does it contain any
abstract reasoning concerning quantity or number? No. Does it
contain any experimental reasoning concerning matter of fact and
existence? No. Commit it then to the flames: For it can contain
nothing but sophistry and illusion. (1748/1975, p. 165)

A similar doctrine was maintained by the logical positivists in the
early twentieth century, though the positivists couched the view in
linguistic terms (Ayer 1959). Their principle of verification required,
for a sentence or statement to be meaningful, that it have empirical
consequences; on some formulations of the principle, the meaning of a
sentence just is the empirical procedure for confirming it
(see the entry Vienna Circle).
Sentences that have no empirical consequences were deemed
to be meaningless. Since a good deal of philosophy purports to express
propositions that transcend all possible experience, the positivists
were happy to say that these philosophical doctrines are entirely
devoid of content and are composed of sentences that aren't merely
false but are literally gibberish.

Despite the current unpopularity of verificationism (though see
Dummett 1993, Wright 1989, and Dennett 1991), a growing number of
philosophers are attracted to modified forms of empiricism, forms that
primarily emphasize psychological relations between the conceptual
system and perceptual and motor states, not semantic relations. An example
is Lawrence Shapiro's defense of the claim that the type of body
that an organism has profoundly affects its cognitive operations as
well as the way that the organism is likely to conceptualize the world
(Shapiro 2004). Shapiro's claim is directed against philosophical
theories that willfully ignore contingent facts about human bodies as
if a human mind could inhere in wildly different body types. Drawing on
a number of empirical research programs, Shapiro cites examples that
appear to support what he calls the embodied mind thesis,
viz., that “minds profoundly reflect the bodies in which they are
contained” (Shapiro 2004, p. 167).

Jesse Prinz (2002) also defends a modified form of empiricism. Prinz
claims that “all (human) concepts are copies or combinations of
copies of perceptual representations” (Prinz 2002, p. 108).
Though the reference to copies is a nod to Hume, Prinz certainly
doesn't buy into Hume's verificationism. In fact, Prinz adopts a
causal theory of content of the kind that is usually associated with
atomistic theories of concepts (e.g., Fodor 1998); thus Prinz's theory
of intentional content doesn't require a concept to inherit the
specifically perceptual content of its constituents. Nonetheless,
Prinz thinks that every concept derives from perceptual
representations. Perhaps the best way to understand the claim is that
the mental representations that are activated when someone thinks
about something—no matter what the thought—are
representations that originate in neural circuits with perceptual or
motor functions and that the mental process is affected by that
origin. Suppose, for example, that one is thinking about a
hammer. Then she is either activating representations that inhere in
visual circuits, or representations involved in circuits that control
hand shape, etc., and her thought is affected in some way by the
primary function of these circuits. Following Lawrence Barsalou (1999;
see also Barsalou et al. 2003), Prinz characterizes concept possession as a
kind of simulation “tantamount to entering a perceptual state of
the kind one would be in if one were to experience the thing it
represents” (Prinz 2002, p. 150).

One challenge to this view of cognition is its implication for
abstract concepts. It's one thing to say that the concept HAMMER
involves the activation of circuits related to hand shape; it's quite
another to identify significant modality-specific representations
underlying such concepts as TRUTH, DEMOCRACY, ENTROPY, and NINETEEN
(Adams & Campbell 1999, Brewer 1999). Logical concepts are also a
challenge. Prinz suggests that the concept of disjunction has a
perceptual basis in feelings of hesitation. However, his
more considered view seems to be that logical concepts are best
understood as operations, not representations. The resulting theory is
one in which thoughts lack logical form. The trouble is that this
makes it difficult to see how to distinguish logically equivalent
thoughts. A related problem is that, since composition for Prinz does
not yield structurally complex representations, there seems to be
nothing to distinguish the type of contents associated with judgements
(propositional contents) from those associated with lists or even single
concepts (for related discussion, see Fodor & Pylyshyn 1988). Finally,
there are difficulties regarding how to interpret behavioral and
neurological evidence that is supposed to support Prinz and Barsalou's
case against amodal representations. For example, Machery (2007)
points out that proponents of amodal representations typically suppose
that imagery is useful in solving certain types of problems. So to
argue against amodal representations, it is not enough to show that
modal representations show up in a task in which experimental subjects
are not explicitly told to visualize a solution. (For further
critical discussion of the form of empiricism that is opposed to
amodal representations, see Weiskopf 2007, Mahon & Caramazza 2008,
and Dove 2009).

Perhaps the most influential discussion of concepts in relation to the
nativism/empiricism debate is Jerry Fodor's (1975, 1981) argument for
the claim that virtually all lexical concepts are innate. Fodor
(1975) argued that there are theoretical problems with all models of
concept learning in that all such models treat concept learning as
hypothesis testing. The problem is that the correct hypothesis
invariably employs the very concept to be learned and hence the
concept has to be available to a learner prior to the learning taking
place. In his (1981), Fodor developed this argument by allowing that
complex concepts (and only complex concepts) can be learned in that
they can be assembled from their constituents during the learning
process. He went on to argue that lexical concepts lack semantic
structure and consequently that virtually all lexical concepts must be
innate—a position known as radical concept
nativism. Fodor's arguments have had a great deal of influence on
debates about nativism and concept learning, especially amongst
cognitive scientists. Few if any have endorsed Fodor's radical
conclusions, but many have shaped their views of cognitive development
at least in part in response to Fodor's arguments (Jackendoff 1989,
Levin & Pinker 1991, Spelke & Tsivkin 2001, Carey 2009). And
Fodor has convinced many that primitive concepts are in principle
unlearnable (see, e.g., Pinker 2007). Fodor's arguments for this
conclusion, however, can be challenged in a number of ways. The most
direct is to construct an account of what it is to
learn a primitive concept and to show that it is immune to Fodor's
challenges (Margolis 1998, Laurence & Margolis 2002, Carey
2009).

Fodor's own views on these issues have recently changed as well. He
now maintains that while considerations about the need for hypothesis
testing show that no concepts can be learned, not even complex
concepts, this does not require concepts to be innate (Fodor 2008).
Instead, Fodor suggests that they are acquired via processes that are
largely biological in that they don't admit of a psychological-level
description. Though a biological account of concept acquisition does
offer an alternative to the innate/learned dichotomy, there are
reasons for supposing that many concepts are learned all the same
(Margolis & Laurence forthcoming). These include the fact that a
person's conceptual system is highly sensitive to the surrounding
culture. For example, the concept PURGATORY comes from cultural
products such as books, stories, and sermons. But clearly these can
only succeed in conveying the concept when mediated by the right sort
of psychological processes. Acquiring such concepts is a
cognitive-level achievement, not a merely biological one.

One further issue concerning innate concepts that is in dispute is
whether the very idea of innateness makes sense. A common point among
those who are skeptical of the notion is the observation that all
traits are dependent upon interactions between genes and the
environment and that there is no way to fully untangle the two (Elman
et al. 1996, Griffiths 2002; see also Clark 1998 and Marcus 2004, and
the entry on the
distinction between innate and acquired characteristics).
Nonetheless, there are clear
differences between models of the mind with empiricist leanings and
models of the mind with nativist leanings, and the notion of
innateness may be thought to earn its usefulness by marking these
differences. For discussion of different proposals of what innateness
is see Ariew (1999), Cowie (1999), Samuels (2002), Mallon &
Weinberg (2006), and Khalidi (2007).

Some philosophers maintain that possession of natural language is
necessary for having any concepts (Brandom 1994, Davidson 1975, Dummett
1993) and that the tight connection between the two can be established
on a priori grounds. In a well-known passage, Donald Davidson
summarizes his position as follows:

We have the idea of belief only from the role of belief in the
interpretation of language, for as a private attitude it is not
intelligible except as an adjustment to the public norm provided by
language. It follows that a creature must be a member of a speech
community if it is to have the concept of belief. And given the
dependence of other attitudes on belief, we can say more generally that
only a creature that can interpret speech can have the concept of a
thought.

Can a creature have a belief if it does not have the concept of belief?
It seems to me it cannot, and for this reason. Someone cannot have a
belief unless he understands the possibility of being mistaken, and
this requires grasping the contrast between truth and error—true
belief and false belief. But this contrast, I have argued, can emerge
only in the context of interpretation, which alone forces us to the
idea of an objective, public truth. (Davidson 1975, p. 170).

The argument links having beliefs and concepts with having the concept
of belief. Since Davidson thinks that non-linguistic creatures can't
have the concept of belief, they can't have other concepts as
well. Why the concept of belief is needed to have other concepts is
somewhat obscure in Davidson's writings (Carruthers 1992). And whether
language is necessary for this particular concept is not obvious. In
fact, there is an ongoing research program in cognitive science that
addresses this very issue. A variety of non-linguistic tasks have been
given to animals and infants to determine the extent to which they are
able to attribute mental states to others (see Tomasello, Call, &
Hare 2003 for work on chimpanzees and Onishi & Baillargeon 2005
for work on infants; see also Bloom & German 2000). These and
related studies provide strong evidence that at least some aspects of
theory of mind are nonlinguistic.

Davidson offers a pair of supplementary arguments that may elucidate
why he is hesitant to turn the issue over to the cognitive scientists.
He gives the example of a man engaging in a non-linguistic task where
the man indicates his answer by making a choice, for example, selecting
an apple over a pear. Davidson comments that until the man actually
says what he has in mind, there will always be a question about the
conceptualization guiding his choice. “Repeated tests may make
some readings of his actions more plausible than others, but the
problem will remain how to determine when he judges two objects of
choice to be identical” (1975, p. 163). The second argument
points to the difficulties of settling upon a specification of what a
non-linguistic creature is thinking. “The dog, we say, knows that
its master is home. But does it know that Mr. Smith (who is the master)
is home? We have no real idea how to settle, or make sense of, these
questions” (1975, p. 163). It's not clear how seriously
Davidson himself takes these arguments. Many philosophers have been
unconvinced. Notice that both arguments turn on an
underdetermination claim—e.g., that the interpretation of the
man's action is underdetermined by the non-linguistic evidence.
But much the same thing is true even if we add what the man says (or to
be more precise, if we add what the man utters). The linguistic
evidence doesn't guarantee a correct interpretation any more than
the non-linguistic evidence does.

Davidson appears to be employing a very high standard for attributing
concepts to animals. In effect, he is asking for proof that our
attributions are correct. In contrast, most philosophers who are happy
to attribute concepts to animals do so because of a wealth of data
that are best explained by appealing to an internal system of
representation (e.g., Bermudez 2003; for overviews within cognitive
science, see Gallistel 1990, Hauser 2000, Bekoff, Allen, &
Burghardt 2002, and Shettleworth 2010). For example, many species of
birds cache food for later retrieval. Their very survival depends upon
their ability to successfully recover, in some cases, more than 10,000
different caches in a single season. Researchers studying one species
of caching birds have shown that not only do the birds represent the
location of the food, but they integrate this information with
information about the quality of the food, its perishability, and
whether their caching was observed by other birds. Evidence here comes
from demonstrations of selective retrieval and recaching of food items
under experimentally controlled conditions. Birds will retrieve more
perishable items first. When highly valued food items become highly
perishable, they shift strategies to retrieve a higher percentage of
less perishable food items. And birds that have themselves stolen
food from other birds will selectively recache stored food when they
are observed caching it (see Clayton, Bussey, & Dickinson 2003,
Emery, Dally, & Clayton 2004). Experimental data of this kind
provide evidence for particular concepts in birds (of food types,
locations, and so on) as well as surprisingly sophisticated cognitive
operations that make use of them.
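Though the studies themselves are not framed in computational terms,
the kind of integrated representation they point to can be loosely
pictured as follows (a Python sketch; the fields and the simple
retrieval rules are illustrative assumptions, not a model drawn from
the cited experiments):

```python
# An illustrative sketch (not a model from the cited studies) of the kind
# of integrated cache representation the experiments suggest: location
# bound together with food quality, perishability, and social context.

from dataclasses import dataclass

@dataclass
class Cache:
    location: tuple        # remembered cache site
    quality: int           # how highly the food is valued
    perishability: float   # expected rate of spoilage
    observed: bool         # whether another bird watched the caching

def retrieval_order(caches):
    """Retrieve more perishable items first, breaking ties by quality."""
    return sorted(caches, key=lambda c: (-c.perishability, -c.quality))

def should_recache(cache, has_pilfered_before):
    """Birds with experience of stealing selectively re-hide food that
    was observed being cached."""
    return has_pilfered_before and cache.observed
```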

There is a great deal of controversy among philosophers about the
implications of this type of research. Proponents of RTM are, of
course, entirely happy with the idea that the scientific theories of
what birds are doing can be taken at face value. Other philosophers
maintain that if the scientific theories say that birds are computing
an algorithm for determining a caching strategy, then this can only be
read as a façon de parler. Still others will grant that animals
have representations but go on to claim that these representations are
of a lesser status, not to be confused with concepts (Brandom 1994,
2000, McDowell 1994).

This raises an interesting question about whether there is a motivated
and principled difference between concepts in humans and mere
representations in animals. Philosophers who maintain that there is
such a difference often cite the role of concepts in reasoning. For
example, Robert Brandom claims that representations in animals do
little more than act as reliable mechanisms of discrimination. These
representations are supposed to be like thermometers, responding to
specific environmental features yet without entering into appropriate
inferential processes. However, it's not clear what counts as an
appropriate inferential process, and certainly there is room for
differing opinions on this point. Moreover, whatever reasoning amounts
to, comparative psychology is replete with examples that suggest that
animals are capable of far more than reliable detection. Animals may
not be as smart as humans, but that doesn't mean they are as dumb as
thermometers (see Hurley & Nudds 2006 and Carruthers 2006 on reasoning in animals).

Even if it's agreed that it is possible to have concepts in
the absence of language, there is a dispute about how the two are
related. Some maintain that concepts are prior to and independent of
natural language, and that natural language is just a means for
conveying thought (Fodor 1975, Pinker 1994). Others maintain that at
least some types of thinking (and hence some concepts) occur in the
internal system of representation constituting our natural language
competence (Carruthers 1996, 2002, Spelke 2003).

The arguments for deciding between these two positions involve a
mixture of theoretical and empirical considerations. Proponents of the
first view have claimed that language is ambiguous in ways that thought
presumably is not. For example, the natural language sentence
“everyone loves someone” could be interpreted to mean that
everyone loves someone or other, or to mean that everyone loves one and
the same person (Pinker 1994). Proponents of the first view have also
argued that since language itself has to be learned, thought is prior
to language (Fodor 1975; Pinker 1994). A third and similar
consideration is that people seem to be able to formulate novel
concepts which are only named later; the concept comes first, the name
second (Pinker 1994).
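
The ambiguity in the first consideration is one of quantifier scope,
and it can be made explicit with a first-order rendering of the two
readings (using an illustrative predicate Loves):

    ∀x∃y Loves(x, y)   (everyone loves someone or other)
    ∃y∀x Loves(x, y)   (there is a single person whom everyone loves)

The sentence is compatible with both logical forms, whereas the
thought entertained on a given occasion is presumably one or the
other; this is the sense in which thought is supposed to lack the
ambiguity of the sentence.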

Proponents of the alternative view—that some thinking occurs in
language—have pointed to the phenomenology of thought. It
certainly seems as if we are thinking in language when we
“hear” ourselves silently talking to ourselves (Carruthers
1996). There are also data suggesting that success on certain tasks (e.g., spatial
reorientation that relies on combining landmark information with
geometrical information) is selectively impaired when the linguistic
system is engaged but not when comparable attention is given to
non-linguistic distractors. The suggestion is that solving these tasks
requires thinking in one's natural language and that some of the
crucial concepts must be couched linguistically (Hermer-Vazquez,
Spelke, & Katsnelson 2001; Shusterman & Spelke 2005;
Carruthers 2002).

Finally, one further issue that bears mentioning is the status of
various claims regarding linguistic determinism and linguistic
relativity. Linguistic determinism is the doctrine that the
language a person speaks both causes her to conceptualize the world in
certain ways and limits what she can think about by imposing
boundaries on her conceptual system; as a result, people who speak
very different languages are likely to conceptualize the world in
correspondingly different ways. Linguistic relativity is the
weaker doctrine that the language one speaks influences how one
thinks.

Linguistic determinism is historically associated with the writings of
Benjamin Lee Whorf (Whorf 1956). Whorf was especially interested in
the indigenous languages of the Americas. He famously argued
that the Hopi both speak and think about time in ways that are
incongruent with European languages and thought. Rather than viewing
time as a continuum that flows evenly throughout the universe and that
can be broken up into countable events occurring in the past, present,
and future, the Hopi are supposed to focus on change as a
process. Their conceptual system is also supposed to differ from ours
in that it embodies a distinction between things that are or have been
accessible to perception versus things that are not, where the latter
category includes things in the future as well as mythical and mental
constructs.

The claim that the Hopi lack our concept of time has not stood up to
scrutiny. Whorf used clumsy translations of Hopi speech that concealed
the extent to which they talk about time (references to yesterday,
tomorrow, days of the week, lunar phases, etc.). More interestingly,
Whorf provided no direct evidence of how the Hopi think. Instead, he
used the circular reasoning that they don't think about
time as we do because they don't talk about time as we
do. In fact, the Hopi use numerous familiar devices for timekeeping,
such as calendar strings and sundials, and their sensitivity to time
is evident in a wide variety of cultural practices (Malotki 1983).

Some of the deepest divides in contemporary philosophy concern the
limits of empirical inquiry, the status of conceptual analysis, and
the nature of philosophy itself (see, e.g., Chalmers 1996, Jackson
1998, DePaul & Ramsey 1998, Block & Stalnaker 1999, and
Williamson 2007). And concepts are right at the center of these
disputes. For many, philosophy is essentially the a priori analysis of
concepts, which can and should be done without leaving the proverbial
armchair. We've already seen that in the paradigm case, an analysis
embodies a definition; it specifies a set of conditions that are
individually necessary and jointly sufficient for the application of
the concept. When all goes well, intuitions about cases are supposed to match
the correct analysis perfectly (though generally speaking it's
understood that there may be a tradeoff, where most intuitions have to
match an analysis but where an otherwise successful analysis may lead
to the discrediting of a few intuitions).

Conceptual analysis is attractive to philosophers for a number of
reasons. One is that it makes sense of a good deal of philosophical
practice—what George Bealer (1998) calls the standard
justificatory procedure. Philosophers are always constructing thought
experiments and eliciting intuitions. If this practice makes sense,
then there has to be an understanding of what philosophy is that would
vindicate its utility. Conceptual analysis is supposed to provide just
what's needed here. Intuitions can be said to be of value to
philosophy precisely because they help us to get clearer about our
concepts, especially concepts of intrinsic philosophical interest
(JUSTICE, KNOWLEDGE, etc.).

A related attraction is that conceptual analysis explains why
philosophy can be an a priori discipline, as many suppose it is. If
philosophy is primarily about concepts and concepts can be investigated
from the armchair, then the a priori character of philosophy is secured
(Jackson 1998).

A third attraction is that conceptual analysis has been argued to be
a necessary precursor for answering questions
about ontological reduction, that is, the sort of reduction that takes
place when it's argued that genes are DNA segments, that sensations
are brain states, and so on (Chalmers 1996, Jackson 1998). According
to one way of filling this view out, one has to begin with an a priori
analysis of the higher-level concept, particularly an analysis that
makes explicit its causal relations. One can then appeal to empirical
findings regarding the things that actually have those causal
relations. For example, neuroscience may reveal that such-and-such
brain state has the causal relations that analysis reveals to be
constitutive of our concept of pain. In the course of doing this,
neuroscience is supposed to be showing us what pain is (Lewis 1966,
Armstrong 1968). But neuroscience is only in a position to do this
against the background of the philosophical work that goes into
articulating the concept. (For detailed treatments of this view of
reduction, see Chalmers 1996 and Jackson 1998—though it should
be noted that Chalmers argues that PAIN, and other concepts of
conscious mental states, cannot be analyzed solely in terms of their
causal relations and concludes from this that consciousness itself is
irreducible.) This work has generated a great deal of debate (e.g.,
Block & Stalnaker 1999, Yablo 2000, Papineau 2002).

Finally, a fourth attraction is that conceptual analysis may offer
normative guidance (Goldman 1986). For instance, epistemologists face
the question of whether our inferential practices are justified and, if
so, what justifies them. One standard answer is that they can be
justified if they conform to our intuitions about what counts as a
justified inference (Goldman 1986). In other words, an analysis of our
concept of justification is supposed to be all that is needed in order
to establish that a set of inference rules is justified. So if it ever
turned out that different groups of people employed qualitatively
different sets of inferential principles, we could establish the
epistemically preferable one by showing that it does a better job of
conforming to our concept of justification.

Many philosophers who are opposed to conceptual analysis identify
their approach as being naturalistic (e.g., Papineau 1993,
Devitt 1996, Kornblith 2002; see also the entry on naturalism).
A common theme of this
work is that philosophy is supposed to be continuous with science and
that philosophical theories are to be defended on largely explanatory
grounds, not on the basis of a priori arguments that appeal to
intuition. Accordingly, perceived difficulties with conceptual
analysis provide arguments for naturalism.

One such argument centers on the failures of the classical theory
of concepts. Earlier, in Section 2, we noted that
paradigmatic conceptual analyses require concepts to have classical
structure, an assumption that is increasingly difficult to
maintain. For this reason, a number of philosophers have expressed
skepticism about the viability of conceptual analysis as a
philosophical method (e.g., Ramsey 1998, Stich 1992). Others, however,
have called into question the connection between conceptual analysis
and definitions (Chalmers & Jackson 2001).

Another objection to conceptual analysis is that the intuitions that
philosophers routinely rely upon may not be shared. Anyone who teaches
philosophy certainly knows that half the time students have the
“wrong intuitions”. But who are we to say that they are
wrong? And given that people disagree about their intuitions, these can
hardly be treated as objective data (Cummins 1998).

Things become even more interesting if we branch out to other
cultures. In a preliminary study of East Asian vs. Western intuitions,
Jonathan Weinberg, Shaun Nichols, & Stephen Stich (2001) found that
East Asians often have the “wrong intuitions” regarding
variations on classic philosophical thought experiments, including
Gettier-type thought experiments. At the very least, this work suggests
that philosophers should be cautious about moving from their own
intuitions to claims about the proper analysis of a concept.

What's more, the cultural diversity that the work in Weinberg et
al. points to raises a troubling question for philosophers who
want to establish normative claims on the basis of analyses of
concepts, such as the concept of justification. Suppose, for example,
that East Asian culture offers a different concept of justification
than the one that is embedded in Western commonsense thought (assuming
for sake of argument that there is a single concept of justification
in each culture). In addition, suppose that East Asians employ
different inferential practices than our own and that their practices
do a fair job of conforming to their concept of justification and that
ours do a fair job of conforming to our own. On what basis, then, are
we to compare and evaluate these differing practices? Does it really
make sense to say that ours are superior on the ground that they
conform better to our concept of justification? Wouldn't this
just be a form of epistemic prejudice? After all, the question arises
whether, given the two concepts of justification, ours is the one that
ought to be used for performing normative epistemic evaluations (Stich
1990; for further discussion see Williamson 2005, Sosa 2009, Stich
2009, Weinberg, Nichols, & Stich 2001, Weinberg et al. 2010).

Much is at stake in the debate between conceptual analysts and
naturalists, and it is likely to be a central topic in the theory of
concepts for the foreseeable future.
