Analytic philosophy

Analytic philosophy, also called linguistic philosophy, a loosely related set of approaches to philosophical problems, dominant in Anglo-American philosophy from the early 20th century, that emphasizes the study of language and the logical analysis of concepts. Although most work in analytic philosophy has been done in Great Britain and the United States, significant contributions also have been made in other countries, notably Australia, New Zealand, and the countries of Scandinavia.

Nature of analytic philosophy

Analytic philosophers conduct conceptual investigations that characteristically, though not invariably, involve studies of the language in which the concepts in question are, or can be, expressed. According to one tradition in analytic philosophy (sometimes referred to as formalism), for example, the definition of a concept can be determined by uncovering the underlying logical structures, or “logical forms,” of the sentences used to express it. A perspicuous representation of these structures in the language of modern symbolic logic, so the formalists thought, would make clear the logically permissible inferences to and from such sentences and thereby establish the logical boundaries of the concept under study. Another tradition, sometimes referred to as informalism, similarly turned to the sentences in which the concept was expressed but instead emphasized their diverse uses in ordinary language and everyday situations, the idea being to elucidate the concept by noting how its various features are reflected in how people actually talk and act. Even among analytic philosophers whose approaches were not essentially either formalist or informalist, philosophical problems were often conceived of as problems about the nature of language. An influential debate in analytic ethics, for example, concerned the question of whether sentences that express moral judgments (e.g., “It is wrong to tell a lie”) are descriptions of some feature of the world, in which case the sentences can be true or false, or are merely expressions of the subject’s feelings—comparable to shouts of “Bravo!” or “Boo!”—in which case they have no truth-value at all. Thus, in this debate the philosophical problem of the nature of right and wrong was treated as a problem about the logical or grammatical status of moral statements.

The empiricist tradition

In spirit, style, and focus, analytic philosophy has strong ties to the tradition of empiricism, which has characterized philosophy in Britain for some centuries, distinguishing it from the rationalism of Continental European philosophy. In fact, the beginning of modern analytic philosophy is usually dated from the time when two of its major figures, Bertrand Russell (1872–1970) and G.E. Moore (1873–1958), rebelled against an antiempiricist idealism that had temporarily captured the English philosophical scene. The most renowned of the British empiricists—John Locke, George Berkeley, David Hume, and John Stuart Mill—have many interests and methods in common with contemporary analytic philosophers. And although analytic philosophers have attacked some of the empiricists’ particular doctrines, one feels that this is the result more of a common interest in certain problems than of any difference in general philosophical outlook.

Most empiricists, though admitting that the senses fail to yield the certainty requisite for knowledge, hold nonetheless that it is only through observation and experimentation that justified beliefs about the world can be gained—in other words, a priori reasoning from self-evident premises cannot reveal how the world is. Accordingly, many empiricists insist on a sharp dichotomy between the physical sciences, which ultimately must verify their theories by observation, and the deductive or a priori sciences—e.g., mathematics and logic—the method of which is the deduction of theorems from axioms. The deductive sciences, in the empiricists’ view, cannot produce justified beliefs, much less knowledge, about the world. This conclusion was a cornerstone of two important early movements in analytic philosophy, logical atomism and logical positivism. In the positivist’s view, for example, the theorems of mathematics do not represent genuine knowledge of a world of mathematical objects but instead are merely the result of working out the consequences of the conventions that govern the use of mathematical symbols.

The question then arises whether philosophy itself is to be assimilated to the empirical or to the a priori sciences. Early empiricists assimilated it to the empirical sciences. Moreover, they were less self-reflective about the methods of philosophy than are contemporary analytic philosophers. Preoccupied with epistemology (the theory of knowledge) and the philosophy of mind, and holding that fundamental facts can be learned about these subjects from individual introspection, early empiricists took their work to be a kind of introspective psychology. Analytic philosophers in the 20th century, on the other hand, were less inclined to appeal ultimately to direct introspection. More important, the development of modern symbolic logic seemed to promise help in solving philosophical problems—and logic is as a priori as a science can be. It seemed, then, that philosophy must be classified with mathematics and logic. The exact nature and proper methodology of philosophy, however, remained in dispute.

For philosophers oriented toward formalism, the advent of modern symbolic logic in the late 19th century was a watershed in the history of philosophy, because it added greatly to the class of statements and inferences that could be represented in formal (i.e., axiomatic) languages. The formal representation of these statements provided insight into their underlying logical structures; at the same time, it helped to dispel certain philosophical puzzles that had been created, in the view of the formalists, through the tendency of earlier philosophers to mistake surface grammatical form for logical form. Because of the similarity of sentences such as “Tigers bite” and “Tigers exist,” for example, the verb to exist may seem to function, as other verbs do, to predicate something of the subject. It may seem, then, that existence is a property of tigers, just as their biting is. In symbolic logic, however, existence is not a property; it is expressed by a higher-order function—the quantifier—that takes so-called “propositional functions” as arguments. Thus, when the propositional function “Tx”—in which T stands for the predicate “…is a tiger” and x is a variable replaceable with a name—is written beside a symbol known as the existential quantifier—∃x, meaning “There exists at least one x such that…”—the result is a sentence that means “There exists at least one x such that x is a tiger.” The fact that existence is not a property in symbolic logic has had important philosophical consequences, one of which has been to show that the ontological argument for the existence of God, which has puzzled philosophers since its invention in the 11th century by St. Anselm of Canterbury, is unsound.
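
The contrast can be written out explicitly. A minimal sketch in modern notation, with T standing for the predicate “…is a tiger” and B for “…bites”:

```latex
% Surface grammar makes the two sentences look parallel,
% but their logical forms differ:
%   "Tigers bite":  biting is predicated of tigers.
%   "Tigers exist": no predicate of existence appears;
%                   "exists" survives only as the quantifier.
\begin{align*}
\text{``Tigers bite''}  &\;:\quad \forall x\,(Tx \rightarrow Bx) \\
\text{``Tigers exist''} &\;:\quad \exists x\,(Tx)
\end{align*}
```

On this rendering there is no individual of which existence is predicated, which is precisely the formalist's point against treating existence as a property.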

Among 19th-century figures who contributed to the development of symbolic logic were the mathematicians George Boole (1815–64), the inventor of Boolean algebra, and Georg Cantor (1845–1918), the creator of set theory. The generally recognized founder of modern symbolic logic is Gottlob Frege (1848–1925), of the University of Jena in Germany. Frege, whose work was not fully appreciated until the mid-20th century, is historically important principally for his influence on Russell, whose program of logicism (the doctrine that the whole of mathematics can be derived from the principles of logic) had been attempted independently by Frege some 25 years before the publication of Russell’s principal logicist works, Principles of Mathematics (1903) and Principia Mathematica (1910–13; written in collaboration with Russell’s colleague at the University of Cambridge, Alfred North Whitehead).

History of analytic philosophy

During the last decades of the 19th century, English philosophy was dominated by an absolute idealism derived from the German philosopher G.W.F. Hegel. For English philosophy this represented a break in an almost continuous tradition of empiricism. As noted above, the seeds of modern analytic philosophy were sown when two of the most important figures in its history, Russell and Moore, broke with idealism at the turn of the 20th century.

Absolute idealism was avowedly metaphysical, in the sense that its adherents thought of themselves as describing, in a way not open to scientists, certain very fundamental truths about the world. Indeed, in their view what passes for truth in the sciences is not really truth at all, for the scientist must, perforce, treat the world as composed of distinct objects and can describe and state only the relationships supposedly holding among them. But the idealists held that to talk about reality as if it were a multiplicity of objects is to falsify it; in the end, only the whole, the absolute, has reality.

In their conclusions and, most important, in their methodology, the idealists were decidedly not on the side of commonsense intuition. The Cambridge philosopher J.M.E. McTaggart, for example, argued that the concept of time is inconsistent and that time therefore is unreal. British empiricism, on the other hand, had generally started with commonsense beliefs and either accepted or at least sought to explain them, using science as the model of the right way in which to investigate the world. Even when their conclusions were out of step with common sense (as was the radical skepticism of David Hume), the empiricists were generally concerned to reconcile the two.

One can hardly claim, however, that analytic philosophers have universally accepted commonsense beliefs, much less that metaphysical conclusions (regarding the ultimate nature of reality) are absent from their writings. But there is in the history of the analytic movement a strong antimetaphysical strain, and its exponents have generally assumed that the methods of science and of everyday life are the best ways of finding out the truth.

Moore and Russell

The first break from the idealist view that the physical world is really only a world of appearances occurred when Moore, in a paper entitled “The Nature of Judgment” (1899), argued for a theory of truth that implies that the physical world does have the independent existence that it is naively supposed to have. Although the theory was soon abandoned, it represented British philosophy’s return to common sense.

The influences on Russell and Moore—and thus their methods of dealing with problems—soon diverged, and their different approaches became the roots of two broadly different traditions in analytic philosophy, referred to above as formalism and informalism. Russell, whose general approach would be adopted by philosophers in the formalist tradition, was a major influence on those who believed that philosophical problems could be clarified, if not solved, by using the technical equipment of formal logic and who saw the physical sciences as the only means of gaining knowledge of the world. They regarded philosophy—if a science at all—as a deductive and a priori enterprise on a par with mathematics. Russell’s contributions to this side of the analytic tradition have been important and, in great part, lasting.

In contrast to Russell, Moore, who would inspire philosophers in the informalist tradition, never found much need to employ technical tools or to turn philosophy into a science. His dominant themes were the defense of commonsensical views about the nature of the world against esoteric, skeptical, or grandly metaphysical views and the conviction that the right way to approach a philosophical puzzle is to examine closely the question through which it was generated. Philosophical problems, he thought, are often intractable simply because philosophers have not stopped to formulate precisely what is at issue.

Because of these two themes, Moore enlisted sympathy among analytic philosophers who, from the 1930s onward, saw little hope in advanced formal logic as a means of solving traditional philosophical problems and who believed that philosophical skepticism about the existence of an independent external world or of other minds—or, in general, about common sense—must be wrong. These philosophers also shared with Moore the belief that it is often more important to look at the questions that philosophers pose than at their proposed answers. Thus, unlike Russell, who was important for his solutions to problems in formal logic and the philosophy of mathematics, among other areas, it was more the spirit of Moore’s philosophy than its lasting contributions that made him such an important influence.

G.E. Moore, detail of a pencil drawing by Sir William Orpen; in the National Portrait Gallery, London.

In his seminal essay “A Defence of Common Sense” (1925), as in others, Moore argued not only against idealist doctrines such as the unreality of time but also against all the forms of skepticism—for example, about the existence of other minds or of a material world—that philosophers have espoused. The skeptic, he pointed out, usually has some argument for his conclusion. Instead of examining such arguments, however, Moore pitted against the skeptic’s premises various quite everyday beliefs—for example, that he had breakfast that morning (thus, time cannot be unreal) or that he does in fact have a pencil in his hand (thus, there must be a material world). He challenged the skeptic to show that the premises of the skeptic’s argument are any more certain than the everyday beliefs that form the premises of Moore’s argument.

Although some scholars have seen Moore as an early practitioner of ordinary language philosophy, his appeal was not to what it is proper to say but rather to the beliefs of common sense. His rejection of any philosophical doctrine that offends against common sense was influential not only in the release that it afforded from the metaphysical excesses of absolute idealism but also in its impact on the sensibilities and general orientation of most later analytic philosophers.

Moore was also important for his vision of the proper business of philosophy—analysis. He was puzzled, for example, about the proper analysis of “a sees b,” in which b designates a physical object (e.g., a pencil). He thought that there must be a special sense of see in which one does not see the pencil but sees only part of its surface. In addition, he thought that there must be another sense of see in which what is directly perceived is not even the surface of the pencil but rather what Moore called “sense data” and what earlier empiricists had called “visual sensations” or “sense impressions.” Moore’s problem was to discern the relationships between these various elements in perception and, in particular, to discover how a person can be justified, as Moore fully believed he is, in his claims to see physical objects when what he immediately perceives are really only sense data. The idea that sense impressions form the immediate objects of perception played a large role in early analytic philosophy, showing once again its empiricist roots. Later, however, it became an important source of division among the logical positivists. In addition, most ordinary-language philosophers, as well as those closely influenced by the later work of Russell’s most famous student, Ludwig Wittgenstein, found sense data to be as unpalatable and unwarranted as Moore had found McTaggart’s doctrine of the unreality of time.

One of the recurring themes in philosophy is the idea that the discipline needs to be given a new methodology. Among empiricists this has often meant making it more scientific. From an early date, Russell enunciated this viewpoint, finding in the techniques of symbolic logic a measure of reassurance that philosophy might be put on a new foundation. Russell did not see the philosopher as merely a logician, however. Symbolic logic might provide the framework for a perfect language, but the content of that language is something else. The job of the philosopher is—for Russell, as it was for Moore—analysis. But the purpose is somewhat different. In most of Russell’s work, analysis has the task of uncovering the assumptions—especially about the kinds of things that exist—that it is necessary to adopt in order to be able to describe the world as it is. For the most part this description is the one that science gives, and it is therefore realistic. Thus, Russell’s use of analysis was openly metaphysical.

Bertrand Russell, 1960.

There then arises the question of how philosophical analysis—which, at least on one conception, is concerned with how people talk about the world—can presume to give any answers about how the world is. The search for an answer begins with Russell’s theory of descriptions, a doctrine that is evidently closely tied to linguistic concerns.

In a simple subject-predicate statement such as “Socrates is wise,” Russell observed, there seems to be something referred to (Socrates) and something said about it (that he is wise). If the proper name in such a sentence is replaced by a “definite description”—as in the statement “The president of the United States is wise”—there is apparently still something referred to and something said about it. A problem arises, however, when nothing fits the description, as in the statement “The present king of France is bald.” Although there is apparently nothing for the statement to be about, one nevertheless understands what it says. Prior to Russell’s work on definite descriptions, some philosophers—most notably Alexius Meinong (1853–1920)—felt forced by such examples to conclude that, in addition to things that have real existence, there are things that have some other sort of existence, for such statements could not be understood unless there was something for them to be about.

In Russell’s view, philosophers like Meinong had been misled by the surface grammatical form of sentences containing definite descriptions. Although they treated them as if they were simple subject-predicate statements, in reality they were much more complex. Upon analysis, the statement “The present king of France is bald” is shown to be a complex conjunction of other statements. Rendered in symbolic logic—with F standing for the predicate “…is a present king of France” and B for “…is bald”—these statements are: (i) (∃x)(Fx), or “There is a present king of France”; (ii) (∀x)(∀y)((Fx ∧ Fy) → x=y), or “There is at most one present king of France”; and (iii) (∀x)(Fx → Bx), or “If anyone is a present king of France, he is bald.” More important, each of the three component statements is general, in the sense that it does not refer to anything or anyone in particular. Thus, there is no phrase in the complete analysis equivalent to “the present king of France,” which shows that the phrase is not an expression, like a proper name, that refers to something as the thing that the whole statement is about. There is no need, therefore, to make Meinong’s distinction between things that have real existence and things that have some other kind of existence.
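
Equivalently, the three conditions can be packed into a single quantified formula, with the same F and B:

```latex
% "The present king of France is bald":
%   there is an x that is a present king of France,
%   nothing else is, and x is bald.
\exists x\,\bigl(Fx \;\wedge\; \forall y\,(Fy \rightarrow y = x) \;\wedge\; Bx\bigr)
```

Nothing in the formula corresponds to the phrase “the present king of France” as a unit; when no such king exists, the first conjunct fails and the whole statement is simply false, rather than being about a nonexistent thing.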

Because descriptions do not refer directly to things in the world, however, there must be some other way in which such a direct connection between language and the world is made. In search of this connection, Russell turned his attention to proper names. The name Aristotle, for example, does not seem to carry any descriptive content. But Russell argued, on the contrary, that ordinary names are really concealed definite descriptions (Aristotle may simply mean “The student of Plato who taught Alexander, wrote the Metaphysics, etc.”). If a name had no descriptive content, one could not sensibly ask about the existence of its bearer, for one could then not understand what is expressed by a statement involving it. If Russell were a name in this sense (without any descriptive content), then merely to understand the statement “Russell exists” or the statement “Russell does not exist” presupposes that one already knows what Russell refers to. But then there cannot be any genuine question about Russell’s existence, for just to understand the question one must know the thing to which the name refers. Ordinary proper names, however—Russell, Homer, Aristotle, and Santa Claus—as Russell pointed out, are such that it makes sense to question the existence of their bearers. Thus, ordinary names must be concealed descriptions and cannot be the means of directly referring to the particular things in the world.

Russell eventually concluded that things in the world can be talked about only through the medium of a special kind of name—in particular, one about which no question can arise whether it names something or not—and he suggested that in English the only possible candidates are the demonstrative pronouns this and that.

At this point in his thinking, Russell shifted from questions about the nature of language to questions about the nature of the world. He asked what sort of thing it is that can be named in the strict logical sense, that can be known and talked about, and from which one can learn about the world. The important restriction was that no question about whether it exists or not can arise. Ordinary physical objects and other people seemed not to fit this requirement.

In his search for something whose existence cannot be questioned, Russell hit upon present experience and, in particular, upon sense data: one can question whether one is really seeing some physical object—whether, for example, there is a desk before one—but one cannot question that one is having visual impressions or sense data. Thus, what a person can name in the strict logical sense and what things in the world he can refer to directly turn out to be elements of his present experience. Russell therefore made a distinction between what can be known by acquaintance and what can be known only by description—i.e., between things whose existence cannot be doubted and things about whose existence, at least theoretically, doubt can be raised. What is novel about Russell’s conclusion is that it was arrived at from a fairly technical analysis of language. To be directly acquainted with something is to be in a position to give it a name in the strict logical sense, and to know something only by description is to know only that there is something that the description uniquely fits.

Russell was not constant in his view about physical objects. At one point he thought that the observer must infer their existence as the best hypothesis to explain the observer’s experience. Later he held that they were “logical constructions” out of sense data.

The next important development in analytic philosophy was initiated when Russell published a series of articles entitled “Philosophy of Logical Atomism” (1918–19), in which he acknowledged a debt to Wittgenstein, who had studied with Russell before World War I. Wittgenstein’s own version of logical atomism, presented in his difficult work Tractatus Logico-Philosophicus (1922), was tremendously influential in the subsequent development of analytic philosophy.

Russell’s choice of the words logical atomism to describe this viewpoint was, in fact, particularly apt. By the word logical Russell meant to sustain the position, described above, that through analysis—particularly with the aid of symbolic logic—the underlying logical structure of language can be revealed and that this disclosure, in turn, would show the fundamental structure of that which language is used to describe. By the word atomism Russell meant to emphasize the particulate nature of the results that his analyses and those of Wittgenstein seemed to yield.

On the linguistic level, the atoms in question are atomic propositions, the simplest statements that it is possible to make about the world; and on the level of what language talks about, the atoms are the simplest atomic facts, those expressible by atomic propositions. More complex propositions, called molecular propositions, are built up out of atomic propositions via the logical connectives—such as “… or …,” “… and …,” and “not …”—and the truth-value of the molecular proposition is in each case a function of the truth-values of its component atomic propositions.
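
This truth-functional idea can be sketched in a few lines of code. The atomic propositions and their assigned truth-values below are hypothetical examples chosen for illustration, not claims drawn from the atomists themselves:

```python
# A minimal sketch of truth-functional composition, in the spirit of
# logical atomism: the truth-value of a molecular proposition is fixed
# entirely by the truth-values of its atomic components.

# Hypothetical atomic propositions with assigned truth-values.
atomic = {
    "Tigers bite": True,
    "Tigers fly": False,
}

# The connectives rendered as truth functions.
def NOT(p):
    return not p

def AND(p, q):
    return p and q

def OR(p, q):
    return p or q

# "Tigers bite, and tigers do not fly" -- a molecular proposition whose
# truth-value is computed from its atoms alone.
molecular = AND(atomic["Tigers bite"], NOT(atomic["Tigers fly"]))
print(molecular)  # -> True
```

On this picture, determining how the world makes each atomic proposition true or false is the only empirical work; everything at the molecular level is settled by logic alone.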

Language, then, must break down, upon analysis, into propositions that cannot be analyzed into any other simpler propositions; and, insofar as language mirrors reality, the world must then be composed of facts that are not constituted of other simpler facts. Atomic propositions themselves, however, are composed of strings of names that function, as Russell explained, in the strict logical sense; and atomic facts are composed of simple objects, the things that can thus be named.

The details of logical atomism have fascinated philosophers because of the way in which they not only formed a coherent whole but also seemed to follow inexorably from the doctrine’s central assumptions. There are close connections between logical atomism, which was perhaps the most metaphysical theory in contemporary analytic philosophy, and traditional empiricism. The decomposition of language and the world into atomic elements, for example, was a significant feature of the work of the classical empiricists Locke, Berkeley, and Hume. The thesis that the structure of language mirrors the structure of reality has as a consequence that the meaning of a proposition is the particular fact to which it is isomorphic. This “picture theory” of meaning, as it came to be called, was adumbrated by Russell and stated explicitly in the Tractatus. Another theme of logical atomism is that the deductive sciences—mathematics and logic—are based solely on the way that language operates and cannot reveal any truths about the world, not even about a world of entities called numbers. Finally, logical atomism, in Wittgenstein’s thought as opposed to Russell’s, was at one and the same time metaphysical—in the sense of conveying via pure reasoning something about how the world is—and antimetaphysical. Wittgenstein’s Tractatus is unique in the history of empiricism in its acceptance of the fact that it is itself a piece of metaphysics, even though part of its metaphysics is that metaphysics is impossible: the Tractatus says of itself that what it says cannot be coherently said. Only empirical science, according to Wittgenstein, can tell us anything about the world as it is. Yet the Tractatus apparently tells us about, for example, the relationship between language and the facts of the world. For Wittgenstein the solution of this apparent paradox lies in his distinction between what can be said and what can only be shown. There are certain things that can somehow be seen to be so—in particular, the ways in which language is connected with the world—though they cannot be straightforwardly stated. Although metaphysics is not strictly expressible in any language, the attempt to say metaphysical things, if done in the right way, can show what it cannot coherently express.

Wittgenstein’s Tractatus was both a landmark in the history of contemporary analytic philosophy and perhaps its most aberrant example. Not only did it contain a highly sophisticated metaphysics, but it also was an important influence on the most antimetaphysical school of analytic philosophy, viz., logical positivism. The central doctrines of this school were developed by a group of philosophers, scientists, and logicians centred in Vienna who came to be known as the Vienna Circle. Among the members of this group, Rudolf Carnap (1891–1970) and Moritz Schlick (1882–1936) have perhaps had the most influence on Anglo-American philosophy, though it was an English philosopher, A.J. Ayer (1910–89), who introduced the ideas of logical positivism to English philosophy in his widely read work Language, Truth and Logic (1936). Its main tenets have struck sympathetic chords among many analytic philosophers and are still important today, even if sometimes in repudiation.

Sir A.J. Ayer, late 1980s.

Above all else, logical positivism was antimetaphysical; nothing can be learned about the world, it held, except through the methods of the empirical sciences. The positivists sought a method that would (1) determine whether a theory that seems to be about the world is really metaphysical and (2) show that such a metaphysical theory is, in fact, meaningless. This they found in the principle of verification. In its positive form, the principle says that the meaning of any statement that is really about the world is given by the methods employed for verifying its truth or falsity—the only allowable methods being, ultimately, those of observation and experiment. In its negative form, the principle says that no statement can be about the world unless there is some method of verification attached to it. The negative form was the weapon used against metaphysics and as a vindication of science as the only possible source of knowledge about the world. The principle would thus class as meaningless many philosophical and religious theories that purport to say something about the world but provide no way of testing the truth of the statements of which the theory is composed. In religion, for example, it would render suspect the statement that God exists, which, being metaphysical, would be strictly speaking meaningless.

The principle of verification ran almost immediately into difficulties, most of which were first raised by the positivists themselves. The attempt to work out these difficulties belongs to a more detailed study of the movement. It is sufficient to note here that, as a result of these problems, most subsequent analytic philosophers have been wary of appealing directly to the principle. It has, however, influenced philosophical work in more subtle ways.

With the principle of verification in hand, the positivists thought that they could show a great many theories to be nonsense. There were several areas of discourse, however, that failed the test of the principle but that nevertheless were impossible to rule out in this fashion. Foremost among them were the disciplines of mathematics and ethics. Mathematics (and logic) could hardly be written off as nonsense. Yet mathematical theorems are not verifiable by observation and experiment; they are known, in fact, by pure a priori reasoning alone. The answer to this problem seemed to be provided in Wittgenstein’s Tractatus, which held that the propositions of mathematics and logic are, in Kantian terms, analytic; i.e., like the statement “All bachelors are unmarried,” they are true not because they correctly describe the world but because they are consistent with or follow logically from the conventions underlying the use of the symbols involved.

About ethics—or, more precisely, about any statement expressing a judgment of value—the positivist view was quite different, yet still of lasting importance. On this view value judgments are not, like mathematical truths, necessary adjuncts to science; nor, obviously, are they true by definition or linguistic convention. The usual view of the positivists, as mentioned briefly above, is that what look like statements of fact—e.g., that one should not tell lies—are really expressions of one’s feelings toward a certain action, in the same way that “Ouch!” is an expression of one’s pain. Value judgments, therefore, are not about the world, and they are not really true or false. This doctrine, known as emotivism, illustrates the positivists’ divorce of ethics from science and once again reflects an old empiricist theme. The same theme can be seen, for example, in Hume’s dictum that one cannot derive an “ought” from an “is”: from matters of fact one cannot derive a conclusion about what ought to be.

The later Wittgenstein

A crucial turn that initiated developments that were destined to have a lasting and profound effect on much of contemporary analytic philosophy occurred in 1929, when Wittgenstein, after some years in Austria during which he was not philosophically very active, returned to England and established his residence at Cambridge. There the direction of his thought soon shifted radically away from the doctrines of the Tractatus, and his views became in many ways diametrically opposed to logical atomism. Because he published none of his writings from this period, his influence on other English philosophers—and ultimately on those in all of the countries associated with analytic philosophy—was exerted through his students and others to whom he spoke at Cambridge. His style changed too, from the semirigorous and formally organized propositions of the Tractatus to sets of loosely connected paragraphs and remarks in which ideas are often conveyed not discursively but by suggestion and example. One result of this transformation was a major division within the ranks of analytic philosophers, between those who practiced philosophy in the manner of the later Wittgenstein and those who preferred the Tractatus.

Although Wittgenstein’s thought ranged over almost the entire field of philosophy, from the philosophy of mathematics to ethics and aesthetics, its impact has perhaps been felt most where it has concerned the nature of language and the relationship between the mental and the physical.

Language and rule following

For the logical atomists, language was conceived as having a certain necessary and fairly simple underlying structure, which was expressible in terms of symbolic logic. In other words, the underlying structure of language is reflected in the logical rules that govern the construction of molecular propositions out of atomic ones. The later Wittgenstein, however, rejected this assumption. Language, he now thought, is like an instrument that can be used for an indefinite number of purposes. Hence, any effort to codify its operation in some small set of rules would be like supposing that screwdrivers, for example, can be used only to drive screws and not also to open jars or to jimmy windows. Language is a human institution that is bound only by what its speakers consider to be correct or incorrect. And that, in turn, is not really a matter for a priori theories to consider.
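The picture being rejected here, on which molecular propositions are truth functions of atomic ones, can be sketched with ordinary Boolean operations. This is only an illustrative sketch; the atoms "p" and "q" are invented placeholders, not examples from the atomists' own writings.

```python
# Logical atomism's picture of molecular propositions: the truth value of a
# compound sentence is fixed entirely by the truth values of its atomic parts
# and the logical connectives combining them.
def conjunction(p, q):
    return p and q           # "p and q"

def disjunction(p, q):
    return p or q            # "p or q"

def conditional(p, q):
    return (not p) or q      # "if p then q", read as material implication

# A truth table for the conditional, computed mechanically from the atoms.
for p in (True, False):
    for q in (True, False):
        print(p, q, conditional(p, q))
```

On this picture the rules are few and fixed; Wittgenstein's later point is that actual language is not bound by any such small set of rules.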

The conception of language as a logical calculus leads naturally to the idea that meaning is a kind of naming, or referring. Prior to Wittgenstein’s work in the 1930s and afterward, many philosophers, especially those influenced by logical atomism, held that, for at least a large class of terms—proper names and noun phrases, for example—the meaning of the term is just the particular entity in the world that it names. This view, according to Wittgenstein, ignores the huge variety of ways in which words can be meaningful, and more generally it assumes that meaning is fixed independently of the ways in which language is used and the activities in which it is embedded. Wittgenstein’s later conception of meaning is expressed in his well-known observation (which, however, he did not make without qualification) that “the meaning of a word is its use in the language.”

The conception of language as a logical calculus was misleading in other respects as well. It suggested, for example, that in learning a language one learns the rules first and then goes on to apply them in speaking and understanding. But, in fact, one does not first learn the rules and then use the language; indeed, prior to learning the language, one would not know what to do with rules. Mathematics and logic are, in this sense, bad models for language, because they aim at setting out beforehand the rules and principles that are subsequently to be used. The “rules” that one might plausibly discern in the language that one speaks are not, as rules, already there, in a ghostly way, guiding what one says; they are either generalizations from the finite data of what is counted as correct or incorrect, or they are rules that, as Wittgenstein metaphorically expressed it, one puts away in the archives—one adopts the rule, but only after the fact.

The notion of following a rule was wrongly analyzed in many classical views about language, according to Wittgenstein. He cast irrevocable doubt on the prevalent theory—typified best, perhaps, in John Locke’s Essay Concerning Human Understanding (1690)—that to use an expression meaningfully is to have in one’s mind a standard or a rule for applying it correctly. Against this theme, Wittgenstein’s point was that a rule by itself is dead—it is like a ruler in the hands of someone who does not know how to use it, a mere stick of wood. Rules cannot compel or even guide a person unless he knows how to use them, and the same is true about mental images, which have often been thought to provide the standard for using linguistic expressions. But if rules themselves do not give life to words but require a similar explanation for what gives them life—if there must be, in effect, rules that tell one how to apply the rules—then there is a useless regress and no philosophical or explanatory value in the assumption of an apparatus of internal rules and standards.

Relation between mental and physical events

In some respects Wittgenstein made significant breaks with the empiricist tradition, especially in his views about language and the explanation of the rigour of the deductive sciences. His treatment of the relationship between mental events and physical events also represents an important departure. Empiricists generally have started from the important assumption that what a person is immediately acquainted with is his own sensations, ideas, and volitions; that these are mental and not physical; and, most important, that the things he knows immediately are essentially private and inaccessible to others. For both Moore and Russell there then arose the problem of how, in view of the privacy stressed by the sense-datum theory, the world of physical objects could be known. Wittgenstein’s attack on this viewpoint, which has come to be known as the “private language” argument, has been much discussed, partly because it was in this area that Wittgenstein presented what could most easily be identified as a more or less formal argument—one that could then be analyzed and criticized in an analytic manner. Even in this case, however, his style of writing was such that the proper formulation of the argument has become a main source of controversy. Wittgenstein argued that the notion of an utterly private experience would imply: (1) that what goes on in the mental life of a person could be talked about only in a language that that person alone could understand; (2) that such a private language would be no language at all (this has been the main source of controversy); and (3) that the widely held doctrine that there are absolutely private mental events cannot be intelligibly stated, because to say that there are such events is to speak in a public language about things that supposedly can be referred to only in a private language understandable by just one person.

The fact that Wittgenstein’s argument against private language depends essentially on the question “What is it to follow a rule?” illustrates a common characteristic of his writings, viz., that themes developed in one area of philosophy continually emerge in apparently quite divorced areas. His extraordinary ability to see a common source of difficulty in philosophical problems that seem to be unrelated helps to explain his style of writing, which seems at first sight to be a somewhat chaotic arrangement of ideas.

For a time, analytic philosophy was attracted to a behaviouristic view of mental phenomena according to which apparently private mental events, such as the feeling of fear, are not really private and in fact are definable in terms of publicly observable patterns of behaviour. Empiricism’s orientation toward science, which is founded on observation, together with the view that the evidence one has of what goes on in the mental lives of other people must derive from what one sees of their behaviour, has often warred against another inclination of empiricism, which is to regard the starting point of all knowledge of the world, for each person, as being essentially private sense experience. Wittgenstein was tremendously influential, however, in suggesting that these two extremes are not the only alternatives. Yet attempts to state how Wittgenstein could deny the privacy of experience without espousing some form of behaviourism—which treats emotions, desires, and attitudes as dispositions to behave in certain ways—have not been very successful. Sympathetic interpreters have taken up the notion of “criteria,” which Wittgenstein used but did not develop in any detail. The idea is that, for mental states such as fear, outward behaviour (e.g., running away, blanching, or cringing) does not constitute what it is to be in that state, as behaviourism would have it, but neither is such behaviour merely evidence of some completely private event. The problem has been to characterize the relation between behaviour and mental states in such a way that the two are neither identical nor evidence for each other while still allowing that knowledge of a person’s characteristic behaviour is essential to understanding the notion of a certain mental state.

The “therapeutic” function of philosophy

For the later Wittgenstein and many philosophers influenced by him, the proper role of philosophy is not, as it was for Russell, to develop theories in answer to philosophical problems but to clear up the conceptual confusions through which philosophical problems arise in the first place. These confusions invariably come about through misunderstandings of the complicated ways in which terms with philosophical import—such as know, believe, desire, intend, and think—are used in everyday life. Philosophers who are thus “bewitched” by language have been led to wonder, for example, how one can know what is going on in another’s mind or how desires and emotions can produce physical changes in the body, and vice versa. Examination of the actual workings of psychological language would, on this way of looking at philosophy, “dissolve” rather than solve the problems, for it would reveal features of the psychological concepts involved that philosophers, in their original formulation of the problems, had ignored or misunderstood. Philosophy is thus not an avenue to discovering philosophical truths but a kind of conceptual “therapy.” As Wittgenstein observed in the Philosophical Investigations (1953), the aim of philosophy is “to shew the fly the way out of the fly-bottle.”

Critics have argued that this way of looking at philosophy reduces the discipline to a sterile, inward-looking, and ultimately uninteresting enterprise. However, the confusions that philosophy, thus conceived, seeks to clear up need not be only those of philosophers. Scientists, for example, sometimes produce or presuppose philosophical theories that affect how they conduct their research—which, therefore, may be a fitting subject for philosophical therapeutics. Behaviourism in psychology seems to presuppose a philosophical theory and perhaps to be based on a general confusion about psychological concepts. More recently, some philosophers have suggested that contemporary cognitive science—and in particular the field of artificial intelligence, which views the human mind as a kind of computer—also is based on conceptual confusions created in large part by misunderstandings of the complexities of psychological speech. On this view, therefore, philosophy can have a therapeutic value beyond the sphere of philosophy itself.

Later trends in England and the United States

Wittgensteinians

Close students of Wittgenstein’s ideas tended to work chiefly on particular concepts that lie at the core of traditional philosophical problems. As an example of such an investigation, a monograph entitled Intention (1957), by G.E.M. Anscombe, an editor of Wittgenstein’s posthumous works, may be cited as an extended study of what it is for a person to intend to do something and of what the relationship is between his intention and the actions that he performs. This work occupied a central place in a growing literature about human actions, which in turn influenced views about the nature of psychology, of the social sciences, and of ethics. Another student of Wittgenstein, the American philosopher Norman Malcolm, has investigated concepts such as knowledge, certainty, memory, and dreaming. As these topics suggest, Wittgensteinians tended to concentrate on Wittgenstein’s ideas about the nature of mental concepts and to work in the area of philosophical psychology. Typically, they began with classical philosophical theories and attacked them by arguing that they employ some key concept, such as knowledge, in a manner incongruous with the way in which the concept would actually be employed in various situations. Their works thus abound with descriptions of hypothetical, though usually homely, situations and with questions of the form “What would a person say if…?” or “Would one call this a case of X?”

After World War II the University of Oxford was the centre of extraordinary philosophical activity; and, although Wittgenstein’s general outlook on philosophy—his turning away, for example, from the notion of formal methods in philosophical analysis—was an important ingredient, many of the Oxford philosophers could not be called Wittgensteinians in the strict sense. The method employed by many of them has often been characterized—especially by critics—as an “appeal to ordinary language,” and they were thus identified as belonging to the school of “ordinary language” philosophy. Exactly what this form of argument is supposed to be and what exemplifies it in the writings of these philosophers has been by no means clear. Gilbert Ryle, Moore’s successor as editor of a leading journal, Mind, was among the most prominent of those analysts who were regarded as using ordinary language as a philosophical tool. Ryle, like Wittgenstein, pointed out the mistake of regarding the mind as what he called “the ghost in the machine” by investigating how people employ a variety of concepts, such as memory, perception, and imagination, that designate “mental” properties. He tried to show that, when philosophers carry out such investigations, they find that, roughly speaking, it is the way people act that leads to the attribution of these properties and that there is no involvement of anything internally private. He also attempted to show how philosophers were led to dualistic conclusions through the use of a wrong model in terms of which to interpret human activities. A dualistic model may be constructed, for example, by wrongly supposing that an intelligently behaving person must be continually utilizing knowledge of facts—knowledge that something is the case. Ryle contended, on the contrary, that much intelligent behaviour is not a matter of knowing that something is the case but of knowing how to do something. Once this difference between “knowing that” and “knowing how” is acknowledged, according to Ryle, there is no temptation to explain the behaviour by looking for a private internal knowledge of facts.

Although Ryle’s objectives were similar to those of Wittgenstein, his results often seemed more behaviouristic. It is true that Ryle did ask, in pursuit of his method, some fairly detailed questions about when a person would say, for example, that someone had been imagining something; but it is by no means clear that he was appealing to ordinary language in the sense of an investigation into how speakers of English use certain expressions. In any case, the charge, often voiced by critics, that this style of philosophizing trivializes and perverts philosophy from its traditional function would probably also have to be leveled against Aristotle, who frequently appealed to “what we would say.”

A powerful philosophical figure among postwar Oxford philosophers was John Austin, who was White’s Professor of Moral Philosophy until his death in 1960. Austin believed that many philosophical theories derive their plausibility from overlooking distinctions—often very fine—between different uses of expressions, and he also thought that philosophers too frequently think that any one of a number of expressions will do just as well for their purposes. (Thus, ignoring the difference between an illusion and a delusion, for example, lends credence to the view that the objects of immediate perception are not physical objects but sense data.) Austin’s work was, in many respects, much closer to the ideal of philosophy as comprising the analysis of concepts than was that of Ryle or Wittgenstein. Austin was also much more concerned with the nature of language itself and with general theories of how it functions. His novel approach, as exemplified in the posthumously published lectures How to Do Things with Words (1962), set a trend that was followed in a sizable literature in the philosophy of language. Austin took the total “speech act” as the starting point of analysis, and this allowed him to make distinctions based not only upon words and their place in a language but also upon points such as the speaker’s intentions in making the utterance and its expected effect on the audience. There was also in Austin’s approach something of the program of Russell and the early Wittgenstein for laying bare the fundamental structure of language. In the 1960s and ’70s, Austin’s theory of speech acts was considerably extended and systematized in work by his American student John Searle.

Although the Oxford philosophers and the posthumous publication of Wittgenstein’s writings produced a revolution in Anglo-American philosophy, the branch of analytic philosophy that emphasized formal analyses by means of modern logic was by no means dormant. Since the appearance of Principia Mathematica in 1910–13, striking new findings have emerged in logic, many of which, though requiring for their understanding a high level of mathematical sophistication, are nevertheless important for philosophy.

Among those philosophers for whom symbolic logic occupied a central position was W.V.O. Quine, who taught at Harvard University from the 1930s to his retirement in 1978. Symbolic logic represented for him, as it did for many earlier analytic philosophers, the framework for the language of science. There were two important themes in his work, however, that represent significant departures from the positions of the logical atomists and the logical positivists. In the first place, Quine rejected the distinction between “analytic” statements, whose truth or falsity depends upon the meaning of the terms involved (e.g., “All bachelors are unmarried”), and “synthetic” statements, whose truth or falsity is a matter of empirical and observable fact (e.g., “It is raining here now”). This distinction, which had played an essential role in logical positivism and was thought by most empiricists to be the basis of the division between the deductive sciences (including philosophy) and the empirical ones, was impossible to draw, according to Quine. In the course of his argument, a similar doubt was cast upon concepts traditional not only to philosophy but also to linguistics—in particular, the concept of synonymy, or sameness of meaning.

The second important departure of Quine’s philosophy was his attempt to show that science can be successfully conducted without reference to what he calls “intensional entities.” Among such entities are many items that analytic philosophers had thought they could talk about without difficulty, such as meanings, propositions, and the properties—attributed to statements—of being necessarily true or possibly true. Because he did not accept the existence of entities that did not need to be referred to in successful scientific theories, Quine concluded that we have no good reason to believe that intensional entities exist. Quine’s work, though by no means widely accepted, has made analytic philosophers at least wary of uncritically accepting certain of their standard distinctions.

Since the mid-20th century, there has been considerable interaction between analytic philosophy and the science of linguistics. This interaction did not occur in earlier years because analytic philosophers, at least until the later Wittgenstein, had almost always considered their study of language to be a priori and thus unconcerned with empirical facts about particular languages. However, the advent of theories of transformational-generative grammar in the work of the American linguist Noam Chomsky and others from the late 1950s, and in particular Chomsky’s theory of innate linguistic knowledge in the form of a “universal grammar,” produced a revolution in linguistics and exerted a powerful influence in analytic philosophy, especially in the fields of epistemology and the philosophy of mind. At first, some analytic philosophers regarded Chomsky’s analyses, in which the surface syntactic structures of sentences were generatively derived from underlying “deep structures,” as a possible model for philosophical analysis. Most subsequently concluded, however, that, whereas Chomsky’s way of looking at grammar had contributed valuable concepts to philosophy, it was not an appropriate methodology for doing analytic philosophy. The interchange between linguists and philosophers has nevertheless continued.

Analytic philosophy today

Beginning in the last quarter of the 20th century, analytic philosophy was occupied with two vigorous debates, the first concerning the theory of reference and the second concerning the theory of mind.

The theory of reference

The debate concerning the theory of reference was about which of two competing accounts, one based on the views of Frege and one based on the early views of Russell, is best able to explain how people, using language, are able to refer to things in the world and to communicate with each other. The debate involved a long-standing puzzle regarding so-called “identity” statements—i.e., statements consisting of two names or descriptions joined by is or are. The puzzle was how to account for the apparent informativeness of statements such as “Venus is the morning star,” in which the referents of the names or descriptions are the same. Because “Venus” and “the morning star” both refer to Venus, the statement “Venus is the morning star” must be equivalent in content to “Venus is Venus”—both statements say of a certain object, namely Venus, that it is Venus. But if the two statements say the same thing, how is it possible that one of them, “Venus is the morning star,” is informative—indeed, it represents a discovery made by astronomers in ancient Babylonia—whereas the other, “Venus is Venus,” is not?

Frege’s solution to the puzzle involves a tripartite distinction between a linguistic expression, its meaning, or sense (Sinn), and its referent (Bedeutung). The meaning of an expression, according to Frege, is what one can be said to grasp when one understands it and what the expression shares with its translations into other languages. The meaning determines the expression’s referent as the thing to which the meaning uniquely applies. Thus, the referent of “the morning star” is the planet Venus, because the meaning of “the morning star” uniquely applies to that planet. Accordingly, whereas “Venus” and “the morning star” have the same referent, their meanings are different, and this explains why “Venus is the morning star” is informative and “Venus is Venus” is not.
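Frege's tripartite scheme can be pictured as a two-step lookup, from expression to sense and from sense to referent. The glosses below are illustrative labels, not a serious semantics.

```python
# Frege's scheme: expression -> sense (Sinn) -> referent (Bedeutung).
SENSE = {
    "Venus": "the second planet from the Sun",
    "the morning star": "the brightest body in the morning sky",
}
REFERENT = {
    "the second planet from the Sun": "VENUS",
    "the brightest body in the morning sky": "VENUS",
}

def referent_of(expression):
    # The sense determines the referent; the expression reaches its
    # referent only through its sense.
    return REFERENT[SENSE[expression]]

# Same referent, different senses: this is why "Venus is the morning star"
# can be informative while "Venus is Venus" is not.
print(referent_of("Venus") == referent_of("the morning star"))  # -> True
print(SENSE["Venus"] == SENSE["the morning star"])              # -> False
```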

Russell’s solution to the puzzle is based on his theory of descriptions. As discussed above (see History of analytic philosophy: Bertrand Russell), Russell held that definite descriptions are not genuinely referring expressions, as are logically proper names, and that sentences containing them are logically equivalent to complex general statements containing existential and universal quantifiers. On this view, the sentence “Venus is the morning star” is logically equivalent to the complex statement “(i) There is a morning star, (ii) there is at most one morning star, and (iii) if anything is a morning star, then it is Venus.” Thus, “Venus is the morning star” is informative because it is equivalent to a complex statement that contains information about the morning star and its relation to Venus. In contrast, according to an early view of Russell (one in which ordinary proper names function logically as genuine names rather than as concealed descriptions), the sentence “Venus is Venus” says only that Venus is identical to itself.
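In the notation of modern predicate logic, letting M(x) abbreviate “x is a morning star” and v name Venus, Russell's three clauses combine into a single quantified formula:

```latex
\exists x \,\bigl(\, M(x) \;\wedge\; \forall y \,( M(y) \rightarrow y = x ) \;\wedge\; x = v \,\bigr)
```

Clause by clause, the existential quantifier supplies (i), the universal clause supplies the uniqueness condition (ii), and the identity x = v supplies (iii). No constituent of the formula simply names the morning star, which is Russell's point that definite descriptions are not genuinely referring expressions.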

A major difference between the Fregean and the Russellian accounts is that for Frege every referring expression, whether a proper name or a description, has both a meaning and a referent, whereas for Russell proper names have referents but no meanings. Consequently, within a Russellian semantics the connection between language and the world via proper names is direct, whereas for Frege it is indirect, taking place through the intermediation of the meaning of the name.

From the early 20th century, analytic philosophers were thus divided over whether reference is direct or indirect. In 1980 support for the direct-reference view was provided by the American philosopher Saul Kripke, who argued that proper names, unlike descriptions, were “rigid designators” that referred directly to the same object in every “possible world.” Thus, according to Kripke, although Aristotle was the teacher of Alexander the Great, it could have turned out that someone other than Aristotle was Alexander’s teacher. In linguistic terms the referent of “the teacher of Alexander the Great” is different in different possible worlds, and the sentence “Aristotle was the teacher of Alexander the Great” is therefore true in some possible worlds and false in others (i.e., it is a contingent truth). But whereas someone other than Aristotle could have been the teacher of Alexander the Great, no one other than Aristotle could have been Aristotle. In linguistic terms, once the referent of Aristotle is fixed in the actual world (i.e., once Aristotle is applied to Aristotle), the name Aristotle must refer to Aristotle in every possible world in which it refers at all, and the sentence “Aristotle is Aristotle” is therefore true in every possible world (i.e., it is a necessary truth). But if the referent of Aristotle is the same in every possible world, then it cannot be determined by means of a description such as “the teacher of Alexander the Great,” because the referents of such descriptions, as we have seen, are different in different possible worlds. Therefore, Aristotle and all other proper names refer directly to their bearers. Kripke was anticipated in this theory by the philosopher Ruth Barcan Marcus and joined by a large number of other thinkers, including Hilary Putnam, David Kaplan, Joseph Almog, and Howard Wettstein.

The Fregean side of the debate also had many supporters, chief among them the American philosopher John Searle. His view, following that of the British philosopher P.F. Strawson, was that to speak as if words by themselves refer is an oversimplification: it is not words that refer but people using words. What ultimately determines the referent of an expression is what the person who uses it on a particular occasion has in mind. Consider a particular use of the proper name Aristotle, as in an utterance of the sentence “Aristotle is intelligent.” As Kripke has shown, it cannot be assumed that the referent of Aristotle in this utterance is whoever was the teacher of Alexander the Great, because someone other than Aristotle might have been Alexander’s teacher. But from this fact it does not follow that Aristotle and other proper names refer directly to their bearers. Whether the referent of Aristotle in this utterance is Aristotle the philosopher or someone else (say, the speaker’s son) depends on what (or whom) the speaker has in mind. And what the speaker has in mind, according to Searle, must be something like a Fregean meaning or sense.

At the start of the 21st century, there was still no resolution of the dispute between the Fregean and the Russellian accounts. Both had their advocates, and the debate continued with highly sophisticated arguments on both sides.

The theory of mind

In the theory of mind, the major debate concerned the question of which materialist theory of the human mind, if any, was the correct one. The main theories were identity theory (also called reductive materialism), functionalism, and eliminative materialism.

An early form of identity theory, known as “type-type” identity theory, held that each type of mental state, such as pain, is identical with a certain type of physical state of the human brain or central nervous system. This view encountered two main objections. First, it falsely implies that only human beings can have mental states. Second, it is inconsistent with the plausible intuition that it is possible for two human beings to be in the same mental state (such as the state of believing that the king of France is bald) and yet not be in the same neurophysiological state.

As a result of these and other objections, type-type identity theory was discarded in favour of what was called “token-token” identity theory. According to this view, particular instances or occurrences of mental states, such as the pain felt by a particular person at a particular time, are identical with particular physical states of the brain or central nervous system. Even this version of the theory, however, seemed to be inconsistent with the plausible intuition that felt sensation is not identical with neural activity.

The second major theory of the mind, functionalism, defines types of mental states in terms of their causal roles relative to sensory stimulation, other mental states, and physical states or behaviour. Pain, for example, might be defined as the type of neurophysiological state that is caused by things like cuts and burns and that causes mental states such as fear and “pain behaviour” such as saying “ouch.” Functionalism avoids the second objection against the type-type identity theory mentioned above—that it seems possible for two people to be in the same mental state but not in the same neurophysiological state—because it is not committed to the idea that the neurophysiological state that plays the causal role of pain must be the same in all people, or the same in people as in nonhuman creatures. This point was often expressed by saying that functional states exhibit “multiple realizability.”

Functionalism was inspired in part by the development of the computer, which was understood in terms of the distinction between hardware, or the physical machine, and software, or the function that the computer performs. It also was influenced by the earlier idea of a Turing machine, named after the English mathematician Alan Turing. A Turing machine is an abstract device that receives information as input and produces other information as output, the particular output depending on the input, the internal state of the machine, and a finite set of rules that associate input and machine-state with output. Turing defined intelligence functionally, in the sense that for him anything that possessed the ability to transform information from one form into another, as the Turing machine does, counted as intelligent to some degree. This understanding of intelligence was the basis of what came to be known as the Turing test, which proposed that, if a computer could answer questions posed by a remote human interrogator in such a way that the interrogator could not distinguish the computer’s answers from those of a human subject, then the computer could be said to be intelligent and to think. Following Turing, the philosopher Hilary Putnam held that the human brain is basically a sophisticated Turing machine, and his functionalism was accordingly called “Turing machine functionalism.” Turing machine functionalism became the basis of the later theory known as strong artificial intelligence (or strong AI), which asserts that the brain is a kind of computer and the mind a kind of computer program.
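A Turing machine of the kind described can be sketched in a few lines. The states, tape alphabet, and rule table below are illustrative inventions, not drawn from Turing's paper; this toy machine simply flips every bit on its tape and halts at the first blank.

```python
# A Turing machine as a rule table: (state, symbol read) maps to
# (new state, symbol to write, head movement).
RULES = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),   # "_" marks a blank cell
}

def run(tape):
    tape = list(tape) + ["_"]          # append a blank so the machine can halt
    state, head = "scan", 0
    while state != "halt":
        # Look up the rule for the current state and symbol, write the new
        # symbol at the head position, then move the head.
        state, tape[head], move = RULES[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")

print(run("0110"))  # -> 1001
```

The output depends only on the input, the machine's internal state, and the finite rule table, which is the functional characterization of the device that Putnam's "Turing machine functionalism" takes as a model of the mind.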


In the 1980s Searle mounted a challenge to strong AI. Searle’s objections were based on the observation that the operation of a computer program consists of the manipulation of certain symbols according to rules that refer only to the symbols’ formal or syntactic properties and not to their semantic ones. In his so-called “Chinese-room argument,” Searle attempted to show that there is more to thinking than this kind of rule-governed manipulation of symbols. The argument involves a situation in which a person who does not understand Chinese is locked in a room. He is handed written questions in Chinese, to which he must provide written Chinese answers. With the aid of a computer program or a rule book that matches questions in Chinese with appropriate Chinese answers, the person could simulate the behaviour of a person who understands Chinese. Thus, a Turing test would count such a person as understanding Chinese. But by hypothesis, he does not have that understanding. Hence, understanding Chinese does not consist merely in the ability to manipulate Chinese symbols. What the functionalist theory leaves out and cannot account for, according to Searle, are the semantic properties of the Chinese symbols, which are what the Chinese speaker understands. In a similar way, the Turing-functionalist definition of thinking as the manipulation of symbols according to syntactic rules is deficient because it leaves out the symbols’ semantic properties.
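The kind of rule-governed symbol manipulation Searle has in mind can be made concrete with a toy lookup table. The question–answer pairs below are invented for illustration: the program matches symbols purely by their form (exact string equality) and returns the paired symbols, with no access to what either string means.

```python
# Hypothetical "rule book": each Chinese question is paired with a canned answer.
rule_book = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天星期几？": "今天星期二。",    # "What day is it?" -> "It's Tuesday."
}

def chinese_room(question):
    # Purely syntactic: look up the input symbols, emit the paired output symbols.
    # The default reply is returned for any unmatched input.
    return rule_book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```

On Searle's view, no matter how large the table grows or how convincing its answers become, the program still traffics only in the syntactic properties the `rule_book` keys encode, never in their semantics—which is exactly what he argues the functionalist account leaves out.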

A more general objection to functionalism involves what is called the “inverted spectrum.” It is entirely conceivable, according to this objection, that two humans could possess inverted colour spectra without knowing it. The two may use the word red, for example, in exactly the same way, and yet the colour sensations they experience when they see red things may be different. Because the sensations of the two people play the same causal role for each, however, functionalism is committed to the claim that the sensations are the same. Counterexamples such as this were taken to show that similarity of function does not guarantee identity of subjective experience and, accordingly, that functionalism fails as an analysis of mental content. Putnam eventually agreed with these and other criticisms, and in the 1990s he abandoned the view he had created.

The most radical theory of the mind developed in this period is eliminative materialism. Introduced in the late 1980s and refined and modified throughout the 1990s, it contended that scientific theory does not require reference to the mental states posited in informal, or “folk,” psychology, such as thoughts, beliefs, desires, and intentions. The correct view of the human mind, according to eliminative materialism, is that there are no mental states in the folk-psychological sense and that the mind is nothing more or less than the brain. Furthermore, because there are no mental states, both the identity theory and functionalism are trying to do the impossible—i.e., to reduce nonexistent mental events to neural activity. Just as late 18th-century chemical theory did not try to reduce the fictional concept of phlogiston to molecular states but simply dispensed with any reference to it, so the entire mentalistic vocabulary of folk psychology can be eliminated in a sophisticated scientific theory of the mind. Such a theory will simply describe how the brain works.

Three main objections were posed against this view. The first was that it failed to explain how semantic properties such as meaning, truth, and reference could be elicited from, or instantiated in, neural activity. In brief, the objection was that it is simply a conceptual mistake to try to ascribe truth or falsity, or any semantic property, to brain processes, as eliminative materialism would seem to require. The second objection was that eliminative materialism denied the existence of certain things that virtually everyone accepts as real: namely, felt sensations (known as “qualia”). To deny that qualia exist is tantamount to saying that there are no such things as sounds, only air vibrating at various frequencies.

The third objection to eliminative materialism emphasized the fact that each person has access to his own mental experiences in a way that no other person has. Pains and visual images, as well as countless other kinds of thought, possess a kind of subjectivity that cannot be captured in a purely scientific account, because scientific descriptions concern only the objective properties of natural phenomena. There were many variants of this position. Among the philosophers who rejected reductivism on these or other grounds were Searle, Roderick Chisholm, Zeno Vendler, Thomas Nagel, Roger Penrose, Alastair Hannay, and J.R. Smythies.

That there are still divisions among analytic philosophers concerning the theory of reference and the theory of mind (though in much-altered form) shows both the continuity of the movement and the changes that have occurred. Although it is not possible to forecast the future trends in analytic philosophy in any detail, it seems likely that the two general approaches to the discipline established by Russell and Moore, formalism and informalism, will continue well into the 21st century.
