Monday, December 31, 2007

Nicholas Dykes' critique of Popper, linked by the poster Ian, while not as bad as most such Rand-inspired criticisms, does have its share of serious problems.

Dykes begins his critique by complaining about Popper's tone of assurance.

One thing which is quite certain is that Popper wrote with absolute assurance of his own rectitude, as I think the quotations in this paper reveal. For all his belittlement of knowledge and certainty, I have never read anyone who wrote so many books all imbued with such conscious certainty and authority— the authority of one who knows.

Dykes appears to be annoyed by the fact that Popper does not preface every remark with the phrase "I suppose" or "I conjecture." Of course, it would be very tedious to proceed in this way. Dykes also takes Popper to task for declaring "I am not a belief philosopher. I do not believe in belief" while at the same time refusing, in other places in his work, to stop using the phrase "I believe" to state his positions. Here Dykes shows himself deaf to the ambiguity of language. Popper's phrase "I do not believe in belief" plays on two senses of the word belief to make a point. In the first sense, Popper uses the term to describe certain belief; in the second, the sort of conjectural belief Popper supported. In other words, Popper is simply saying that he does not believe (in the conjectural sense of the term) in certainty.

Dykes unquestioningly accepts the Platonic view that equates knowledge with certainty, asserting that all denials of certainty are "self-contradictory," because "in the absence of certain knowledge one is either forced into a position involving some kind of unfounded conviction, belief or faith, or into scepticism." Says who? Dykes here makes use of an oft-repeated Objectivist fallacy: the either-or fallacy, where we are given the choice between the Objectivist position and several unappetizing alternatives. But who says that the only alternatives to certain knowledge are unjustified belief and skepticism? What about probable knowledge? What about degrees of reliability? If one knowledge claim can be regarded as superior to another, isn't that good enough for practical purposes?

Some of Dykes' criticisms demonstrate a lack of familiarity with Popper's ideas. He accuses Popper, for instance, of refusing "to have anything to do with definitions." This is an exaggeration. Popper accepted scientific, or nominalistic, definitions; he simply had no use for essentialist definitions — another matter entirely. I can find no evidence that Dykes understands this distinction, or has any idea why essentialist definitions, and the scholastic mythology that has grown up around them, deserve the criticism and scorn Popper directed at them.

Dykes' essay particularly flounders when he attempts to explode Popper's critical rationalism by associating it with the views of Kant and Hume. He repeats the Objectivist canard that Hume's "whole argument" is in conflict with the Law of Identity. Alas, this misses the point entirely. Hume's argument is not directed at the Law of Identity, but at our knowledge of specific identities. How do we know which attributes of an object are constant and which are not? This is not a question which the Law of Identity can answer. Reminding ourselves that objects have identities is nothing to the purpose if we don't know, or can't be sure, what those identities are.

Dykes is equally clueless when it comes to Kant. Because Popper believed that all observations are "theory-impregnated," Dykes assumes that Popper regards all theories as prior to experience, and even wonders whether Popper is "asking us to accept that the heliocentric theory came before observation of perturbations in planetary orbits?" Again, however, Dykes has missed the point. When Popper asserts that observations are "theory-impregnated" he is merely noting how hopeless it would be for an empty mind, bereft of any presuppositions or "theories," to make sense of observations. As Kant put it, "Thoughts without content are empty, intuitions [i.e., sense experiences] without concepts are blind." Regardless of whatever confusions and pedantries Kant may have stumbled into trying to elucidate this paradox in the Critique of Pure Reason, the principle itself constitutes one of the seminal insights of epistemology, easily corroborated by common experience and scientific investigation. I wonder whether Dykes has any familiarity with it, let alone understanding of it. Or does he believe that facts can be understood without any prior theories at all? Hardly a plausible position, if so. After all, how could a blank mind ever make heads or tails of the bewildering complexity of sense experience without at least some prior heuristic to guide it? It can't. So some measure of theory does appear to precede fact. How, then, on a realist framework, are we to account for this? Popper at least deserves credit for taking this issue seriously and trying to provide a non-idealist, non-Kantian solution to it. Since Dykes does not even appear to grasp that this is what Popper is attempting to do, his criticism is worthless. One cannot effectively criticize what one doesn't understand.

And that seems to be the overriding problem of Dykes' critique. His understanding of the problems Popper attempts to solve is minimal, at best. His attachment to Randian categories of interpretation has rendered him hopelessly naive in the face of problems originally posed by Hume and Kant and later elucidated by Peirce, Santayana, Popper and Polanyi. His outlook is still trapped in the scholasticism of Plato and Aristotle.

Saturday, December 29, 2007

My copy of The Ayn Rand Bookstore 2008 Catalog arrived the other day. The ARB is owned by the ARI, so you can be sure that you are getting your Objectivism straight-up. The catalog is 74 pages and well produced. It contains lectures, books, coffee mugs, t-shirts and just about everything else needed to make you a passionate valuer of all things Randian.

What is most striking about the catalog is how prominently Leonard Peikoff is featured. On page 2, right after "Who was Ayn Rand?", there is "Who is Leonard Peikoff?" He is, of course, "the preeminent authority on Objectivism." In fact, Peikoff's works come before Rand's. ARB even sells a documentary on Peikoff. "The life of Leonard Peikoff is a heroic one. From his early years as a precocious student tortured by the dichotomy of the 'moral' vs. the 'practical' . . . to his . . . already-classic books . . . ."

The catalog also contains the odd disclaimer that "the inclusion of Leonard Peikoff's materials . . . does not imply that he agrees with the content of other items herein." No such disclaimer is given for associates of Ayn Rand such as Harry Binswanger or Allan Gotthelf. I guess Peikoff doesn't call himself Rand's "intellectual heir" for nothing.

You can purchase lectures by Peikoff on subjects big and small, from his "Induction in Philosophy and Physics," where he solves the problem of induction (thus completing "in every essential respect, the validation of reason"), to "Poems I Like-and Why." This doesn't come cheap: $205.00 for the former and $47.00 for the latter (plus $27.00 shipping). And why is it that none of the material in the catalog is available for download to your MP3 player? Wouldn't downloads be cheaper for the ARB to produce (no need to make CDs) and save customers the rather hefty shipping costs?

The ARB offers several courses and lectures by David Harriman, ARI's resident expert on physics and philosophical issues related thereto. Readers of the ARCHNblog won't be surprised to learn that modern physics has been "corrupted" by Kant. Space is even a "chimera" (why not an anti-concept?) and we should return to "the relational view held by Aristotle."

There are many lectures that would be of interest to anyone critical of, or sympathetic to, Objectivism. If you want to know the Objectivist take on numerous topics not addressed by Rand, there is a dearth of published sources. I'd be willing to pony up some of my hard-earned cash to learn what Objectivists think of Karl Popper, or how the Objectivist theory of concepts differs from other theories, but these lectures are just a bit too expensive. And given the bluster that Official Objectivists often direct toward non-Objectivists, I expect to be disappointed.

There's truly something for everyone in the catalog. Psychologists Edwin Locke and Ellen Kenner even offer a course on sex containing role-playing dialogues "suitable for . . . same-sex couples." One wonders what Rand would have thought. - Neil Parille

Tuesday, December 25, 2007

Intuition and tacit knowledge, part 2. In the last Cognitive Revolution post, I introduced Oakeshott's two forms of knowledge, the technical and the practical. Technical knowledge is consciously formalized rationalistic knowledge, while practical knowledge is largely unconscious, inarticulable, and not easily amenable to "reason." Oakeshott's theory about practical knowledge, if true, creates problems for Rand, whose foundationalism requires that all knowledge claims be explicitly justified in terms of articulable, reason-based arguments. Now we shall examine the evidence that can be brought forth to defend Oakeshott's claim. The evidence can be divided into two parts: (1) Evidence from everyday life; (2) Evidence from cognitive science.

(1) Evidence from everyday life. Such evidence is abundant and overwhelming—assuming one has enough wit to notice it. Oakeshott himself gives the example of the master chef and the cookbook. Let us assume that a master chef has made recipes of all his best dishes and put them in a cookbook. Would it be possible to duplicate the results of the master chef simply by following the recipes in the cookbook? No, of course not. If that were true, anyone could become a master chef merely by following the recipes of a master chef. It is widely recognized that in cookery, as in other complex endeavors, expertise is necessary. What qualifies one as an expert? Beyond simple competence and intelligence, what is required is long experience within the domain in question.

Another compelling example of practical knowledge is grammar. Consider what the sociologist Vilfredo Pareto has to contribute on the subject:

It would be absurd to claim that the theory of grammar preceded the practice of speech. It certainly followed, and human beings have created the most subtle grammatical structures without any [conscious, technical] knowledge of it. Take the Greek language as an example. We cannot imagine that the Greeks one day got together and decided what their system of conjugation was to be. Usage alone made such a masterpiece of the Greek verb. In Attic Greek there is the augment, which is the sign of the past in historical tenses; and, for a very subtle nuance, beside the syllabic augment, there is the temporal augment, which consists in a lengthening of the initial vowel. The conception of aorist, and its functions in syntax, are inventions that would do credit to the most expert logician. The large number of verbal forms and the exactness of their functions in syntax constitute a marvelous whole. [Mind and Society, §158]

Grammatical knowledge is interesting precisely because it is developed in young children who could never have grasped it explicitly, through the means of articulated, formalized rules. The practical knowledge of grammar thus precedes the technical knowledge of this domain, which flies in the face of Rand's view of automatized knowledge, with its orientation toward the primacy of technical knowledge.

(2) Evidence from Cognitive Science. But if the evidence from common experience is not sufficient to convince the incorrigible skeptic-rationalist, then we have evidence from cognitive science as well. Robin Hogarth has a very interesting article on intuition which draws heavily on cogsci research. Hogarth distinguishes two modes of judgment, the tacit (or intuitive) and the deliberative (analytical). These two modes are analogous, though not identical, to Oakeshott's practical and technical knowledge. (The main difference is that Hogarth's tacit knowledge is a wider conception than Oakeshott's practical knowledge—i.e., practical knowledge is a kind of tacit knowledge.)

Oakeshott claims that "practical knowledge can neither be taught nor learned, but only imparted and acquired." This suggests that practical knowledge is learned without us realizing how we learn it, that it is learned unconsciously. Hogarth's tacit knowledge is learned (in part) in a similar manner (i.e., unconsciously). According to Hogarth, "information about stimuli are recorded without conscious awareness and stored for possible future use. This very basic process is at the heart of tacit learning and the accumulation of facts and frequencies that has now been so well documented in the literature. It requires neither effort nor intention and, yet, the information stored can be subsequently recalled when needed, even for tasks we would have never imagined."

Oakeshott also insists that practical and technical knowledge work hand in hand. Hogarth buttresses this view by noting that "it is important to emphasize that in many cases the deliberate response will be influenced by the tacit response and all that precedes it. In other words, people will always have some tacit response (however minimal) to any triggering event."

Oakeshott also emphasizes the importance of "connoisseurship" and mastery through experience—in other words, what cognitive scientists call "expertise." Here's what Hogarth has to contribute about expertise:

There have been many studies of expertise. From our perspective, there are several key findings. First, expertise is limited to domains and is only acquired through exposure to and activity within specific domains. Thus, because someone is an expert in one domain (e.g., chess) does not mean that she will be an expert in another domain (e.g., medicine) unless she has also had considerable experience in the latter. Second, outstanding performance in any domain takes years of dedication. Moreover, high performers have typically followed demanding regimes of deliberate practice and benefited from good teachers. Third, there is – curiously – less relation between expertise and predictive ability in many areas of activity than one might imagine. However, this finding may be more indicative of the nature of the uncertain environments in which experts operate than due to a lack of “expertise” per se. Consider, for example, the random nature of movements of the stock market. Fourth, experts and novices process information in different ways. Experts acquire habits that allow them to process more information than novices. They learn, for example, how to counteract limitations in short-term memory and to “chunk” information more effectively. They also use different problem-solving strategies. Novices tend first to identify a specific goal and then work backward through the details to determine a solution – making much use of deliberate thinking. Experts, on the other hand, rely more on tacit processes. They tend first to take in the details of the problems they face, and then determine (by recognizing patterns and similarities) a general framework that allows them to explore possible solutions. Not surprisingly, experts solve problems faster than novices.

This pretty much matches Oakeshott's view, particularly the assertion that "experts acquire habits that allow them to process more information than novices." Such habits are precisely what Oakeshott is referring to when he talks about practical knowledge.

Hogarth has another interesting comment to make on tacit knowledge which could help explain its importance in human cognition. He notes that

Attention in consciousness is limited and therefore costly. In the framework, I assume a scarce resource principle. The key idea is that because the deliberate system consumes limited resources, it is used sparingly. It is allocated to tasks that are deemed important at given moments but can be switched to other tasks as the situation demands. It is rarely “shut down” completely and often has a monitoring function. In most cases, the tacit system is our “default” and the deliberate system is invoked when either the tacit system cannot solve the problem at hand or the organism is making some conscious decision (for example, planning what to do).

I would also note, in conclusion, what we have discovered in previous Cognitive Revolution posts: namely, that "unconscious thought is 95 percent of all thought" and that "nearly all important thinking takes place outside of consciousness and is not available on introspection." Hogarth's scarce resource principle helps explain this fact about the dominance of the unconscious in thought. Yet there is another principle which can be added to this that is part of my own theory. It is well known (even Rand admits) that conscious attention is limited: only so much can be held in consciousness at any given moment. The practical effect of this is that consciousness is not very well equipped to handle complexity. Since it is very difficult to hold in memory all the factors in a complex situation, the attempt to understand complexity through deliberate conscious reasoning easily breaks down. Hence I conjecture that unconscious thinking is required for human beings to grasp complexity and nuance. Since the unconscious has access to everything in the brain, including all memories, whether received consciously or not, it can form intuitive inferences that are cognitively far more powerful than anything the conscious mind alone can achieve.

Any philosophy which, like Rand's, glorifies and accentuates conscious deliberate thinking at the expense of originative unconscious intuition vastly underestimates the cognitive capabilities of man's mind.

Sunday, December 23, 2007

"The charming aspect of Christmas is the fact that it expresses good will in a cheerful, happy, benevolent, non-sacrificial way. One says: 'Merry Christmas'--not 'Weep and Repent.' And the good will is expressed in a material, earthly form--by giving presents to one's friends, or by sending them cards in token of remembrance....[T]he street decorations put up by department stores and other institutions--the Christmas trees, the winking lights, the glittering colors--provide the city with a spectacular display, which only 'commercial greed' could afford to give us. One would have to be terribly depressed to resist the wonderful gaiety of that spectacle." - Ayn Rand

And for once, how could we at the ARCHNblog disagree? I'll be taking a break for a while; Greg may post as the seasonal spirit moves him. Otherwise, a Merry Christmas to all and we'll see you in the New Year.

The Ayn Rand Institute's David Harriman promises that his "Fundamentals of Physical Science," a 120-hour DVD course from the Van Damme Academy, will leave you understanding said fundamentals "better than most scientists." Yet nowhere in the lengthy, hypey blurb does Harriman mention quantum physics or, even more incredibly, Albert Einstein. It is well known that the Objectivist orthodoxy considers Einstein and 20th-century physics philosophically "corrupt" - and perhaps this explains why the blurb mentions Newton repeatedly, and not much more (other than that well-known scientific genius "BUY NOW"). Yet despite their famous incompatibilities, Einstein's relativity and quantum physics are two of the best empirically tested theories in existence. The ARI's continuing denigration of them marks the Objectivist movement's descent into pseudoscience.

We at the ARCHNblog admit we have not forked out the US$1195 for a program that Van Damme proudly admits is "not professional grade" (one lecture is so inaudible a transcript comes with it) by people who deny two of the greatest scientific achievements of the past century, so we cannot fully attest to its content. However, if you want to get basic Newtonian physics, you might want to take a look at MIT's famous Walter Lewin video course - it's fun, and it's free.

Friday, December 21, 2007

In comments on the previous thread, Ellen Stuttle very kindly provides us with a lengthy extract from Rand's "Art And Cognition."

"A composition may demand the active alertness needed to resolve complex mathematical relationships -- or it may deaden the brain by means of monotonous simplicity. It may demand a process of building an integrated sum -- or it may break up the process of integration into an arbitrary series of random bits -- or it may obliterate the process by a jumble of sounds mathematically-physiologically impossible to integrate, and thus turn into noise.

[....]

"The deadly monotony of primitive music -- the endless repetition of a few notes and of a rhythmic pattern that beats against the brain with the regularity of the ancient torture of water drops falling on a man's skull -- paralyzes cognitive processes, obliterates awareness and disintegrates the mind. Such music produces a state of sensory deprivation, which -- as modern scientists are beginning to discover -- is caused by the absence or the monotony of sense stimuli.

There is no evidence to support the contention that the differences in the music of various cultures are caused by innate physiological differences among various races. There is a great deal of evidence to support the hypothesis that the cause of the musical differences is psycho-epistemological (and, therefore, ultimately philosophical).

A man's psycho-epistemological method of functioning is developed and automatized in his early childhood; it is influenced by the dominant philosophy of the culture in which he grows up. If, explicitly and implicitly (through the general emotional attitude), a child grasps that the pursuit of knowledge, i.e., the independent work of his cognitive faculty, is important and required of him by his nature, he is likely to develop an active, independent mind. If he is taught passivity, blind obedience, fear and the futility of questioning or knowing, he is likely to grow up as a mentally helpless savage, whether in the jungle or in New York City. But -- since one cannot destroy a human mind totally, so long as its possessor remains alive -- his brain's frustrated needs become a restless, incoherent, unintelligible groping that frightens him. Primitive music becomes his narcotic: it wipes out the groping, it reassures him and reinforces his lethargy, it offers him temporarily the sense of a reality to which his stagnant stupor is appropriate."

[skipping paragraph, though my fingers itch to include it, about the supposed motivation for the development of the diatonic scale and the connection to the Renaissance - Ellen]

"Today, when the influence of Western civilization [is breaking up tradition-bound Japanese culture, Japanese composers are writing good Western-style music].

The products of America's anti-rational, anti-cognitive "Progressive" education, the hippies, are reverting to the noise and the drumbeat of the jungle.

Integration is the key to more than music; it is the key to man's consciousness, to his conceptual faculty, to his basic premises, to his life. And lack of integration will lead to the same existential results in anyone born with a potentially human mind, in any century, in any place on earth.

A brief word about so-called modern music: no further research or scientific discoveries are required to know with full, objective certainty that it is not music. The proof lies in the fact that music is the product of periodic vibrations -- and, therefore, the introduction of nonperiodic vibrations (such as the sounds of street traffic or of machine gears or of coughs and sneezes), i.e., of noise, into an allegedly musical composition eliminates it automatically from the realm of art and of consideration. But a word of warning in regard to the vocabulary of the perpetrators of such "innovations" is in order: they spout a great deal about the necessity of "conditioning" your ear to an appreciation of their "music." Their notion of conditioning is unlimited by reality and by the law of identity; man, in their view, is infinitely conditionable. But, in fact, you can condition a human ear to different types of music (it is not the ear, but the mind that you have to condition in such cases); you cannot condition it to hear noise as if it were music; it is not personal training or social conventions that make it impossible, but the physiological nature, the identity, of the human ear and brain."

Thursday, December 20, 2007

In the "Concepts In a Hat" thread, commenter Jonathan notes that Rand is talking through her hat in some of her pronouncements on aesthetic issues too:

Jonathan: Objectivist aesthetic concepts "in a hat":

Concept #1: Rand: "As a re-creation of reality, a work of art has to be representational; its freedom of stylization is limited by the requirement of intelligibility; if it does not present an intelligible subject, it ceases to be art."

Concept #2: Rand: Music is art even though it "cannot tell a story, it cannot deal with concretes, it cannot convey a specific existential phenomenon, such as a peaceful countryside or a stormy sea... even concepts which, intellectually, belong to a complex level of abstraction, such as 'peace,' 'revolution,' 'religion,' are too specific, too concrete to be expressed in music."

---

Concept #1: Rand: "...an objective evaluation requires that one identify the artist's theme, the abstract meaning of his work (exclusively by identifying the evidence contained in the work and allowing no other, outside considerations), then evaluate the means by which he conveys it—i.e., taking his theme as criterion...until a conceptual vocabulary is discovered and defined, no objectively valid criterion of esthetic judgment is possible in the field of music..."

Concept #2: Music is a legitimate art form, but abstract visual art is not.

---

Concept #1: Rand: "Art is a selective re-creation of reality according to an artist's metaphysical value-judgments...it serves no practical, material end, but is an end in itself; it serves no purpose other than contemplation... utilitarian objects cannot be classified as works of art."

Concept #2: Rand: Architecture is an art form that "combines art with a utilitarian purpose and does not re-create reality."

Sunday, December 16, 2007

No, not a droll putdown of Rand's epistemological disarray, but apparently an Objectivist party game - I suppose the New Intellectual equivalent of "Six Degrees of Kevin Bacon". But of course this being Objectivism, it could never merely be just a game; it would have to aim at "developing a deeper understanding of Objectivist ideas" too, right?

The idea is to put 25 or more Objectivist concepts into a hat, then draw any two at random and explain the connection. Just as many a true word is said in jest, this simple game really does unwittingly give us a greater understanding of the Objectivist method in action, i.e. a series of purely speculative rationalisations, the more vaguely and speciously connected, the better. Here is a perfect example, apparently cited from one of Peikoff's tape lectures:

"...Peikoff says that when he started playing the game many years ago, he was stumped when he was asked to explain the connection between "private roads" and "the validity of the senses." But eventually he made the following connection: Private roads are an expression of the fact that the purpose of government is only the protection of individual rights. Individual rights are defended on the basis that man is a rational being who survives by the exercise of his mind, so he has to be left free. Finally, the validity of man's mind depends on the validity of the senses because the mind is the conceptualization of sensory data. Therefore you can see from this rough analysis that it is useless to argue for private roads with someone who rejects the validity of the senses."

Saturday, December 15, 2007

Intuition and tacit knowledge, part 1. In the next series of Cognitive Revolution posts, I am going to discuss what I regard as Rand's most critical shortcoming: namely, her inadequate theory of intuition—a theory so inadequate that it is almost an attack on intuition. Because this subject is so very complicated, I am going to start by introducing a theory of knowledge proposed by the philosopher Michael Oakeshott. Now it is important to note that Oakeshott is not a part of the Cognitive Revolution. But since his theory provides clarification of some of the key issues at stake, I will use it as a jumping-off point for my discussion of this very important issue.

Oakeshott proposes that there are two types of knowledge involved in every science, art, or practical activity: (1) technical knowledge and (2) practical knowledge. Technical knowledge is "susceptible of precise formulation" in terms of "formulated rules," such as the "technique of cookery" contained in a cookbook. Practical knowledge is called practical "because it exists only in use, is not reflective and (unlike technique) cannot be formulated in rules. This does not mean, however, that it is an esoteric sort of knowledge. It means only that the method by which it may be shared and becomes common knowledge is not the method of formulated doctrine. And if we consider it from this point of view, it would not, I think, be misleading to speak of it as traditional knowledge. In every activity this sort of knowledge is also involved; the mastery of any skill, the pursuit of any concrete activity is impossible without it."

Oakeshott continues:

These two sorts of knowledge [i.e., the technical and the practical], then, distinguishable but inseparable, are the twin components of knowledge involved in every concrete human activity. In a practical art, such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists... And what is true of cookery ... is no less true of politics: the knowledge involved in political activity is both technical and practical. Nor ... is it correct to say that whereas technique will tell a man what to do, it is practice which tells him how to do it... Even in the what ... there lies already this dualism of technique and practice: there is no knowledge which is not "know how" ...

Technical knowledge, we have seen, is susceptible of formulation in rules, principles, directions, maxims—comprehensively, in propositions.... And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of this kind. Its normal expression is in a customary or traditional way of doing things, or simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is, indeed, a knowledge that is expressed in taste or connoisseurship....

Technical knowledge can be learned from a book: it can be learned in a correspondence course. Moreover, much of it can be learned by heart, repeated by rote, and applied mechanically: the logic of the syllogism is a technique of this kind. Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master—not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practising it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as a knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Oakeshott's theory is important for my own critique of Objectivism, because much of what I say against Rand assumes the existence of a tacit, intuitive dimension that is very similar to Oakeshott's practical knowledge. Keeping this in mind, consider how Oakeshott uses his distinction of technical and practical knowledge to criticize what he calls "rationalism":

Now as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. The sovereignty of 'reason,' for the Rationalist, means the sovereignty of technique.

The heart of the matter is the pre-occupation of the Rationalist with certainty. Technique and certainty are, for him, inseparably joined because certain knowledge is, for him, knowledge which does not require to look beyond itself for its certainty; knowledge, that is, which not only ends with certainty but begins with certainty and is certain throughout. And this is precisely what technical knowledge appears to be. [Rationalism in Politics and Other Essays, 12-16]

Now when Oakeshott talks about Rationalism, he doesn't mean only philosophers like Descartes and Spinoza. He also would include under the term any philosopher who emphasizes conscious, formalized "reason" at the expense of tacit, intuitive, "practical" knowledge. Rand would be a rationalist in Oakeshott's sense of the word, although she is not of the extreme type criticized in his essay. Rand would not necessarily have denied the existence of practical knowledge. She might have, instead, insisted that practical knowledge, in order to have any "validity" or usefulness, must be based on technical knowledge. Or, in other words, from an Objectivist point of view, practical knowledge, if it exists at all, is merely technical knowledge that has been "automatized." Even with this amendment, the Randian view is not compatible with Oakeshott's view, because Oakeshott insists that "practical knowledge can neither be taught nor learned," and this pretty much rules out the notion that practical knowledge can simply be automatized technical knowledge. Also note Oakeshott's equation of technical knowledge with the "appearance" of certainty. This, perhaps, explains Rand's prejudice against tacit, intuitive, or practical knowledge and her desire to, in effect, make all knowledge reducible, in the final analysis, to technical knowledge. Rand needed certainty because of her foundationalism, which is so critical to the eschatological components of her system. For Rand, the (secular) salvation of Western Civilization depends on establishing the foundation of reason and man's "conceptual faculty." If technical knowledge is not supreme (or "primary"), this foundationalist salvation is impossible and unnecessary.

Now the question arises: which view, Rand's or Oakeshott's, comes closer to the truth? This question will be the focus of my next post.

Thursday, December 13, 2007

Over at Mises.org we find Patrick M. O'Neil's detailed breakdown of Objectivist ethics and of exactly how, despite Rand's claims to the contrary, it fails to overcome Hume's problem. The result is what he calls the "essential subjectivity of Objectivism":

"...It is at this stage in her argumentation, at the very point of seeming triumph over subjectivity, that Rand loses the battle. She takes aim at the Humean disjuncture of the prescriptive from the descriptive and fatally wounds the pretensions to objectivity of her own systematic ethics: "The fact that a living entity is, determines what it ought to do. So much for the issue of the relation between 'is' and 'ought'" ("The Objectivist Ethics"). In these two sentences, Rand reveals a serious misconception of the nature of the Humean is-ought gap and introduces a dangerous potential for self-contradiction into her own ethical system.

In his work A Treatise of Human Nature, the Scottish philosopher David Hume challenged the basis of all objective systems of morality:

"I cannot forbear adding to these reasonings an observation, which may, perhaps, be found of some importance. In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it."

Clearly, Rand thinks that Hume denied a connection between facts and systems of morality. This error is not uncommon, and forms the basis for A. C. MacIntyre's revisionist reinterpretation of Hume's famous paragraph. Claiming that Hume could not have meant to establish a total divorcement of facts from values because Hume uses facts in his own ethical system, MacIntyre interprets Hume to be assaulting only theologically-based morality. In fact, there is nothing in the Humean formulation of the problem which hinders the simple integration of facts and values. The sole difficulty arises over the derivability of values from facts...In conclusion, then, Ayn Rand's system of Objectivist ethics does not provide the basis for a solution to the Humean dilemma of the is/ought gap; nor have attempts by a new generation of natural law ethicians to rework her system succeeded in subduing that central ethical difficulty. Since no ethical system has been demonstrated to have solved the is-ought problem, it may be thought a minor flaw in Rand. It is her specific claim to have overcome this difficulty that magnifies its importance in regard to her system."

Tuesday, December 11, 2007

Here at the ARCHNblog we welcome another entrant to the small but merrily growing band of sites critical of Objectivism. "Meg's Marginalia" is the product of one of our regular commenters, and she promises to provide both a chick-perspective to Rand critique and a freewheeling "commentary on Rand, her works, her life, other people's works about Rand, commentary about Rand-fans and Rand-haters, from internet sources, print sources, meta sources, anything." Meg's even threatening her own loyalty oath...we dread to think. Anyway, stop by and say hi.

Monday, December 10, 2007

In 1967, Ayn Rand’s philosophy of Objectivism became complete. In that year, Rand published her collection of essays entitled Introduction to Objectivist Epistemology (it was published by Mentor with Leonard Peikoff’s essay in 1979). By that time, she had written The Fountainhead, Atlas Shrugged and published her important articles “The Objectivist Ethics” and “The Nature of Government.”

Considering the revolutionary nature of Objectivism and the pure evil and evident absurdity of non-Objectivist thought, one might assume that Objectivists would rush into print with defenses and elaborations of Objectivism. Best I can tell, the number of books actually advancing Objectivism is quite small. (I exclude books written by non-ARI Objectivists).

Even if I’ve forgotten a book or two, this is hardly an impressive list. Granted there is a fair amount of literature produced by Objectivists, but much of it is general discussions of Rand or material unrelated to Objectivism per se. I would put in this list Allan Gotthelf’s 2000 book On Ayn Rand (a 100 page synopsis of Rand’s thought) and Andrew Bernstein’s The Capitalist Manifesto, a defense of capitalism. One prolific Objectivist is Robert Mayhew, who has edited collections about We the Living, Anthem, The Fountainhead, Rand’s “marginalia,” Rand’s answers to questions posed at lectures or interviews, and a book on Rand’s testimony before the House Committee on Un-American Activities on the movie “Song of Russia” (Ayn Rand and Song of Russia).

While Objectivists are short on writing books, they are long on producing taped lectures. Quite often one will hear Objectivists recommend Leonard Peikoff’s tape courses, such as Objectivism Through Induction, to those who raise issues about Objectivism. I haven’t listened to this course, but it’s unreasonable to expect critics to spend $270.00 to purchase the CDs. (One can purchase slickly produced courses from The Teaching Company for much less.) If this course is so great, why doesn’t Peikoff publish transcripts of it?

For years we have heard that Peikoff will be publishing a book on his DIM Hypothesis, David Harriman a book on physics applying Peikoff’s theory of induction, and Harry Binswanger on consciousness. If these books see the light of day and are reasonably priced, I will be among the first purchasers.

Thursday, December 06, 2007

Emotion and reason. The so-called "high-reason" view of thinking, which insists that reason must be free of emotion, has not weathered very well in recent years. A new view of rationality, which includes emotion as part of reason, has been presented by the neuroscientist Antonio Damasio, also quoted in my last post on the same issue. For purposes of clarification, the following quotes from Damasio are helpful:

Let us begin by considering a situation which calls for a choice. Imagine yourself as the owner of a large business, faced with the prospect of meeting or not with a possible client who can bring valuable business but also happens to be the archenemy of your best friend, and proceeding or not with a particular deal.... Even in this caricature you will recognize the sort of quandary we face most every day. How do you resolve the impasse? How do you sort out the questions inherent in the images before your mind's eye?

There are at least two distinct possibilities: the first is drawn from a traditional "high-reason" view of decision making; the second from the "somatic-marker hypothesis."

The "high-reason" view, which is none other than the common-sense view, assumes that when we are at our decision-making best, we are the pride and joy of Plato, Descartes and Kant. Formal logic will, by itself, get us to the best available solution for any problem. An important aspect of the rationalist conception is that to obtain the best results, emotions must be kept out. Rational processing must be unencumbered by passion.

Basically, in the high-reason view, you take the different scenarios apart and, to use current managerial parlance, you perform a cost/benefit analysis of each of them. Keeping in mind "subjective expected utility," which is the thing you want to maximize, you infer logically what is good and what is bad.... You are, in fact, faced with a complex calculation ... and burdened with the need to compare results of a different nature which somehow must be translated into a common currency for the comparison to make any sense at all. A substantial part of this calculation will depend on the continued generation of yet more imaginary scenarios, built on visual and auditory patterns, among others, and also on the continued generation of verbal narratives which accompany those scenarios, and which are essential to keep the process of logical inference going.

Now let me submit that if this strategy is the only one you have available, rationality, as described above, is not going to work. At best, your decision will take an inordinately long time, far more than acceptable if you are to get anything else done that day. At worst, you may not even end up with a decision at all because you will get lost in the byways of your calculation. Why? Because it will not be easy to hold in memory the many ledgers of losses and gains that you need to consult for comparisons. The representations of intermediate steps, which you have put on hold and now need to inspect in order to translate them in whatever symbolic form required to proceed with your logical inferences, are simply going to vanish from the memory slate. You will lose track. Attention and working memory have a limited capacity. In the end, if purely rational calculation is how your mind normally operates, you might choose incorrectly and live to regret the error, or simply give up trying, in frustration.

What the experience with [brain-damaged] patients such as Elliot suggests is that the cool strategy advocated by Kant, among others, has far more to do with the way patients with prefrontal damage go about deciding than with how normals usually operate....

It is also important to note that the flaws of the common-sense view are not confined to the issue of limited memory capacity. Even with paper and pencil to hold the necessary knowledge in place, the reasoning strategies themselves are fraught with weaknesses, as Amos Tversky and Daniel Kahneman have demonstrated.... Nonetheless, our brains can often decide well, in seconds, or minutes, depending on the time frame we set as appropriate for the goal we want to achieve, and if they can do so, they must do the marvelous job with more than just pure reason. An alternative view is needed.

Damasio's alternative view is his "somatic-marker hypothesis," where emotions figure as warning signals to help guide reasoning and make it more efficient:

There is still room for using a cost/benefit analysis and proper deductive competence, but only after the automated step [provided by the somatic marker] drastically reduces the number of options [explains Damasio]. Somatic markers may not be sufficient for normal human decision-making since a subsequent process of reasoning and final selection will still take place in many though not all instances. Somatic markers probably increase the accuracy and efficiency of the decision process. Their absence reduces [its efficiency]....

The work of Amos Tversky and Daniel Kahneman demonstrates that the objective reasoning we employ in day-to-day decisions is far less effective than it seems and than it ought to be. To put it simply, our reasoning strategies are defective and Stuart Sutherland strikes an important chord when he talks about irrationality as an "enemy within." But even if our reasoning strategies were perfectly tuned, it appears, they would not cope well with the uncertainty and complexity of personal and social problems. The fragile instruments of rationality need special assistance...

But while biological drives and emotion may give rise to irrationality in some circumstances, they are indispensable in others. Biological drives and the automated somatic-marker mechanism that relies on them are essential for some rational behaviors, especially in the personal and social domains... [Descartes' Error, p. 170-200]

Wednesday, December 05, 2007

"As far as I'm concerned, the problem of universals is not really a problem at all. It is simply one of those manufactured problems that philosophers have created to give themselves something to cavil about." - Greg Nyquist, ARCHN

"Among the things that can drive a thinker to despair is the knowledge that the illogical is necessary for man and that much good comes of it. It is so firmly lodged in the passions, in speech, in art, in religion, and generally in everything which endows life with value, that one cannot extricate it without doing irreparable harm to these beautiful things. Only the very naive are capable of thinking that the nature of man can be transformed into a purely logical one..." - F. Nietzsche, p34, "Human, All Too Human"

Monday, December 03, 2007

In the first part of this post, I discussed Rand’s style of argumentation as found in Introduction to Objectivist Epistemology. As I pointed out, Rand often defends her position using as a background the supposedly failed views of other philosophers. She takes much the same approach in “The Objectivist Ethics.”

Rand quickly disposes with the entire history of ethical thought. “In the sorry record of the history of mankind’s ethics—with few rare, and unsuccessful, exceptions—moralists have regarded ethics as the province of whims, that is: of the irrational.” Rand does not provide us with the names of those “rare” philosophers who consider ethics to be based on something other than whims. In any event, her claim is certainly exaggerated.

First, as Huemer notes, it is inaccurate to say that Plato, Aristotle, Epicurus, Epictetus, Aquinas, Butler, Kant, Bentham, Mill, Bradley and Moore regarded ethics as the province of whims and the irrational. And, even if unsuccessful, they are not the few.

Second, there is an entire tradition of natural law ethics which seeks to derive universal ethical principles from objective reality. Aristotle was called the “father of natural law.” Heinrich Rommen writes that, for Aristotle, “The supreme norm of morality is accordingly this: Realize your essential form, your nature. The natural is the ethical, and the essence is unchangeable.” (Rommen, The Natural Law, p. 15.) Thomas Aquinas, among others, passed this tradition to the West via his synthesis of Aristotelian and Christian thought.

Natural law theories were prominent in the Enlightenment. As Lord Kames, an important thinker in the Scottish Enlightenment, wrote, “A lion has claws, because nature made him an animal of prey. A man has fingers, because he is a social animal to procure food by art not by force. It is thus we discover for what end we were designed by nature, or the Author. And the same chain of reasoning points out to us the laws by which we ought to regulate our actions: for acting according to our nature, is acting so as to answer the end of our creation.” (Henry Home (Lord Kames), Essays on the Principles of Morality and Natural Religion. pp. 25-26.)

Natural law ethics wasn’t dead by Rand’s time either. One example is philosopher Henry Veatch who published a defense of Aristotelian ethics in his 1962 book Rational Man. (Rand dismisses Aristotle with the debatable claim that he based his ethics on observations of what wise and noble men did, without asking why they did it.)

Since natural law ethics has commonalities with Rand’s ethics (and in many ways hers seems to be a version of it), her readers would certainly benefit from a discussion of why these theories are unsuccessful.

Rand’s first failed school is the “mystics,” who allegedly hold the “arbitrary, unaccountable ‘will of God’ as the standard of the good and as the validation of their ethics.” No mystic is mentioned, but I assume that these are conventional religious thinkers. Even so, the description isn’t apt. Most religious philosophers would probably disagree with the claim that they consider God’s commands “arbitrary.” The Ten Commandments, for example, contain a mix of religious injunctions (e.g., have no other gods) and practical commands (e.g., don’t steal). Religious thinkers often adopt a natural law ethic, arguing that God created human beings with a certain nature. (See the above quote from Lord Kames.)

Rand next turns to the “neomystics.” These philosophers attempted to “break the traditional monopoly of mysticism in the field of ethics . . . . But their attempts consisted of accepting the ethical doctrines of the mystics and of trying to justify them on social grounds, merely substituting society for God." Particularly problematic is Rand’s claim that apparently all neomystics are advocates of the unlimited state. Her statement is so sweeping that it should be quoted in detail:

“This meant, in logic — and, today, in worldwide practice — that society stands above any principle of ethics, since it is the source, standard and criterion of ethics, since ‘the good’ is whatever it wills, whatever it happens to assert as its own welfare and pleasure. This meant that ‘society’ may do anything it pleases, since ‘the good’ is whatever it chooses to do because it chooses to do it. And—since there is no such entity as ‘society,’ since society is only a number of individual men—this meant that some men (the majority or any gang that claims to be its spokesman) are ethically entitled to pursue any whims (or any atrocities) they are entitled to pursue, while other men are ethically obliged to spend their lives in the service of that gang’s desire.”

Taken literally, Rand is arguing that no secular philosopher places any limits on the state’s power over the individual. This hardly seems the case, the utilitarian Ludwig von Mises being an obvious counter-example.

As in ITOE, Rand’s scholarship is quite poor. Rand mentions only Aristotle, Nietzsche, Bentham, Mill and Comte. None of these philosophers is discussed in any detail, and none is quoted or cited. Rand’s only quoted source is herself, principally John Galt’s speech from Atlas Shrugged. (Galt is called, curiously, Objectivism’s “best representative.”) The amount of hyperbole is excessive, even by Rand’s standards. Rand is certainly entitled to disagree with altruism, but do altruists really hold death as their ultimate value?

Saturday, December 01, 2007

Emotions as tools of cognition. "Emotions are not tools of cognition," states Rand in For the New Intellectual. Is this view in accord with cognitive reality? Not according to neuroscientist Antonio Damasio, who has discovered that emotion plays an important role in thinking:

[E]very experience of our lives is accompanied by some degree of emotion and this is especially obvious in relation to important social and personal problems. Whether the emotion is a response to an evolutionarily set stimulus ... or to a learned stimulus ... does not matter: positive or negative emotions and their ensuing feelings become obligate components of our social experiences.

The idea is that, over time, we do far more than merely respond automatically to components of a social situation with the repertoire of innate social emotions. Under the influence of social emotions ... and of those emotions that are induced by punishment and reward ... , we gradually categorize the situations we experience — the structure of the scenarios, their components, their significance in terms of our personal narrative. Moreover, we connect the conceptual categories we form — mentally and at the related neural level — with the brain apparatus used for the triggering of emotions. For example, different options for action and different future outcomes become associated with different emotions/feelings. By virtue of those associations when a situation that fits the profile of a certain category is revisited in our experience, we rapidly and automatically deploy the appropriate emotions....

I accord special importance to those [emotions/feelings] that are associated with the future outcome of actions, because they come to signal a prediction of the future, an anticipation of the consequences of actions. This is a good example, incidentally, of how nature's juxtapositions generate complexity, of how putting together the right parts produces more than their mere sum. Emotions and feelings have no crystal ball to see the future. Deployed in the right context, however, they become harbingers of what may be good or bad in the near or distant future....

The revival of the emotional signal accomplishes a number of important tasks. Covertly or overtly, it focuses attention on certain aspects of the problem and thus enhances the quality of reasoning over it.... In brief, the signal marks options and outcomes with a positive or negative signal that narrows the decision-making space and increases the probability that the action will conform to past experience. Because the signals are, in one way or another, body-related, I began referring to this set of ideas as "the somatic-marker hypothesis."

The emotional signal is not a substitute for proper reasoning. It has an auxiliary role, increasing the efficiency of the reasoning process and making it speedier. On occasion, it may make the reasoning process almost superfluous, such as when we immediately reject an option that would lead to certain disaster, or, on the contrary, we jump to a good opportunity based on a high probability of success.

Our research team and others have accumulated substantial evidence in support of such [emotional] mechanisms. The body-relatedness of the operation has been noted in the wisdom of the ages. The hunches that steer our behavior in the proper direction are often referred to as the gut or the heart — as in "I know in my heart that this is the right thing to do." ...

Although hardly mainstream, the idea that emotions are inherently rational has a long history. Both Aristotle and Spinoza obviously thought that at least some emotions, in the right circumstances, were rational. In a way, so did David Hume and Adam Smith.... In this context the term rational does not denote explicit logical reasoning but rather an association with actions or outcomes that are beneficial to the organism exhibiting emotions. The recalled emotional signals are not rational in and of themselves, but they promote outcomes that could have been derived rationally. (Looking for Spinoza, 146-150)

To sum up: thinking, even "rational" (or efficacious) thinking, is emotional from the start. Sometimes emotions influence or guide thought for the worse, sometimes for the better: but they always do guide it. If Damasio is right, emotions tend to do a better job guiding thought when they have been trained by experience. This explains, for example, why the intuitions of experts can be so amazingly insightful: their emotions have been trained, through intensive experience, to give immediate signals illuminating the point at issue, so that an expert on Greek sculpture can intuitively determine whether a statue is genuine or a fake merely by looking at it.

Thursday, November 29, 2007

In which we briefly outline some of the philosophic underpinnings of the ARCHNblog's critiques.

"(Unlike natural laws) norms and normative laws can be made and changed by man, more especially by a decision or convention to alter them...This decision can never be derived from facts (or from statements of facts), although they pertain to facts...Hence, the "critical dualism" is properly a dualism of facts and decisions....This dualism of facts and decisions is, I believe, fundamental. Facts as such have no meaning; they can gain it only through our decisions."

"It must, of course, be admitted that the view that norms are conventional or artificial indicates that there will be a certain element of arbitrariness involved, i.e. that there may be different systems of norms between which there is not much to choose...But artificiality by no means implies full arbitrariness. Mathematical calculi, for instance, or symphonies, or plays, are highly artificial, yet it does not follow that one calculus or symphony or play is just as good as any other. Man has created new worlds--of language, of music, of poetry, of science, and the most important of these is the world of the moral demands for equality, for freedom, and for helping the weak....Our comparison is only intended to show that the view that moral decisions rest with us does not imply that they are entirely arbitrary." - Karl Popper, Chapter 5, Nature and Convention, "The Open Society And Its Enemies"

Tuesday, November 27, 2007

Ayn Rand’s two most important philosophic works in essay form are her “The Objectivist Ethics” and the essays on concepts that form the "Introduction to Objectivist Epistemology." In their critiques of these works, Gary Merrill and Michael Huemer have drawn attention to an important technique in Rand’s argumentation. Rand defends her position using as a background the supposedly failed attempts of previous philosophers, arguing that the credibility of her position is advanced because their positions are so blatantly false (if not pure evil). To the extent that Rand fails to accurately describe these opposing views, her case for Objectivism becomes that much less credible. (Some of what I say is indebted to the discussions of Merrill and Huemer.)

Rand begins ITOE with a review of various philosophical traditions on the question of universals, outlining five schools: extreme realism, moderate realism, nominalism, extreme nominalism and conceptualism. (p. 2.) We are, however, given only two philosophers (Plato and Aristotle) who hold any of these positions (extreme realism and moderate realism, respectively). Not a single representative is given for the nominalist, extreme nominalist and conceptualist schools. This makes it difficult for the reader to determine the accuracy of Rand’s description. It might be the case that these schools were wrestling with problems or encountered difficulties which Rand’s theory also has. Her readers will never know.

Rand returns to these schools later with slightly more elaboration. Rand says the following about nominalists and conceptualists: “The nominalist and conceptualist schools regard concepts as subjective, i.e., as products of man’s consciousness, unrelated to the facts of reality, as mere ‘names’ or notions arbitrarily assigned to arbitrary groupings of concretes on the ground of vague, inexplicable resemblances.” (p. 53.) This is interesting because Rand’s position that only particulars exist is (in the view of many commentators) a version of nominalism or conceptualism. Is it really the case that all nominalists and conceptualists consider concepts “unrelated to the facts of reality”? Is there not a single significant thinker in this tradition who considered concepts objective? Doing a bit of reading lately in John Dewey (who probably falls in the conceptualist camp), I came across the following from his Experience and Nature: “Meaning is objective and universal . . . . It requires the discipline of ordered and deliberate experimentation to teach us that some meanings, as delightful or horrendous as they are, are meanings communally developed in the process of communal festivity or control, and do not represent the polities, and ways and means of nature apart from social control . . . the truth in classical philosophy in assigning objectivity to meanings, essences, ideas remains unassailable.” (Experience and Nature, pp. 188-89.) Maybe Dewey and the like are mistaken, but it hardly seems fair to imply that their motivation is the destruction of the human mind without some evidence.

Even if the various positions with respect to universals are sufficiently well known as to justify Rand’s cursory discussion, there is much in ITOE that calls out for explanation. Merrill points to an example which has become somewhat famous: “As an illustration, observe what Bertrand Russell was able to perpetrate because people thought they ‘kinda knew’ the meaning of the concept of ‘number’ . . . .” (pp. 50-51.) Because of Rand’s unwillingness to provide a citation or elaboration concerning what Russell perpetrated, her very point gets lost.

There are many other jabs in ITOE which are almost as egregious. Rand occasionally objects to “Linguistic Analysis,” without much of a description of this diverse movement. (pp. 47-48, 50 and 77.) She does, at least, name Ludwig Wittgenstein’s theory of family resemblance as an example of what is supposedly wrong with it. (p. 78.)

Curiously, Kant does not loom large in ITOE, or at least not in the way one would expect. Since Kant was the most evil man in history and universals the most important problem in philosophy, one might expect that Rand would discuss Kant’s theory of universals. When Rand does get around to discussing Kant, she attacks him for inspiring pragmatists, logical positivists and Linguistic Analysts (“mini-Kantians”). Her two sources for Kant are herself (a quotation from For the New Intellectual) and a quote from the now obscure Kantian Henry Mansel. (pp. 77, 80-81.)

What David Gordon says of Peikoff’s The Ominous Parallels is even more true of ITOE: it is “the history of philosophy with the arguments left out.”

Saturday, November 24, 2007

Reason. One of the greatest challenges for the critic of Objectivism involves trying to make sense of Rand's conception of reason. The chief difficulty is that Rand never explained, in empirical, testable terms, how to distinguish valid from invalid reasoning. This is, to be sure, the problem with all reasoning that goes beyond deductive logic. In deduction, the critic may examine the form of the argument in order to determine validity. Not so with non-deductive reasoning. In such reasoning, there is no reliable method for testing the conclusion by examining the structure or form of the argument.

For this reason, Rand's assertion that "Reason is man's only means of grasping reality and acquiring knowledge" is empirically empty. Since Rand provides us no way of distinguishing between what she regarded as valid and invalid reasoning, there is no way to evaluate the extent to which her conception of reason clashes with the "natural reasoning" discovered by cognitive science.

By studying how people actually reason to solve problems of everyday life, cognitive science has discovered that deductive logic has little to do with the inferences people actually make:

[F]ormal logic is not a good description of how our minds usually work. Logic tells us how we should reason when we are trying to reason logically, but it does not tell us how to think about reality as we encounter it most of the time....

Logic enables us to judge the validity of our own deductive reasoning, but much of the time we need to reason non-deductively — either inductively, or in terms of likelihoods, or of causes and effects, none of which fits within the rules of formal logic. The archetype of everyday realistic reasoning might be something like this: This object (or situation) reminds me a lot of another that I experienced before, so probably I can expect much to be true of this one that was true of that one. Such reasoning is natural and utilitarian — but logically invalid....

Our natural reasoning is thus a kind of puttering around — the intellectual equivalent of what a child does when it messes around with some new toy or unfamiliar object. After a spell of mental messing around, we may put our reasoning into logical terms, but the explanation comes after the fact, the actual process took place according to some other method... [The Universe Within, p. 132-136]

In my JARS article, I quoted psychologist Paul E. Johnson's description of how experts in medicine, engineering, and other skilled professions think when solving professional problems. Here's the entire quotation:

I’m continually struck by the fact that the experts in our studies very seldom engage in formal logical thinking. Most of what they do is plausible-inferential thinking based on the recognition of similarities. That kind of thinking calls for a great deal of experience, as we say, a large data base. If anybody’s going to be logical in a task, it’s the neophyte, who’s desperate for some way to generate answers, but the expert finds logical thinking a pain in the neck and far too slow. So the medical specialist, for instance, doesn’t do hypothetical-deductive, step-by-step diagnosis, the way he was taught in medical school. Instead, by means of his wealth of experience he recognizes some symptom or syndrome, he quickly gets an idea, he suspects a possibility, and he starts right in looking for data that will confirm or disconfirm his guess.

I further suggested in the JARS article that the way in which professionals solve problems does not accord with Rand's notion of reason. Seddon responded to this by claiming that I equate Rand's view of logic with deductive logic. But had he read the passage above more attentively, he'd see that his claim misses the point. When medical specialists "quickly get an idea" based on their wealth of experience, they are reasoning deductively, as follows:

Disease X is known to produce various symptoms, including A, B, and C.
This patient has symptoms A, B, and C.
Therefore, (I suspect) this patient has disease X.

This is clearly not an inductive inference. It's a deductive inference, but an invalid one: it commits the fallacy of the undistributed middle.
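The invalidity of that inference form can be shown with a counterexample in which the premises hold but the conclusion does not follow. This is only a toy sketch; the disease names and symptom sets below are invented for illustration:

```python
# Two hypothetical diseases that happen to produce the very same symptoms.
# (Invented example data, not real medicine.)
symptoms_of = {
    "disease_X": {"A", "B", "C"},
    "disease_Y": {"A", "B", "C"},  # a different disease, same symptom profile
}

patient_symptoms = {"A", "B", "C"}

# Every disease whose known symptoms are all present in this patient.
candidates = [d for d, s in symptoms_of.items() if s <= patient_symptoms]

# Both premises are true, yet more than one conclusion fits them, so
# "the patient has disease X" does not follow necessarily: the middle
# term (the symptoms) is undistributed.
assert len(candidates) > 1
```

The guess "disease X" may still be a perfectly good starting point for seeking confirming or disconfirming data, which is exactly the author's point about useful but logically invalid everyday inference.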

Now whether Rand's "reason" includes invalid deductions or not, I will leave for Rand's apologists to decide. The only point I wish to make is that human intelligence apparently cannot do without making use of invalid inferences. Indeed, the whole question of "validity," in everyday life, is of no very great concern. What is important is: (1) the experiential data base from which the invalid inferences are made; (2) the extent to which our conclusions are subjected to criticism (particularly empirical criticism, when that is possible).

This is not to say that we must always make use of invalid inferences, coupled with criticism of conclusions. There are fields of inquiry, particularly in the sciences, where strict deductive logic remains an important tool of analysis. But this does not apply across the board. Many of the problems of everyday life cannot be solved, in whole or in part, using only logically valid forms of reasoning. As John R. Anderson, a "leading theoretician of thinking" (according to Morton Hunt), puts it: "Naturalistic reasoning doesn't use the rules of general inference, but tends to be 'content-specific' — it uses rules that work in a particular area of experience." Morton Hunt sums up the position as follows:

[W]e are pragmatists by nature; what feels right we take to be right; were this not so, we would long ago have disappeared from the earth. Our pragmatism, our natural mode of reasoning, is not anti-intellectual but is the kind of effective intellectuality that was forged in the evolutionary furnace. (The Universe Within, 138)

Wednesday, November 21, 2007

Wittgenstein's "family resemblance." A philosophical idea that has proved influential in cognitive science is Wittgenstein's "family resemblance" critique of the classical view of concepts (of which Objectivism is a variant):

Consider for example the proceedings that we call "games." I mean board games, card games, ball games, Olympic games, and so on. What is common to them all? Don't say, "There must be something common, or they would not be called 'games' " - but look and see whether there is anything common to all. For if you look at them you will not see something common to all, but similarities, relationships, and a whole series of them at that. To repeat: don't think, but look! Look for example at board games, with their multifarious relationships. Now pass to card games; here you find many correspondences with the first group, but many common features drop out, and others appear. When we pass next to ball games, much that is common is retained, but much is lost. Are they all "amusing"? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball games there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck; and at the difference between skill in chess and skill in tennis. Think now of games like ring-a-ring-a-roses; here is the element of amusement, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way; can see how similarities crop up and disappear. And the result of this examination is: we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities, sometimes similarities of detail.

Now consider Rand's "essentialism" (i.e., her belief that all concepts have essential characteristics):

When a given group of existents has more than one characteristic distinguishing it from other existents, man must observe the relationships among these various characteristics and discover the one on which all the others (or the greatest number of others) depend, i.e., the fundamental characteristic without which the others would not be possible.

This view gratuitously assumes that all of reality is easily categorized into objects with readily identified essential characteristics. But why should this be so? Was reality created for the convenience of the human mind? The Randian view not only oversimplifies reality to the point of serious distortion, it also seriously underestimates the ability of the mind (particularly the unconscious mind) to handle complexity. The mind has no difficulty grasping the idea of games, even if it can't explain why.

A while ago, Greg Nyquist wondered "Is Objectivism Dangerous?" He concluded that the answer was no, despite the often fantastic theorising and overblown rhetoric of some of its proponents.

However, here in New Zealand we have the interesting situation of the former leader of the Libertarianz, Objectivist Lindsay Perigo, on Tuesday taking the quite unprecedented step of calling for the violent overthrow of the presiding government, a coalition of several parties dominated by the Labour Party. Comparing the situation to the American War of Independence, he said:

"...the time for mere marches is past...It is the right and duty of New Zealand citizens to throw off this government, which has long evinced a desire—nay, a compulsion—to subjugate us to absolute despotism. We should not wait for the 2008 election...New Zealanders must now ask themselves if they are a free people—and if so, are they prepared to act accordingly? Which is to say, are they prepared forcibly to evict all tyranny-mongers from their positions of power?"

In a followup comment, "So, now to overthrow!" Perigo continues:

"Private feedback this evening indicates overwhelming acceptance that the time has come...Everything from this point until Showtime must of necessity be clandestine..."

He then seems to try to soften his original position with some of his trademark outre humour, before returning to his theme:

"All right, I'm getting flippant already. Seriously, don't fret at the absence of progress reports. Needs must that this project go underground. But help is on the way."

One commenter described himself as "shocked" yet applauded Perigo's stand, to which Perigo replied "Bravo to you! At least you took in its import. And yes, this is where we've got to..."

This is the first time in my recollection that the former leader of a longstanding political party has called for the overthrow by force of a sitting government. The issue that has apparently brought him to make these threats is the Electoral Finance Bill, a controversial attempt to stop anonymous election funding that is currently before Parliament. The bill appears flawed in many ways, but it hardly seems a serious enough threat to free speech to warrant revolutionary action.

Should we take this sort of literal call to arms seriously? Probably not. These days Perigo seems largely to act as a figurehead in the Objectivist/Libertarian movement, issuing various provocative pronouncements and leaving others to do the organizational donkey work behind the scenes. The Libertarianz themselves, while composed of some highly competent and intelligent people, also do not have much of a reputation for organizational prowess, and have struggled to gain political traction. A former media personality, Perigo has seen his star fall significantly in the decade or more since he left the state-run Television New Zealand. His naturally contrarian tendencies have made him an interesting figure in the political landscape, but other than sporadic fill-in media gigs, inflammatory web posts and a short run in a student newspaper column, his actual output has dwindled to a trickle. How seriously he is taken in Objectivist circles is also uncertain.

It is most likely that this is either a simple cri de coeur, to be quietly regretted in the cold light of day, or a kind of publicity stunt intended to re-ignite media interest in his views. The danger, however, in these sorts of things lies not so much with Perigo himself, but more with some of the figures on the fringes of these movements who perhaps might take calls for some kind of "Showtime" more seriously than their author may have intended.

At any rate, I will email Bernard Darnton, current leader of the Libertarianz, for a reaction to these statements.

UPDATE: Perigo has now deleted his later "So, now to overthrow!" comment. However, I had archived it, and will put up a link to it later.

Tuesday, November 20, 2007

Due to the rapidly growing site content and visitor stats, over the next week or two we're going to sort out the blooming, buzzing manifold that is the ARCHNblog into more easily accessible categories in the sidebar. If there's any particular areas y'all want featured, let us know.

Monday, November 19, 2007

"Since I'm not a scientist, I'm not competent to judge whether Dave Harriman's criticisms of Einstein are ultimately right or not. I can say with some confidence that although brilliant, Einstein was undoubtedly corrupted by Kantian philosophy. That's no small matter." - Diana Mertz Hsieh, Noodlefood

Sunday, November 18, 2007

Anon76 begins his extraordinary series of posts by trying to diminish the claims of cognitive science on the grounds that there exists no consensus on the prototype theory and that, as a matter of fact, there are "many, many" competing theories. This, however, is a palpable exaggeration. If you were to count every subtle variation of each of the main theories, then you might get "many, many," but what would be the point of that? And, indeed, even the main theories are hardly mutually exclusive. Exemplar theories are often considered a kind of prototype theory, while the theory-theory view, which comes close to Popper's suggestion that concepts are "theory-laden," could easily be dovetailed into the prototype theory (thus, the hybrid views). Atomistic theories are a bit more complicated. But they're even further from Rand's view (in their strong version, they hold that many concepts are innate), so why Anon76 would want to bring them up is anyone's guess.

One theory Anon didn't bring up is the Classical view of concepts, sometimes called Definitionism, of which Objectivism is a variant. Definitionism dominated thinking in the West until the last thirty years or so, during which it has taken mortal blows from cognitive science and the growing appreciation of Wittgenstein's notion of family resemblances (which I will discuss in a future post).

Anon76, I fear, simply has no clue as to the point of my Cognitive Revolution posts. He seems to be under the illusion that I am presenting these excerpts from books on the Cognitive Revolution as irrefragable proofs which conflict with nearly everything Rand believed. But that is not my intention at all. I am well aware that the theories developed by the Cognitive Revolution are conjectural, as are all scientific theories. To be sure, they are conjectures based on extensive empirical research that has been held up to the scrutiny of peer review. Contrast this with the conjectures brought forth by Rand, which were presented to the world without a whiff of evidence and have never faced the critical rigors of scientific peer review.

Now none of us at ARCHNBlog are experts in the field of cognitive science. So the question arises: if we want to know something about human cognition, where should we turn? Should we look to Rand, who is not a recognized expert on cognition, whose views are largely speculative, who shunned debates with other experts and who provided no scientific evidence to support her speculations? Or should we turn to those who have spent their entire adult lives specializing in doing scientific research on human cognition and whose findings are subject to peer review? I don't think there can be any doubt as to how a reality-centric person would answer these questions. Whatever shortcomings cognitive science may or may not have, there is every reason to believe that its findings are sounder and closer to the truth than those of Rand.

Keeping this in mind, I was sorry to find Anon76 denigrating Morton Hunt as a mere "popular science writer." In his book, The Universe Within, Hunt has presented his research on cognitive science in a way that can be understood by the intelligent layman. His book includes a 17-page list of cited sources, with over 150 citations. He interviewed dozens of cognitive scientists, some of whom went so far as to demonstrate their research to him. Two cognitive scientists read Hunt's book in manuscript and offered corrections. Contrast this with Rand's ITOE, which has no list of cited sources, no citations of scientific studies, and which was written without the assistance of any cognitive scientists. (Incidentally, the problem of Rand's poor scholarship has been devastatingly criticized by Gary Merrill here.)

The core of Anon76's response is a rather confused defense of "invalid concepts" bundled with an attack on Popper's rejection of essentialist definitions. Let us consider "invalid concepts," first of all. Anon76 writes: "One would think that [Greg Nyquist's] admission that there are such things as inefficient concepts would tend to support AR's theory, since it is precisely the inefficiency that she cites in her allegation that a concept like Blue Eyed Blondes, 5'11" tall would be an invalid concept." Unfortunately for Anon76, this view does not easily accord with what Rand wrote elsewhere about invalid concepts. According to Rand, invalid concepts are "attempts to integrate errors, contradictions or false propositions." Even worse, Rand claimed that "An invalid concept invalidates every proposition or process of thought in which it is used as a cognitive assertion." (ITOE, 65) This view, however, is self-contradictory, as can easily be demonstrated by the following proposition:

Blue Eyed Blondes, 5'11" tall are mortal.

Now Rand claims that any assertion made with invalid concepts is itself "invalid." Yet this proposition, which uses a concept that, at least according to Anon76, Rand regarded as "invalid," is both intelligible and true. The error here is to assume that what I have called an "inefficient" concept can't refer to something in reality.

Even more problematic are the kinds of concepts that Rand considered "invalid"—concepts such as polarization, consumerism, extremism, isolationism, meritocracy, and simplistic. To suggest that these concepts have the same epistemological status as "all bachelors and non-returnable bottles" is preposterous. Try submitting such a view to a peer-reviewed journal in mainstream philosophy or cognitive science and see how far you get.

We get into even murkier waters when Anon76 turns to Popper's critique of definitions. Now it is important to understand that Popper's main point of attack is against essentialist definitions, i.e., the sort of definitions advocated by Aristotle and Rand. Popper does not deny that science uses definitions, but they are "nominalist definitions," and as such are only shorthand symbols or labels for some proposed phenomenon or theory. Popper's two main arguments against essentialist definitions are (1) they lead to "verbalism," that is, "specious and insignificant" arguments about words; and (2) they lead either to an infinite regress or to circularity. Since Daniel Barnes has already addressed (2), I will confine my comments to the first argument.

Verbalism. Essentialist definitions lead to verbalism for the simple reason that there exists no objective, formalized method to distinguish between true and false, good or bad, or proper and "improper" definitions. Consequently, when there is a disagreement over definitions, it cannot but lead to an argument about words.

Anon76, in trying to explain what distinguishes a "valid" from an "invalid" definition, writes: "A definition is valid when it highlights an essential distinguishing characteristic, i.e., the characteristic that explains the greatest number of other distinguishing characteristics of a class of things." But as a practical matter, this simply doesn't cut it. Among other things, it fails to address problems raised by Wittgenstein in his notion of family resemblances (to be discussed in a future post). It also commits the naive blunder of overemphasizing distinguishing characteristics at the expense of all other characteristics, thereby making the utterly gratuitous assumption that what distinguishes one referent from another is more important, in terms of understanding the referent, than the non-distinguishing characteristics. This is clearly seen in the Objectivist definition of man as a rational animal. Yet as anyone who reads history or knows anything about social psychology can tell you, non-rational motivation is clearly at least as important, if not more so, in understanding human nature than is rationality.

Anon76 in another place makes the following extraordinary confession: "Once we have definitions, then we use them to know what we mean," he writes. "But we know what we mean before we define a concept--otherwise we wouldn't know what to define. There are a thousand philosophical paradoxes that result without this fairly obvious point." But if we already know what we mean before we have the definition, then what is the point of having the definition? On this account, it's not so much paradoxical as redundant. If you know what you mean and other people know what you mean, trotting out a definition is merely an exercise in barren pleonasm.

We can guess what the point is by observing how Rand used definitions in practice. Albert Ellis long ago pointed out the problem of what he called "definitional thinking" in Objectivism — that is, assuming the point at issue in your definitions as a tactic of debate. We see this sort of "definist fallacy" in its clearest form in Rand's discussion of the term selfish, which she insists has only one meaning (i.e., her meaning). But as a matter of fact, a term's meaning is determined by the person using it. If a person uses the word selfish to mean "not having consideration for other people," then that is what he means by it and there's an end. There is no such thing as a "one and only true meaning" for a term, because terms are instrumental. If you have any doubt about this, just look at any unabridged dictionary — the Oxford English Dictionary, for example. There you will find that most words have multiple definitions and can be used in many different senses. In the very best dictionaries, there will be examples, taken from literature, illustrating the particular usage defined.

Wednesday, November 14, 2007

Concept-Formation. Rand's theory of concept-formation could be summed up as follows: in the pursuit of one insight, Rand committed a number of errors. The insight is something Rand called "unit economy," which cognitive scientists refer to as "achieving human scale" or attaining "cognitive efficiency." Rand's errors have two main sources: her blank slate view of man's "cognitive mechanism" and her denial of the originative functions of the "cognitive unconscious."

People often think it sophisticated to speak disparagingly of "pigeonholing," but our tendency to classify new experience is an essential component of human intellectual behavior.... And that's because we pigeonhole our experience in ways that prove functional. We do so, however, not by conscious design and not even because we are taught to do so, but naturally and inevitably; that is the message of current research on concept-formation. Daniel Osherson of MIT, who is currently exploring the nature of the differences between natural categories and artificial ones, put it to me this way: "We can program a computer to deal with any concept, no matter how bizarre or unrealistic—such as, for instance, the class 'all bachelors and nonreturnable bottles'—and the computer can handle it. But the human mind can't. The computers in our heads simply aren't hard-wired to form or make use of concepts of that sort but only of natural and realistic ones."

"Hard-wired" is the key word. It's a computer-science term that refers to built-in characteristics—those resulting from fixed circuitry rather than programming. Applied to human beings, hard wiring refers to innate abilities or, at least, predispositions, as opposed to learned behavior. Osherman was saying, in effect, that the circuitry of the brain develops, under the biochemical guidance of the genes, in such a way as to assemble incoming experience into realistic, useful concepts, and not the converse....

The new view of categorization [arising from cognitive science] emerged from several lines of research, among them studies of how children acquire concept words. When they first begin to talk, they use words for specific objects or actions, but soon begin to form categories — not by subtracting or extracting traits but by generalizing on the traits they see. In fact, overgeneralizing on them. The child between one and two will, typically, learn a word like "ball" or "dog" and then call all round things "ball" and all furry, four-legged creatures "dog"; the child is categorizing, and simply needs to have its categories refined....

Many such experiments have made it clear that concepts are learned, and that ... what is learned is not whatever we choose to teach the child; it is the product of an interaction between incoming experience and the brain's circuitry. We do not possess concepts a priori; rather, the concepts we make of our experience are largely predetermined by neural structures. The human brain is concept-prone — but prone to conceptualize experience in certain ways....

It took a revolution in cognitive theory for scientists to see two related facts that now seem obvious... First, we do not perceive an object as a set or list of distinct attributes (such as having two squares, shaded, and one border), but as wholes (a particular person, chair, house). Second, as a consequence, we do not naturally group objects in categories with sharp boundaries but in clusters that have a dense center and thin out to fuzzy, indeterminate edges, overlapping other categories....

Our method of making categories has a simple and obvious biological rationale: it is the mind's way of representing reality in the most cognitively economical form. In the real world, writes Eleanor Rosch, traits occur in "correlational structures"; observable characteristics tend to go together in bunches. "It is an empirical fact provided by the perceived world," she says, "that wings co-occur with feathers more than with fur," or that things that look like chairs have more sit-on-able traits than things that look like cats. The prototypes and categories that our minds naturally make are isomorphs of clusters of traits in the external world.

This is so simple a notion that you might wonder why it took the revolution of cognitive science to discover it. But such was the awesome reputation of Aristotle that his fiats concerning categories — they should be logically defined, clearly bounded, and nonoverlapping — kept us until now from seeing the plain facts about the mind's natural method of sorting out its impressions of the world. And that method, though neither logical nor tidy, is an evolutionary design that maximizes our ability to make sense of our experiences and to interact effectively with the environment. (The Universe Within, 163-173)

As long as Rand sticks to her "unit-economy," she manages to stay pretty close to the cognitive straight and narrow. When she strays from it, she tends to get into trouble. Consider her "definition" of a concept: "A concept is a mental integration of two or more units which are isolated according to a specific characteristic(s) and united by a specific definition." Alas, this doesn't quite hit the nail on the head. There is no integration of two or more units: merely the development of a prototype, which requires only one "unit"! Nor is there any evidence that definitions play any role in the development of most concepts. Since the overwhelming majority of concepts are formed unconsciously and on the basis of intuitive prototypes, there's no need for definitions, which are mostly used to record word usage. (That's what you will find in any dictionary: details of word usage, often with specific examples, so that there's some truth in Popper's dictum that "definitions never convey information—except to those in need of a dictionary.") Nor is there any such thing as "invalid concepts." There are concepts (e.g., unicorns, centaurs) that refer to things that don't exist; there are so-called "artificial" concepts (e.g., all bachelors and nonreturnable bottles) that are cognitively inefficient and useless to the human mind—unmemorable and deserving to be forgotten; and there are concepts that refer to confused theories (e.g., the concepts of Marxism, Hegelianism, Objectivism, etc.): none of these concepts are "invalid" (perhaps their referents are "invalid," but that's another story). Since the mind forms most of its concepts unconsciously, on the basis of innate cognitive predispositions originally developed in the crucible of evolution, questions about the "validity" of concepts are misplaced. 
If our concepts never referred to anything in reality or were perversely inefficient, we wouldn't be around to argue about them: natural selection would have taken us out a long time ago. It is how concepts are used, the assertions and arguments and theories made about their referents, that can be true or false, valid or invalid.
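The prototype view Hunt describes can be given a rough sketch: an item is categorized by its similarity to a learned prototype, yielding graded membership with fuzzy boundaries rather than sharp, definition-bound classes. This is an invented toy model only; the categories, feature sets, and 0.5 threshold below are illustrative assumptions, not drawn from any actual cognitive-science implementation:

```python
def similarity(item, prototype):
    """Fraction of the prototype's features that the item shares (0.0 to 1.0)."""
    return len(item & prototype) / len(prototype)

# Invented prototypes: dense centers of correlated traits.
prototypes = {
    "bird":  {"wings", "feathers", "beak", "flies"},
    "chair": {"legs", "seat", "back", "sit-on-able"},
}

def categorize(item, threshold=0.5):
    """Return (best_category, score), or (None, score) if nothing is close enough."""
    best, score = max(
        ((name, similarity(item, proto)) for name, proto in prototypes.items()),
        key=lambda pair: pair[1],
    )
    return (best, score) if score >= threshold else (None, score)

# A penguin shares most bird traits even though it doesn't fly:
# membership is a matter of degree, not a sharp essentialist boundary.
penguin = {"wings", "feathers", "beak", "swims"}
category, score = categorize(penguin)
```

Here the penguin lands in the "bird" cluster with a similarity of 0.75 despite lacking a "defining" trait, while an item sharing no traits with any prototype falls outside every category: the dense-center, fuzzy-edge structure Rosch describes, as opposed to the classical all-or-nothing definition.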

Are You A Rand Cultist, Or Just A Fan? Take This Simple Test To Find Out.

what they"re saying about the ARCHNblog:

"This wonderful blog chronicles exactly why and to what extent Ayn Rand was horribly incorrect on a number of issues. This is for the fence-sitting and fanatic but otherwise intelligent Objectivist alike. If the latter even exists. Highly recommended, but come prepared with your Rand collection..." - Commerican at Stumbleupon