Philosophy

July 29, 2012

In this provocative paper, sure to annoy evolutionary psychologists, Cecilia Heyes argues that the "cognitive processes that comprise cultural learning are themselves culturally inherited; they are cultural adaptations [rather than genetic adaptations]. They are products as well as producers of cultural evolution." Here is the abstract (and my two recent related posts are here and here):

Cumulative cultural evolution is what ‘makes us odd’; our capacity to learn facts and techniques from others, and to refine them over generations, plays a major role in making human minds and lives radically different from those of other animals. In this article I discuss cognitive processes that are known collectively as ‘cultural learning’ because they enable cumulative cultural evolution. These cognitive processes include reading, social learning, imitation, teaching, social motivation, and theory of mind. Taking the first three of these types of cultural learning as examples, I ask whether and to what extent these cognitive processes have been adapted genetically or culturally to enable cumulative cultural evolution. I find that recent empirical work in comparative psychology, developmental psychology and cognitive neuroscience provides surprisingly little evidence of genetic adaptation, and ample evidence of cultural adaptation. This raises the possibility that it is not only ‘grist’ but also ‘mills’ that are culturally inherited; through social interaction in the course of development, we not only acquire facts about the world and how to deal with it (grist), we also build the cognitive processes that make ‘fact inheritance’ possible (mills).

July 03, 2012

EO Wilson's new book, The Social Conquest of Earth, has reignited an old debate about natural selection, i.e., the level at which it operates. In the dominant camp are folks like Richard Dawkins and Steven Pinker, who hold that selection occurs at the level of the gene. In the other camp, folks like Wilson and Jonathan Haidt argue that selection operates at multiple levels, including both gene and group selection. Notably, Charles Darwin himself supported the latter view in The Descent of Man.

Not surprisingly, Dawkins and Pinker wrote hostile reviews. Dawkins lamented Wilson's "erroneous and downright perverse misunderstandings of evolutionary theory" and called Darwin's own support of group selection "anomalous". Other reviewers I've read include David Sloan Wilson, Steven Mithen, Jerry Coyne, and Leonard Finkelman. Wilson responded with a vigorous defense of his thesis in the NYT's Stone column. At least to me—a general reader, not a specialist in the field—Wilson's account seems entirely plausible, even likely. What empirical observations might settle this dispute, however, is less clear.

Whatever the truth, what concerns me more about both camps is their penchant for, in the words of H. Allen Orr, Darwinian storytelling. Evolutionary psychologists, who abound in both camps, often try to explain too much of human behavior, including our current morality, through natural selection. While it is undeniable that our basic moral instincts come out of millions of years of evolution, it seems to me that to explain the prolific range of our behavior, we should look more to the cultural edifice that our hominid ancestors developed relatively recently through symbolic language and the resulting explosion of speech, concept formation, and social learning.

June 29, 2012

If Stephen Hawking and Lawrence Krauss incline you to the view that physicists today are philosophically naïve, read Russell Stannard.

We said earlier that the job of science is to describe the world. In order to do this, we have to observe it to find out what kind of world it is. But having made the observations (done the experiments) what we write down in our physics textbooks is a description of the world itself, regardless of whether one happens to be observing it. Bohr, and other adherents to his so-called Copenhagen Interpretation of quantum mechanics, claimed that this was not so. What has been written down is not a description of the world at all, but a description of acts of observation made on the world. All our customary scientific terms such as energy, momentum, position, speed, distance, time, etc. -- they are terms specifically for the description of observations. It is a misuse of language to try and apply them to a world-in-itself divorced from the action of an observation. It is this misuse of language that leads to problems like that posed by the wave/particle paradox. Which is not to say that the world-in-itself does not exist outside the context of someone making an observation of it. Rather, as Werner Heisenberg asserted, all attempts to talk about the world-in-itself are rendered meaningless.

Not that there is anything new in this. The philosopher Immanuel Kant had long ago asserted that one could know nothing about the thing-in-itself. (So much for the death of philosophy.)

Finally we ask whether the scientific enterprise, even in this more limited domain of describing only interactions with the world rather than the world itself, might one day achieve complete knowledge. I think not. After all, what do we do our science with? Our brain. But how come we have a brain? It is something that evolved in response to the need of our ancestors to find food, shelter, and avoid predators. It enabled them to survive to the point where they could mate and pass on their genes. The brain was part of their survival kit. Why therefore should anyone think that such an imperfect instrument should be capable of mastering all knowledge regardless of whether it has any relevance to survival?

June 25, 2012

Christine Overall on one of the most important ethical decisions people make in their lives: to procreate or not?

In fact, people are still expected to provide reasons not to have children, but no reasons are required to have them. It’s assumed that if individuals do not have children it is because they are infertile, too selfish or have just not yet gotten around to it. In any case, they owe their interlocutor an explanation. On the other hand, no one says to the proud parents of a newborn, Why did you choose to have that child? What are your reasons? The choice to procreate is not regarded as needing any thought or justification. ...

The burden of proof — or at least the burden of justification — should therefore rest primarily on those who choose to have children, not on those who choose to be childless. The choice to have children calls for more careful justification and thought than the choice not to have children because procreation creates a dependent, needy, and vulnerable human being whose future may be at risk. The individual who chooses childlessness takes the ethically less risky path. After all, nonexistent people can’t suffer from not being created. They do not have an entitlement to come into existence, and we do not owe it to them to bring them into existence. But once children do exist, we incur serious responsibilities to them.

June 19, 2012

(Cross-posted on 3 Quarks Daily, where it has received many comments. A slightly modified version of this essay appeared in the July/Aug 2013 issue of the Humanist.)

Some years ago in a Montana slaughterhouse, a Black Angus cow awaiting execution suddenly went berserk, jumped a five-foot fence, and escaped. She ran through the streets for hours, dodging cops, animal control officers, cars, trucks, and a train. Cornered near the Missouri river, the frightened animal jumped into its icy waters and made it across, where a tranquilizer gun brought her down. Her "daring escape" stole the hearts of the locals, some of whom had even cheered her on. The story got international media coverage. Telephone polls were held, calls demanding her freedom poured into local TV stations. Sensing the public mood, the slaughterhouse manager made a show of "granting clemency" to what he dubbed “the brave cow.” Given a name, Molly, the cow was sent to a nearby farm to live out her days grazing under open skies—which warmed the cockles of many a heart.

Cattle trying to escape slaughterhouses are not uncommon. Few of their stories end happily though. Some years ago in Omaha, six cows escaped at once. Five were quickly recaptured; one kept running until Omaha police cornered her in an alley and pumped her with bullets. The cow, bellowing miserably and hobbling like a drunk for several seconds before collapsing, died on the street in a pool of blood. This brought howls of protest, some from folks who had witnessed the killing. They called the police’s handling inhumane and needlessly cruel.

It’s tempting to see these commiserating folks as animal lovers—and that's how they likely see themselves—until one remembers what they eat for dinner. A typical slaughterhouse in the United States kills over a thousand Mollys a day—lined up, shot in the head, and often cut open and bled while still conscious, an end no less cruel and full of bellowing—all because Americans keep buying neatly packaged slices of their corpses in supermarkets. Raised unnaturally and inhumanely, over a million protesting birds and mammals are violently killed in the U.S. every hour (that's nearly 300 per second!). Is it then unreasonable to say that nearly all meat-eaters in America participate quite directly in a cycle of suffering and cruelty of staggering scale?

Yet the idea persists that Americans love animals, largely because of their love and concern for a class of animals called "pets" (and other "cute animals" like dolphins, polar bears, and pandas). Most Americans have had at least one pet at some point in their lives, and many see their pets as extensions of their families; they photograph their pets, swap stories about them, buy them gifts and treats, spend money on their sicknesses, support taxes to build shelters for them, and mourn their deaths. Yet, the question continues to rankle, as Elizabeth Kolbert put it:

"How is it that Americans, so solicitous of the animals they keep as pets, are so indifferent toward the ones they cook for dinner? The answer cannot lie in the beasts themselves. Pigs, after all, are quitecompanionable, and dogs are said to be delicious."[1]

What might explain this disjunction? Given humankind's long community with farm animals, how has it come to this?

June 13, 2012

The phrase "moral relativism" is usually considered a scornful term today. It is often used to discredit an opponent's moral position by those who more or less subscribe to the idea that universal and objective moral truths exist, that all humans can discover them, and that morality, like science, can be progressive (less often the term is misapplied to suggest a plainly dishonest or inconsistent moral position). In this essay, Jesse Prinz attempts to dignify the phrase by arguing that moral relativism is in fact true.

Suppose you have a moral disagreement with someone, for example, a disagreement about whether it is okay to live in a society where the amount of money you are born with is the primary determinant of how wealthy you will end up. In pursuing this debate, you assume that you are correct about the issue and that your conversation partner is mistaken. Your conversation partner assumes that you are making the blunder. In other words, you both assume that only one of you can be correct. Relativists reject this assumption. They believe that conflicting moral beliefs can both be true. The staunch socialist and righteous royalist are equally right; they just occupy different moral worldviews.

Relativism has been widely criticized. It is attacked as being sophomoric, pernicious, and even incoherent. Moral philosophers, theologians, and social scientists try to identify objective values so as to forestall the relativist menace. I think these efforts have failed. Moral relativism is a plausible doctrine, and it has important implications for how we conduct our lives, organize our societies, and deal with others.

May 22, 2012

(Cross-posted on 3 Quarks Daily, where it has received many comments.)

It is often said that humans are the only animals to use symbols. So many other claims of human uniqueness have fallen away—thoughts, emotions, intelligence, tool use, sense of fairness—what's so special about symbols, you ask? I share your skepticism, dear reader, and in the next few paragraphs I'll tell you why.

Let's begin by clarifying what "symbol" means here. One way to do this is to contrast symbols with signs. A sign, such as a red light, a grimace, a growl, or a thunderstorm, signifies something direct and tangible, making us think or act in response to the thing signified. Issuing and responding to signs is commonplace in Animalia. A symbol, on the other hand, is "something that represents something else by association, resemblance, or convention". A symbol allows us to think about the thing or idea symbolized outside its immediate context, such as the word "water" for the liquid, "7" for a certain quantity, and "flag" for a community. What is symbolized doesn't even have to be real, such as God, and herein lies the power of symbols—they are the building blocks of abstract and reflective thought. Evidence of material symbols used by humans dates back at least 60-100K years, when burial objects and decorated beads start to appear in archaeological finds. Linguistic symbols were almost certainly in use long before then.

According to Susanne Langer, symbols serve "to liberate thought from the immediate stimuli of a physically present world; and that liberation marks the essential difference between human and nonhuman mentality ... Words, pictures, and memory images are symbols that may be combined and varied in a thousand ways." It is only through symbolic thought that we imagine the past or the future—mental time-travel, including episodic memory, requires the use of symbols. Indeed, language is really a system of symbolic communication, combining words (which are symbols) and syntax. If non-human animals lack symbols, what and how do they really think?

May 05, 2012

In this review of James R. Hurford's The Origins of Grammar, Nick Enfield presents a significant viewpoint on how we humans, from a stage when our ancestors were without language, came to acquire language in all its modern complexity.

If you could travel back to a time around the dawn of humankind, and if you encountered a people there whose only form of language was a list of one-word interjections like Yuck, Wow, Oops, Hey!, No, and Huh?, would you say that these people were of a different species, not quite human? Would they be like today’s apes that simply don’t have it in them to fully acquire a modern human language? Or would they be the same as us only less well equipped for communication, like the eighteenth-century man who is every bit human but happens not to have been born in a world with telephones? If the latter were true, then language would be more technology than biology, more something we build than something that grows. It’s clear that the earliest humans did not possess language as we know it. The question is whether this was because language as we know it hadn’t yet been invented.

In James R. Hurford’s towering account of our species’ path from being once without language to now being emphatically with it, he proposes that just such a monophrase language of the Yuck/Wow variety was an important early human achievement. And, Hurford argues, while our earliest forms of language had no grammatical rules by which words were combined to form sentences, they were far from primitive call systems.

More here. Also see this 3QD thread for a discussion on language, syntax, and symbol use among animals, where I've weighed in too.

April 14, 2012

The old debate on free will has lately flared up again. Are we "biochemical puppets, swayed by forces beyond our conscious control", or "authors of our own actions, beliefs, and desires"? Or something in between? The Chronicle of Higher Education recently invited several thinkers to weigh in on the question of free will, which then spawned additional commentaries. One that I really enjoyed reading is this essay by my friend, Chris Schoen. The Iris Murdoch quote he provides with commentary towards the end is particularly insightful.

Sometimes people have arguments they don't want to have in order to shore up some principle they wish they didn't have to defend. Actually, most debate can probably be characterized this way, though it doesn't always nestle up against outright absurdity the way that the argument I will speak of here is so prone to do, namely the argument that something called "Determinism" means that something else called "Free Will" cannot exist.

This is a rather hot debate right now, largely because advocates of the "incompatibilist" or "hard determinist" view I have just described believe they smell blood in the water and have moved in for the kill. Sam Harris, the smartest man who was ever wrong about everything, has a recent book out on the topic ("Free Will,") and Jerry Coyne, the smartest horse ever to be led to water while steadfastly refusing to drink, has made this topic a regular staple on his blog.

The conversation has lately found its way into the pages of the Chronicle of Higher Education, which recently featured five essays on the debate by Coyne and some of the top names in neuroscience and philosophy of mind.

April 06, 2012

In 9th-century CE India, the Jain teacher Jinasena composed a work called the Mahapurana. The following is a quote from it.

Some foolish men declare that [a] Creator made the world. The doctrine that the world was created is ill-advised, and should be rejected. If god created the world, where was he before creation? If you say he was transcendent then, and needed no support, where is he now? No single being had the skill to make the world—for how can an immaterial god create that which is material? How could god have made the world without any raw material? If you say he made this first, and then the world, you are faced with an endless regression. If you declare that the raw material arose naturally, you fall into another fallacy, for the whole universe might thus have been its own creator, and have risen equally naturally. If god created the world by an act of will, without any raw material, then it is just his will and nothing else, and who will believe this silly stuff? If he is ever perfect, and complete, how could the will to create have arisen in him? If, on the other hand, he is not perfect, he could no more create the universe than a potter could. If he is formless, actionless, and all-embracing, how could he have created the world? Such a soul, devoid of all modality, would have no desire to create anything. If you say that he created to no purpose, because it was his nature to do so, then god is pointless. If he created in some kind of sport, it was the sport of a foolish child, leading to trouble. If he created out of love for living things and [in his] need of them he made the world, why did he not make creation wholly blissful, free from misfortune? Thus the doctrine that the world was created by god makes no sense at all.

April 02, 2012

Do humans have an innate universal grammar, i.e., are we all born with certain foundational rules of language "hard-wired" in our brains, rules we don't need to learn? The dominant theory in linguistics, long associated with Noam Chomsky, says yes. However, it is far from universally accepted in the field, and its challengers have only grown in number. Many now lean away from innate universal rules and towards innate capacities or instincts that are shaped by culture into rules. Here is an excellent article on the debate and the work of a leading challenger, Dan Everett.

A Christian missionary sets out to convert a remote Amazonian tribe. He lives with them for years in primitive conditions, learns their extremely difficult language, risks his life battling malaria, giant anacondas, and sometimes the tribe itself. In a plot twist, instead of converting them he loses his faith, morphing from an evangelist trying to translate the Bible into an academic determined to understand the people he's come to respect and love.

Along the way, the former missionary discovers that the language these people speak doesn't follow one of the fundamental tenets of linguistics, a finding that would seem to turn the field on its head, undermine basic assumptions about how children learn to communicate, and dethrone the discipline's long-reigning king, who also happens to be among the most well-known and influential intellectuals of the 20th century.

More here. Some discussion on this article, including by Everett, took place here. This builds upon a really good New Yorker essay from 2007, The Interpreter (which carried the photo above). Also check out The Myth of Language Universals, which argues that the "claims of Universal Grammar ... are either empirically false, unfalsifiable, or misleading in that they refer to tendencies rather than strict universals."

March 27, 2012

Physicist Lawrence Krauss has a new book, A Universe from Nothing. I happen to like Krauss and have seen some of his lectures. He is brilliant, entertaining, and good at distilling cosmology for non-specialists (and the emerging picture of the cosmos is incredibly mind-bending). In his role as a science educator, he has also struck me as smarter on religion than the better-known neo-atheists like Dawkins and Harris (which admittedly is not saying much, and Krauss seems to have gotten worse).

It was the book's subtitle, however, that really caught my attention: Why There Is Something Rather than Nothing. This is what Krauss has set out to explain—surely one of the greatest mysteries of all time, and perhaps the ultimate question in metaphysics. Can Krauss be serious? I wondered. Then I read David Albert's excellent review and was persuaded that we are no closer to answering that question than we were before this book; I am really stunned that Krauss thinks he has answered it. Another reviewer has supplied what may be a more accurate subtitle: "How It Is That There Happens to Be This Something rather than Some Other Something." Sure, this is not sexy, but at least it's not false marketing. Here is Albert's review:

Lawrence M. Krauss, a well-known cosmologist and prolific popular-science writer, apparently means to announce to the world, in this new book, that the laws of quantum mechanics have in them the makings of a thoroughly scientific and adamantly secular explanation of why there is something rather than nothing. Period. Case closed. End of story. I kid you not. Look at the subtitle. Look at how Richard Dawkins sums it up in his afterword: “Even the last remaining trump card of the theologian, ‘Why is there something rather than nothing?,’ shrivels up before your eyes as you read these pages. If ‘On the Origin of Species’ was biology’s deadliest blow to super­naturalism, we may come to see ‘A Universe From Nothing’ as the equivalent from cosmology. The title means exactly what it says. And what it says is ­devastating.”

Well, let’s see. ... Where, for starters, are the laws of quantum mechanics themselves supposed to have come from? Krauss is more or less upfront, as it turns out, about not having a clue about that. He acknowledges (albeit in a parenthesis, and just a few pages before the end of the book) that every­thing he has been talking about simply takes the basic principles of quantum mechanics for granted. “I have no idea if this notion can be usefully dispensed with,” he writes, “or at least I don’t know of any productive work in this regard.” And what if he did know of some productive work in that regard? What if he were in a position to announce, for instance, that the truth of the quantum-mechanical laws can be traced back to the fact that the world has some other, deeper property X? Wouldn’t we still be in a position to ask why X rather than Y? And is there a last such question? Is there some point at which the possibility of asking any further such questions somehow definitively comes to an end? How would that work? What would that be like?

More here. Also see this interview in which Robert Wright brilliantly and relentlessly grills Krauss on the facile claim in the book's subtitle.

March 21, 2012

Professor Vinay Lal "is a cultural critic, historian, scholar and writer who divides his time between Los Angeles and New Delhi. He writes widely on the history and culture of colonial and modern India, popular and public culture in India (especially cinema), historiography, the politics of world history, the Indian diaspora, global politics, contemporary American politics, the life and thought of Mohandas Gandhi, Hinduism, and the politics of knowledge systems." [From Wikipedia.]

In this impassioned lecture bubbling with insights (and red meat for leftists), he discusses the "imperialism of categories", i.e., the taxonomy of classifications, analyses, and judgments that postcolonial societies have adopted wholesale from the West. He then talks about what one can do by way of resistance and alternative conceptions (see also my related essay). On his blog, Lal Salaam (leftist pun surely intended), he probes in more detail the issues raised in this lecture. This was part of a 2010 meeting that "brought together academics and activists from around the world to share their experiences in understanding and resisting Western hegemony in various areas, including agriculture, education, health care, history, media, politics and science." Many other lectures are archived here but I haven't seen any yet.

March 17, 2012

Akeel Bilgrami, Professor of Philosophy at Columbia University, has written a very interesting essay on Gandhi's philosophy. Bilgrami is struck by the integrity of Gandhi's ideas, in the sense that they derive "from ideas that were very remote from politics. They flowed from the most abstract epistemological and methodological commitments." Here is a brief excerpt for a flavor of Bilgrami's argument (via 3QD):

What I mean by truth as a cognitive notion is that it is a property of sentences or propositions that describe the world. Thus when we have reason to think that the sentences to which we give assent exhibit this property, then we have knowledge of the world, a knowledge that can then be progressively accumulated and put to use through continuing inquiry building on past knowledge. [Gandhi's] recoil from such a notion of truth, which intellectualizes our relations to the world, is that it views the world as the object of study, study that makes it alien to our moral experience of it, to our most everyday practical relations to it. He symbolically conveyed this by his own daily act of spinning cotton. This idea of truth, unlike our quotidian practical relations to nature, makes nature out to be the sort of distant thing to be studied by scientific methods. Reality will then not be the reality of moral experience. It will become something alien to that experience, wholly external and objectified.

It is no surprise then that we will look upon reality as something to be mastered and conquered, an attitude that leads directly to the technological frame of mind that governs modern societies, and which in turn takes us away from our communal localities where moral experience and our practical relations to the world flourish. It takes us towards increasingly abstract places and structures such as nations and eventually global economies. In such places and such forms of life, there is no scope for exemplary action to take hold, and no basis possible for a moral vision in which value is not linked to 'imperative' and 'principle', and then, inevitably, to the attitudes of criticism and the entire moral psychology which ultimately underlies violence in our social relations. To find a basis for tolerance and non-violence under circumstances such as these, we are compelled to turn to arguments of the sort Mill tried to provide in which modesty and tolerance are supposed to derive from a notion of truth (cognitively understood) which is always elusive, never something which we can be confident of having achieved because it is not given in our moral experience, but is predicated of propositions that purport to describe a reality which is distant from our own practical and moral experience of it.

All these various elements of his opposition to Mill and his own alternative conception of tolerance and non-violence were laid open by Gandhi and systematically integrated by these arguments implicit in his many scattered writings.

January 15, 2012

I spotted a recent book, The Dismal Science: How Thinking Like an Economist Undermines Community, by Harvard professor Stephen A. Marglin, who apparently writes from a socialist-communitarian point of view. The book jacket says the following:

Economists celebrate the market as a device for regulating human interaction without acknowledging that their enthusiasm depends on a set of half-truths: that individuals are autonomous, self-interested, and rational calculators with unlimited wants and that the only community that matters is the nation-state. However, as Stephen Marglin argues, market relationships erode community. In the past, for example, when a farm family experienced a setback--say the barn burned down--neighbors pitched in. Now a farmer whose barn burns down turns, not to his neighbors, but to his insurance company. Insurance may be a more efficient way to organize resources than a community barn raising, but the deep social and human ties that are constitutive of community are weakened by the shift from reciprocity to market relations.

Marglin dissects the ways in which the foundational assumptions of economics justify a world in which individuals are isolated from one another and social connections are impoverished as people define themselves in terms of how much they can afford to consume. Over the last four centuries, this economic ideology has become the dominant ideology in much of the world. Marglin presents an account of how this happened and an argument for righting the imbalance in our lives that this ideology has fostered.

Watch this excellent interview with Marglin to get the flavor of his central arguments (62 mins), and read the first chapter of his book and some book reviews here (Guardian, Times Higher Education, etc.) and on Amazon.

January 08, 2012

In this amusing TED talk, Tyler Cowen explores how stories work in our lives—how we receive them, how we tell them to ourselves and to others, and what we should be wary of—even as he is conscious that what he is talking about is also just a story!

January 07, 2012

Pico Iyer on the importance of cultivating a certain distance from our hyper-connected digital world:

We have more and more ways to communicate, as Thoreau noted, but less and less to say. Partly because we’re so busy communicating. And — as he might also have said — we’re rushing to meet so many deadlines that we hardly register that what we need most are lifelines.

So what to do? The central paradox of the machines that have made our lives so much brighter, quicker, longer and healthier is that they cannot teach us how to make the best use of them; the information revolution came without an instruction manual. All the data in the world cannot teach us how to sift through data; images don’t show us how to process images. The only way to do justice to our onscreen lives is by summoning exactly the emotional and moral clarity that can’t be found on any screen.

January 06, 2012

For many motivated readers, a favorite strategy for deflecting criticism of Krishna's dubious advice to Arjuna is to argue that, based on the events in the Mahabharata, the justification for the war is absolutely clear (in comments, one person saw it as on par with the Allied case against Hitler!). I responded to this point in part 2 of my essay (Part 1, Part 2), but it's worth drawing attention to it again:

Some defend the Gita by saying that the Kauravas’ bad behavior made the war unavoidable and eminently justified. Perhaps, but that’s not the point. The point is about the quality of the arguments Krishna uses to persuade Arjuna to fight. If the best moral justifications for the war purportedly exist outside the Gita, and some of the worst inside it, what have we left? Given all the bad faith reasoning and the starkly instrumental view of human life in the Gita, which many saw through even in ancient times, what makes the Gita a work of wisdom? Why not get the Gita off its exalted pedestal in our minds and let it be an uncelebrated episode in the Mahabharata—an artful plot element in an epic work of literature?

However, the case for a "just war" is not at all clear in the Mahabharata. It's debatable—and not black and white—which is exactly what makes the Mahabharata great. For starters, the standard rules of succession were inadequate for the situation at hand: Dhritarashtra is blind, so his younger brother, Pandu, is made king. But then Pandu incurs a curse and retreats to the forest with his two wives, leaving Dhritarashtra to rule instead. Yudhisthira is the oldest son in the family, but he and the other four Pandavas are not really fathered by Pandu (owing to his curse); rather, Pandu's two wives find some "divine" lovers in the forest (!), raising questions about the royal Kuru lineage of the Pandavas. Nor did Pandu rule at any time during Yudhisthira's life. So as the first son of the long-reigning elder brother Dhritarashtra—who in his heart wants his own son to be king—doesn't Duryodhana, a warrior as skilled as any and an able administrator, have a claim to succession as well? I mean, a reasonable case can be made, right?

Meanwhile Duryodhana's ambition grows, and he wants the entire kingdom for the Kauravas, not just the better half of the Kuru kingdom that he will inherit. He loathes the Pandavas, partly because he saw them as uppity and mean to him in their youth, as princes are wont to be. So as an adult, Duryodhana is scheming and vicious toward the Pandavas. But he can be kind to others, such as the low-caste Karna. "Birth is obscure," he said, "and men are like rivers whose origins are often unknown." So while the Kauravas are not all bad (it's worth noting that the elders, respected by both sides, end up supporting them, however reluctantly), the Pandavas are not all good. The Pandavas spurn and insult Karna based on his caste; Arjuna's pride leads to Eklavya chopping off his thumb—and with it his hopes and livelihood. Draupadi taunts Duryodhana over his father's blindness. And why does Yudhisthira get so little flak for gambling and losing everything twice, including his half of the Kuru kingdom (after being forgiven the first time, he is foolish enough to play again), even wagering his own wife's body? What kind of man does that? Can we trust his judgment again with a kingdom? (And this when his real father is none other than the Lord of Judgment, Dharma.)

Is it any less obscene that, while the Gita's Krishna goads Arjuna to fight the supposedly evil Kauravas, he has asked his own Yadava army to fight on the Kaurava side—because he wants to be seen as neutral? Countless foot soldiers get killed—pawns in the dharmic imperatives of big men, which we are so eager to applaud. The Pandavas, too, break the protocols of war, and we rationalize it. Why? Further, was it, or was it not, in the public interest to continue the 13 years of Kaurava rule? These are all legitimate readings, befitting great literature. I think people are too often blinded by their instinct to defend the side "God" is on, or they are too easily seduced by simple narratives of good vs. evil. Anyhow, that's my take this evening.

_________________________________

January 03, 2012

(Cross-posted on 3 Quarks Daily, where it has received many comments.)

Why the Bhagavad Gita is an overrated text with a deplorable morality at its core. This is part two of a two-part critique. (Part 1 is the appetizer with the Gita’s historical and literary context. This is the main course with the textual critique.)

__________________________________

The Bhagavad Gita, less than one percent of the sprawling Mahabharata, contains 700 verses in 18 chapters. It opens with Arjuna’s crisis on the battlefield, right before the start of the Great War. Turning to his friend and charioteer, Arjuna cries out,

‘O Krishna, I see my own relations here anxious to fight, and my limbs grow weak; my mouth is dry, my body shakes, and my hair is standing on end. My skin burns, and the bow Gandiva has slipped from my hand; my mind seems to be whirling.’

Arjuna is one of the bravest warriors alive and this visceral physical response, it is amply clear, is not due to performance anxiety, or fear of injury or death. Rather, it arises from Arjuna’s grave doubts over whether he is doing the right thing. He and his Pandava brothers wanted a minimally fair share of their material inheritance, but the devious, stubborn, and unjust Kauravas rebuffed them repeatedly. Though his cause is righteous enough, Arjuna now feels that the ends do not justify the means. He continues,

‘O Krishna, I have no desire for victory, or for a kingdom or pleasures. Of what use is a kingdom or pleasure or even life, if those for whose sake we desire these things—teachers, fathers, sons, grandfathers, uncles, in-laws, grandsons and others with family ties—are engaging in this battle, renouncing their wealth and their lives? Even if they were to kill me, I would not want to kill them, not even to become ruler of the three worlds. How much less for the earth alone? … We are prepared to kill our own relations out of greed for the pleasures of a kingdom. Better for me if the [Kauravas] were to attack me in battle and kill me unarmed and unresisting.’

Who among us can fail to be moved by Arjuna’s anguish? Hopelessly confused, Arjuna pleads with Lord Krishna to show him the way. Krishna obliges, taking on the role of a teacher to help Arjuna figure out the right course of action, which Krishna believes is to fight this war. The wisdom of the Gita—and the claim that it remains a relevant guide to our inner battlefield—is inseparable from Krishna’s advice to Arjuna. So, to evaluate the Gita, we need to evaluate the arguments Krishna uses to persuade Arjuna to fight. How good are these arguments?

I am aware that my reading of the Gita—like every other reading of it—is subjective and selective; I know that there are other ways of reading it. I have approached the Gita as expository literature, using the same yardsticks of truth and beauty that I take to other literary texts. I agree with Eknath Easwaran, an admirer and well-known translator of the Gita, when he says, ‘To understand the Gita, it is important to look beneath the surface of its injunctions and see the mental state involved.’ I have tried to do the same. I know that the Gita is not ‘mere literature’ to millions of Hindus, including many of my family and friends. It is also sacred scripture, a guide to practical wisdom, a source of personal and social identity, cultural and national pride, and more. My intention is not to offend as an end in itself, but this book review will likely unsettle many; a few will respond in angry and defensive ways. May they find in the Gita the wisdom to forgive my indiscretions.

January 01, 2012

Human cultures have long evolved folk concepts to describe and explain human behavior. This essay "highlights the potential arbitrariness of how we've carved up the psychological realm—what we take for objective reality is revealed to be shaped by culture and language."

If someone asked you to describe the psychological aspects of personhood, what would you say? Chances are, you'd describe things like thought, memory, problem-solving, reasoning, maybe emotion. In other words, you'd probably list the major headings of a cognitive psychology textbook. In cognitive psychology, we seem to take it for granted that these are, objectively, the primary components of "the mind" (even if you reject a mind/body dualism, you probably accept some notion that there are psychological processes similar to the ones listed above). I've posted previously about whether the distinction between cognitive and non-cognitive even makes sense. But, here, I want to think about the universality of the "mind" concept and its relationship to the modern view of cognition.

In fact, this conception of the mind is heavily influenced by a particular (Western) cultural background. Other cultures assign different characteristics and abilities to the psychological aspects of personhood. Wierzbicka (2005) delves into this problem in detail. She argues that speakers of a particular language make assumptions about what must be universal based on their own ability to imagine doing without a certain concept. Important cross-cultural differences in meaning become lost in translation.... Cross-linguistic research shows that, generally speaking, every culture has a folk model of a person consisting of visible and invisible (psychological) aspects. While there is agreement that the visible part of the person refers to the body, there is considerable variation in how different cultures think about the invisible (psychological) part. In the West, and, specifically, in the English-speaking West, the psychological aspect of personhood is closely related to the concept of "the mind" and the modern view of cognition. But, how universal is this conception? How do speakers of other languages think about the psychological aspect of personhood?

December 23, 2011

Alva Noë on what neuroaesthetics—a field that studies art through the insights of neuroscience—can never tell us about art:

What is art? What does art reveal about human nature? The trend these days is to approach such questions in the key of neuroscience. “Neuroaesthetics” is a term that has been coined to refer to the project of studying art using the methods of neuroscience. It would be fair to say that neuroaesthetics has become a hot field. It is not unusual for leading scientists and distinguished theorists of art to collaborate on papers that find their way into top scientific journals. ...

... Neuroaesthetics, like the neuroscience of consciousness itself, is still in its infancy. Is there any reason to doubt that progress will be made? Is there any principled reason to be skeptical that there can be a valuable study of art making use of the methods and tools of neuroscience? I think the answer to these questions must be yes, but not because there is no value in bringing art and empirical science into contact, and not because art does not reflect our human biology.

December 06, 2011

(Cross-posted on 3 Quarks Daily, where it has received many comments.)

Why the Bhagavad Gita is an overrated text with a deplorable morality at its core. This is part one of a two-part critique. (Part 1 is the appetizer with the Gita’s historical and literary context. Part 2 is the main course with the textual critique.)

__________________________________

In the mid-first millennium BCE, a great spiritual awakening was underway in areas around the middle Ganga. People were moving away from the old Vedic religion—which revolved around rituals, animal sacrifices, and nature gods—to more abstract, inner-directed, and contemplative ideas. They now asked about the nature of the self and consciousness, thought and perception. They asked if virtue and vice were absolute or mere social conventions. Personal spiritual quests, aided by meditation and renunciation of material gain, had slowly gathered pace. From this churn arose new ideas like karma and dharma, non-dualism, and the unity of an individual’s soul (atman) with the universal soul (Brahman)—all pivotal ideas in Brahmanical Hinduism.

Some of these innovations in thought soon made their way into the texts we now know as the Upanishads, setting them qualitatively apart from the earlier Vedas. All of this occurred in the context of great sociopolitical and economic changes, marked by the rise of cities, trade and commerce, social mobility, public debates, new institutions of state, and even some early republics. This was also the world of the Buddha, Mahavira, and Carvaka.

The Great War of Yore

By this time, versions of a Mahabharata story had been circulating for centuries. Perhaps inspired by a war that took place c. 950 BCE around modern Delhi (the date is tentative), the story, through oral transmission, took on a life of its own. In The Hindus: An Alternative History (2009), Wendy Doniger writes that the earliest bards who told the Mahabharata story came from a caste of charioteers, who served as drivers, confidantes, and bodyguards to the Kshatriya warrior-castes. While on military campaigns, they recited stories around campfires. (No wonder God is a charioteer in the epic! Even Karna is raised by a charioteer.) In later ages and in times of peace, many bards took their performance art to lay audiences in villages and folk festivals. The story also came to be recited during royal sacrifices, where the Brahmins gradually took over its delivery and evolution, eventually writing it down in Sanskrit. Its "final form" dates from 300 BCE-300 CE and ranges from 75K to 100K verses, seven to ten times the Iliad and the Odyssey combined. (Read an outline of the story here.)

December 05, 2011

Is neuroscience the death of free will? Not at all, says Eddy Nahmias (bio) in this well-argued piece, where he questions the simplistic notions of free will employed by neuroscientists.

Is free will an illusion? Some leading scientists think so. For instance, in 2002 the psychologist Daniel Wegner wrote, “It seems we are agents. It seems we cause what we do… It is sobering and ultimately accurate to call all this an illusion.” More recently, the neuroscientist Patrick Haggard declared, “We certainly don’t have free will. Not in the sense we think.” And in June, the neuroscientist Sam Harris claimed, “You seem to be an agent acting of your own free will. The problem, however, is that this point of view cannot be reconciled with what we know about the human brain.”

Such proclamations make the news; after all, if free will is dead, then moral and legal responsibility may be close behind. As the legal analyst Jeffrey Rosen wrote in The New York Times Magazine, “Since all behavior is caused by our brains, wouldn’t this mean all behavior could potentially be excused? … The death of free will, or its exposure as a convenient illusion, some worry, could wreak havoc on our sense of moral and legal responsibility.”

Indeed, free will matters in part because it is a precondition for deserving blame for bad acts and deserving credit for achievements. It also turns out that simply exposing people to scientific claims that free will is an illusion can lead them to misbehave, for instance, cheating more or helping others less. So, it matters whether these scientists are justified in concluding that free will is an illusion.

Here, I’ll explain why neuroscience is not the death of free will and does not “wreak havoc on our sense of moral and legal responsibility,” extending a discussion begun in Gary Gutting’s recent Stone column. I’ll argue that the neuroscientific evidence does not undermine free will. But first, I’ll explain the central problem: these scientists are employing a flawed notion of free will. Once a better notion of free will is in place, the argument can be turned on its head. Instead of showing that free will is an illusion, neuroscience and psychology can actually help us understand how it works.

November 28, 2011

An insightful talk by Georg Sørensen on the world order today, where he considers four dimensions: (a) war and peace, (b) global economy, (c) institutions and governance, and (d) global environment. Sørensen is distinguished professor of international politics and economics in Denmark and has written fifteen books "on international relations and development issues. His research areas include society and politics, international community, democracy and development, prospects for a liberal world order, transformations of the state and its effects on international relations." If you like this, check out another recent talk by him on Democracy and Democratization.

In his new book, A Liberal World Order in Crisis, Sørensen quotes me in his final chapter (from my essay, Being Liberal in a Plural World).

