The Left has traditionally assumed that human nature is so malleable, so perfectible, that it can be shaped in almost any direction. By contrast, a Darwinian science of human nature supports traditionalist conservatives and classical liberals in their realist view of human imperfectibility, and in their commitment to ordered liberty as rooted in natural desires, cultural traditions, and prudential judgments. Arnhart's email address is larnhart1@niu.edu.

Friday, January 29, 2016

In many of my posts over the years, I have defended an evolutionary classical liberalism, which I see as a tradition of thought that stretches from David Hume and Adam Smith to Herbert Spencer and Charles Darwin and to Friedrich Hayek and Matt Ridley. This evolutionary classical liberalism can be rooted in a universal history of cosmic evolution like that presented by Eric Chaisson and David Christian.

Chaisson is an astrophysicist who sees the entire history of the Universe from the Big Bang 14 billion years ago to the present as showing an evolution from simplicity to complexity that passes through eight epochs: the Particle Epoch, the Galactic Epoch, the Stellar Epoch, the Planetary Epoch, the Chemical Epoch, the Biological Epoch, the Cultural Epoch, and the Future Epoch. Chaisson has a website that sketches his account of this cosmic evolution, which is elaborated in his Epic of Evolution: Seven Ages of the Cosmos (Columbia University Press, 2006) and in his Astronomy Today (8th edition, Pearson, 2014).

Ever since the emergence of human self-conscious awareness, human beings have wondered about how the world came to be, how humans came to be, and how the human place in the world illuminates the meaning of human life. To answer their questions, human beings have told themselves myths about cosmic history, and generally these myths have appealed to religious beliefs about the powers of supernatural beings.

Chaisson says that his story of cosmic evolution is also a "cultural myth" (Epic, 426). But it's a scientific myth that does not rely on beliefs about supernatural beings or philosophical speculation, because modern science, as it began in the Renaissance, can achieve true knowledge through the scientific method: gathering relevant data, formulating theories, testing those theories through rigorous observation and experiment, and rejecting those that fail to be empirically confirmed. Without mentioning Karl Popper, Chaisson assumes Popper's standard of falsifiability for science: a theory is not truly scientific if it is not in principle empirically testable, and a theory is falsified when its empirical predictions fail.

I wonder whether this is true, or whether any science of cosmic evolution must confront the ultimate limits to science in facing fundamental mysteries of nature that are not open to observational or experimental study. Herbert Spencer set forth a scientific account of cosmic evolution that is very similar to Chaisson's. Like Chaisson, Spencer saw a cosmic evolution from simplicity to complexity, from homogeneity to heterogeneity, which could be explained through natural laws. But unlike Chaisson, Spencer thought that increasing scientific knowledge reveals "the ultimate mystery of things," and thus provides "a firmer basis to all true Religion" ("Progress: Its Law and Cause," 484). Modern science shows the power of the human intellect in explaining everything that comes within the range of human experience. But it also shows the weakness of the human intellect in dealing with all that transcends human experience.

In studying the origin of the external world, Spencer observed, we can hypothesize that all matter originally existed in a diffused form, but we cannot prove this, and we cannot conceive how this came to be, because we can have no experience of the distant origin of all things. Likewise in studying the internal world of our human mind, we cannot conceive of how consciousness is possible, of how mind emerges from matter. And thus we see that "absolute knowledge is impossible," because "under all things there lies an impenetrable mystery" (485). If science is limited to thinking about nature that is empirically testable and falsifiable, Spencer would seem to say, then there can be no scientific understanding of those realms of nature that are beyond human observation.

That this might be true has recently become a hotly contested issue among some scientists and philosophers of science. About a year ago, two physicists--George Ellis and Joe Silk--wrote an article for Nature complaining that some physicists were threatening the intellectual integrity of science by asserting that if speculative theories of the Universe were sufficiently elegant and explanatory, they should be accepted as true, even if they could never be empirically tested. This would deny the traditional understanding of science as empirical knowledge based on observational evidence and experimental testing, which might make it impossible to distinguish scientific knowledge from philosophical speculation or religious belief (Ellis and Silk, "Defend the Integrity of Physics," Nature 516 [2014]: 321-23).

According to Ellis and Silk, string theory and the theory of the multiverse are prime examples of theories accepted by many physicists that are not testable. According to string theory, elementary particles should be understood as infinitesimally thin vibrating one-dimensional strings in multidimensional space that are too small to be seen through microscopes or detected through collisions in any particle collider. According to the idea of the multiverse, our universe is just one of many universes with different cosmic and physical properties. Scanning over all possible universes, everything that can physically happen does happen an infinite number of times. But it's impossible for us in our universe to observe these other universes. Since these theories cannot be tested by observational or experimental evidence, Ellis and Silk argue that they are not scientific theories at all, because science should be limited to theories that are testable.

In December, a meeting of scientists and philosophers of science was convened in Munich to debate the issues raised by Ellis and Silk. A report on the meeting can be found at the website for Quanta Magazine.

David Gross, a proponent of string theory, opened the meeting by arguing that the problem identified by Ellis and Silk is simply a "fact of nature"--the fundamental constituents of nature are either too small, too far away, or too far in the past to be observed directly by us or indirectly through our instruments, and thus nature's secrets are buried so deep or so far away that we have no way to test our theoretical speculations about them.

Gross offered a sketch of the limits on the scale of human observational experience of nature. At the small scale, microscopes have extended our experience beyond our visual reach, and the Large Hadron Collider (LHC) has extended it deeper still. We have gone from scales of centimeters to millionths of a millionth of a millionth of a centimeter. But we have reasons to believe that the fundamental constituents of nature that string theory attempts to describe lie at a distance scale 10 million billion times smaller than the resolving power of the LHC. At the cosmic scale, telescopes have extended our experience of the astronomical universe; but no telescope will ever look beyond our universe's cosmic horizon and see the other universes assumed by the multiverse hypothesis. [In Gross's diagram, a white band marks the range of scales within human experience, and a grey band the range outside it.]
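The arithmetic behind "10 million billion" can be made explicit, assuming (as illustrative round numbers, not figures given in the source) an LHC resolving power of about 10^-19 meters and a string scale near the Planck length of roughly 10^-35 meters:

```latex
\frac{\text{LHC resolving power}}{\text{string scale}}
  \approx \frac{10^{-19}\,\text{m}}{10^{-35}\,\text{m}}
  = 10^{16}
  = \underbrace{10^{7}}_{10\ \text{million}} \times \underbrace{10^{9}}_{\text{billion}}
```

So even a collider a million times more powerful than the LHC would still fall many orders of magnitude short of direct observation at the string scale.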

Philosophers of science like Richard Dawid (String Theory and the Scientific Method) have argued that this shows the need for Popperian falsificationism to be replaced with Bayesian confirmation theory, which allows for a non-empirical science in which we rate the confidence we have in a theory from zero to 100 percent. We can be confident in a theory even when it makes claims about phenomena that we cannot directly observe. No one has ever directly seen an atom or subatomic particles, but we can be confident in atomic theory based on indirect inferences using instruments. We might even have good reasons for accepting string theory, even though we might never have either direct or indirect empirical evidence for it.
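The Bayesian updating described above can be sketched as a toy calculation. The prior and likelihoods here are invented for illustration, not drawn from Dawid's book: the point is only to show how repeated pieces of indirect evidence can raise confidence in a theory without any direct observation.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a theory after observing evidence,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# Start with modest confidence (20%) in a theory.
confidence = 0.2

# Suppose each piece of indirect evidence is 3x more likely
# if the theory is true (0.6) than if it is false (0.2).
for _ in range(3):
    confidence = bayes_update(confidence, 0.6, 0.2)

print(round(confidence, 3))  # confidence climbs to about 0.871
```

Three such observations push confidence from 20% to roughly 87%, which is the sense in which Bayesian confirmation can grade a theory's credibility on a continuous scale rather than issuing a binary falsified/not-falsified verdict.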

At the Munich conference, some defenders of Popperian falsificationism pointed out that Popper recognized the importance of speculative theories that lack testable predictions. He identified such theories as "metaphysics," and he saw that such metaphysical thinking could be taken seriously, with the thought that eventually it might become possible to devise empirical tests for a metaphysical theory.

Chaisson adopts the Popperian position when he describes the scientific method that was first formulated by scientists in the Renaissance:

"They realized that thinking about Nature was no longer sufficient. Looking at it was also necessary. Experiments became a central part of the process of inquiry. To be effective, ideas had to be tested experimentally, either to refine them if data favored them or to reject them if they did not. The 'scientific method' was born--probably the most powerful technique ever conceived for the advancement of factual information. Modern science had arrived" (Epic, x).

With this standard in mind, Chaisson rejects string theory and the multiverse as unscientific theories: "I see no evidence for cosmic strings, eleven dimensions, or multiple universes" (xvi). Such thinking "borders on science fiction (or even religion)" (75). Such ideas cannot be truly scientific, because they depend on thinking without looking, speculative thought without empirical observation.

An alternative position would be to say that while our knowledge of the world is most reliable when it is based on both thinking and looking, we must ultimately come up against the limits of human observational experience, and then we must settle for thinking without looking, even as we hope to someday push the boundaries of our experience deeper into nature, so that we can empirically test our theories. Here we must fall back onto what Chaisson calls "informed speculation" (67).

Consider, for example, what might be the most difficult metaphysical question of all: Why is there something rather than nothing, and why are things as they are and not different? The modern theory of the Big Bang as the beginning of the Universe forces us to ask this question. What was there before the Big Bang? Why did it occur? What or who caused it?

In a previous post, I have argued that Why is there something rather than nothing? is a meaningless question, because it rests on two false assumptions. First, it falsely assumes the possibility of absolute nothingness. Since human beings have no experience of absolute nothingness, whereas all of our experience confirms the being of things, there is no empirical evidence for absolute nothingness. Even the very idea of nothingness as a product of the theological imagination pondering the doctrine of creation ex nihilo is dubious, because in the absence of any empirical evidence, I doubt that people even understand what they are saying when they ask why the world arose out of nothingness.

Chaisson comes close to agreeing with me when he says that questions about the origins of the Big Bang cannot be addressed by modern science, because there are no ways to experimentally test our thinking about such questions (Epic, 33). Asking what came before the Big Bang might be a "meaningless puzzle" (57). And yet he thinks it is reasonable for scientists to speculate about such questions by formulating various models of the Universe.

One is the steady-state model that stipulates that the average density of the Universe is eternally constant. Although the Universe is expanding, the average density of matter remains the same. For this to be true, however, new matter must be constantly created out of nothing, and thus the steady-state theorists still face the problem of explaining how something arises from nothing, which seems to contradict the scientific principle of the conservation of mass and energy.

A second model of the Universe is that of the "open Universe," in which the Universe expands eternally from the Big Bang. A third model is that of the "closed Universe," in which the Universe expands from the Big Bang until the gravitational pull of matter pulls everything back, and matter collapses back onto itself in a "big crunch." Both of these models face the problem of explaining the initial Big Bang.

A fourth model is that of the "cyclic Universe," in which the Universe expands from a big bang, collapses back onto itself, and then expands again from another big bang; so that the Universe oscillates eternally. In this view, there is no single, unique beginning that has to be explained, because the oscillation of the Universe is eternal. For that reason, Chaisson observes, "subjectively, in our guts, many researchers prefer it," because this model avoids the philosophical problem of what preceded the Big Bang (32). But notice that Chaisson appeals here to some subjective "gut feeling" for which there is no objective empirical test. Isn't this thinking without looking?

Repeatedly, Chaisson first affirms the truth of some mysterious idea in science and then admits that it might be purely imaginary. So, for example, he says that black holes "apparently really do exist" (95), but then says that they might be "a whim of human fantasy," because there can be no evidence, either direct or indirect, for their existence (106). There are no experimental tests for explaining what happens deep inside black holes, and so this might be "the ultimate in the unknowable" (101).

Sometimes, Chaisson insists that scientific cosmology can supplant religious myth (418). At other times, however, he says that we might need "a merger of science and religion" (76).

Was Spencer right in seeing "the ultimate mystery of things" as providing "a firmer basis to all true Religion"?

Friday, January 22, 2016

Skeletal Remains of Early Holocene Hunter-Gatherers Killed in a Massacre at the Site of Nataruk, West of Lake Turkana, Kenya

One of the fundamental debates in the history of political philosophy is over whether the state of nature was a state of peace or a state of war. Thomas Hobbes, John Locke, and Jean-Jacques Rousseau all agree that the first human beings lived as foraging hunter-gatherers, but they disagree about whether this original human condition was generally violent or generally peaceful. Hobbes claimed that without any government to enforce peace, life among these first human beings must have been an utterly lawless war of all against all. Locke inferred from reports about hunter-gatherer bands in America that life in a state of nature could be a state of peace, but it could easily become a state of war. Rousseau thought that the evidence refuted both Hobbes and Locke in suggesting that the first human ancestors were peaceful, and that war did not arise until the invention of agriculture led to a less nomadic and more settled social life.

As I have indicated in my many posts on this issue, this has continued to be one of the most contentious debates in the social sciences, with some social scientists (such as Richard Wrangham) arguing for the Hobbesian position that the evolution of our foraging human ancestors was shaped by warfare, and others (such as Douglas Fry) arguing for the Rousseauan position that nomadic foragers were generally peaceful, and that war was a cultural invention of the agricultural societies that emerged 5,000 to 10,000 years ago.

Now that we have more archaeological and anthropological evidence than was available to Hobbes, Locke, and Rousseau, we are reaching the point where we might settle this debate. I have argued that the evidence suggests that Hobbes was partly right, Rousseau was mostly wrong, and Locke was mostly right. Locke was right in seeing that foraging human bands can enforce customary laws of cooperation that secure a peaceful life, but that in the absence of formal governmental rule, feuding often leads to war.

And yet while the evidence pertinent to this debate has been growing, the evidence is still limited in ways that make it difficult to reach any demonstrative proof. Inferences about warfare among prehistoric foragers from ethnographic reports about foraging societies are open to dispute, because modern hunter-gatherers might be quite different from prehistoric hunter-gatherers, and because we can argue about whether acts of violence against individuals or small groups of individuals should be described as warfare. The archaeological evidence for prehistoric warfare among foragers has been severely limited, because when prehistoric skeletons show signs of homicidal violence, it's hard to know whether this should be interpreted as showing warfare rather than inter-personal violence within a society.

But now in this week's issue of Nature, we have a report of the first prehistoric skeletal evidence for a massacre in a foraging society: Marta Mirazon Lahr, et al., "Inter-Group Violence among Early Holocene Hunter-Gatherers of West Turkana, Kenya," Nature 529 (21 January 2016): 394-398. The New York Times has an article about this.

A team of scientists reports the discovery of the first clear evidence of the intentional killing of a small band of foragers in prehistory. In 2012, they found the skeletal remains of 27 individuals at the site of Nataruk, west of Lake Turkana in Kenya. The skeletal remains are dated at around 10,000 years ago. 12 of the individuals were preserved as articulated skeletons. 10 of these 12 show evidence of major traumatic lesions that would have been lethal. There are fractures from violence in heads, necks, hands, and ribs. For example, one of the skulls shows multiple traumatic lesions to the cranium, involving blunt force to the frontal and left temporal bones. In one case, a projectile was found embedded in a cranium. In another case, projectiles were found within a body cavity. There is no evidence of burial for any of these skeletal remains.

The Hobbesians like Richard Wrangham will see this as one of the best archaeological discoveries for confirming that warfare was part of human evolution in prehistoric foraging societies before the invention of agriculture.

But the Rousseauans like Douglas Fry will say that skeletal evidence like this from one site is not enough to be conclusive. As quoted in the New York Times article, Fry has also said that these foragers might have been moving away from a purely nomadic life to a more settled life, and thus this would not be evidence for warfare among nomadic foragers.

I see evidence that the co-authors of the article in Nature disagree with one another over whether their discovery supports the Hobbesian view or the Rousseauan view. Here's the last paragraph of their article:

"As one of the clearest cases of inter-group violence among prehistoric hunter-gatherers, the event recorded at Nataruk offers information on the socio-economic conditions that marked the presence of warfare. However, there are two interpretations of how this fact impinges on our understanding of war among foraging societies. West Turkana 10,000 years ago was a fertile lakeshore landscape sustaining a substantial population of hunter-gatherers; the presence of pottery may be indicative of some storage and so reduced mobility. Thus, the massacre at Nataruk could be seen as resulting from a raid for resources--territory, women, children, food stored in pots--whose value was similar to those of later food-producing societies among whom violent attacks on settlements and organized defence strategies became part of life. In this light, the importance of what happened at Nataruk would be in terms of extending the chronology and degree of the same underlying socio-economic conditions that characterize early warfare in more recent periods. Alternatively, Nataruk may offer evidence not of changing conditions towards a settled, materially richer, and demographically denser way of life, but of a standard antagonistic response to an encounter between two social groups. As such, Nataruk would be important for the particular circumstances that preserved an ephemeral, but perhaps not unusual, event in the life of prehistoric foraging societies. In either case, the deaths at Nataruk are testimony to the antiquity of inter-group violence and war" (397).

The first of these two conflicting interpretations supports a modified version of the Rousseauan view. The second interpretation supports the Hobbesian view. The news reports about this article (here, here, and here) suggest to me that Marta Mirazon Lahr, the lead author of the article, is taking the Hobbesian view, while Robert Foley, a co-author, is taking a modified Rousseauan view.

In the Discourse on the Origin of Inequality, Rousseau claimed that warfare began with the invention of agriculture, which he called the "great revolution," because it brought the territorial settlements, the accumulation of property and the status hierarchies that provided the conditions for war. But Rousseau also recognized that before agricultural societies, the "first revolution" in human history was the establishment of family life in settled societies that produced some forms of property.

Some anthropologists today might see this as the pre-agricultural revolution in which nomadic foragers moved towards becoming complex foragers living a somewhat settled existence. If there were some abundant source of food, such as fish along a shoreline or in a river, foragers could camp there, either seasonally or year-round. They could then store some of their food. And their population could grow. But once they took control of this territory and its abundant resources, they became targets for other foraging bands who might want to take their property, and thus began warfare. There are many examples of this in the ethnographic and archaeological record. The most famous example is the complex foraging societies that formed on the Northwest Coast of North America. Once formerly nomadic foragers became dependent on the intense harvesting of aquatic resources, they settled into permanent villages, in which there was increasing population and inequality in property and status, and which produced competition over valuable resources (stored food, territory, and women) that led to warfare. These complex foraging settlements appeared thousands of years before agricultural settlements. By contrast, fully nomadic foragers show inter-personal violence within their bands, but not inter-group warfare.

As quoted in the Washington Post, Robert Foley seems to adopt this interpretation of the foragers who were killed at Nataruk:

"According to Foley, the skeletons appear to have belonged to a group of hunter-gatherers living at the time on the lush, marshy edge of a lagoon where they used bone harpoons to fish and hunt. They were probably more sedentary than most foraging communities, as there are indications that the environment was quite rich.

"Although any guesses as to why they were killed are speculations, Foley said it is possible that another group found the area attractive and competed for it."

Similar quotations from Foley appear in the story at LiveScience:

"The number of casualties rules out the notion of an interfamily feud, Foley said. More people from the group may have been killed, and still others may have escaped, which suggests the group was larger than the average hunter-gatherer group. (Most hunter-gatherer groups tend to hover around 25 to 30 people per encampment, Foley said.) And given the simple tools used to deal death, the attacking group was probably larger still, he added.

"This idea suggests that the two warring groups were likely more settled than the average hunter-gatherer population, Foley said. That's not surprising, as hunter-gatherers who tend to stay in one place for longer periods often live near lakes, where food is plentiful and unlikely to be depleted by long stays, he added.

"'That fits into the idea of a slightly more densely packed population where intergroup conflict is likely to arise,' Foley said. 'It's quite difficult to have a war with a highly mobile group that's very dispersed.'"

Douglas Fry agrees with Foley in seeing this prehistoric warfare in Nataruk as showing the intergroup violence that arises among complex foragers, but not among fully nomadic foragers. Thus, Fry can argue that this discovery supports his Rousseauan view that throughout most of human evolutionary history, when human beings lived as nomadic foragers, there was no warfare; and consequently war must be seen as a cultural invention of recent history.

Lahr, however, seems to disagree with Foley and Fry. As quoted by Deborah Netburn of the Los Angeles Times, Lahr says that while there is lots of evidence of warfare "among settled, sedentary communities," the discovery in Nataruk is the first "archaeological record of armed conflict between early nomadic hunter-gatherer groups." She suggests that the foragers who were massacred had not established a settlement on the lake, but rather they were a "small traveling band of hunter-gatherers who stopped by a lagoon to hunt or fish." And so, she seems to be adopting the Hobbesian interpretation of this archaeological discovery as confirming that warfare was prevalent among our earliest foraging ancestors, and thus deeply rooted in our evolved human nature.

Some of my posts on this debate over the evolution of war can be found here, here, here, here, and here, which include links to many other posts on this.

Monday, January 18, 2016

The term "liberalism" has had a confusing history. When this English word was first coined in the 1820s in Great Britain, it referred to the political and economic ideas of those who stressed individual liberty in both politics and economics, so that government should be limited to protecting individual liberty, and should therefore not intervene in the social and economic life of the community except to protect life, liberty, and property. But then by the end of the century, some of those who called themselves liberals were supporting social welfare policies in which government intervened in social and economic life in ways that denied individual liberty. Now, in the United States, those who favor interventionist government--those like Hillary Clinton and Barack Obama--are known as liberals, while those who favor protecting individual liberty from governmental intervention are called conservatives or libertarians. Sometimes the original meaning of liberalism is identified as "classical liberalism" as distinguished from "modern liberalism" or "left liberalism."

Daniel Klein of George Mason University has been leading an intellectual campaign for restoring the original meaning of "liberalism" as devoted to individual liberty and limited government. As part of that campaign, he has presented evidence that the use of the word "liberal" in its political sense originated with Adam Smith and other Scottish thinkers as a term denoting what Smith called "the system of natural liberty." He has presented his research as an essay on the website of The Atlantic, which includes a video of a lecture that elaborates his reasoning.

Scholars who have traced the history of the English word "liberalism" have often claimed that "liberal" as a political term originated on the European Continent at the beginning of the nineteenth century, and was then imported into Great Britain in the 1820s, when a suffix was added to coin the word "liberalism." In The Constitution of Liberty, Friedrich Hayek noted that it was often suggested that the word "liberal" derived from the early nineteenth-century Spanish party of the liberales (first edition, 530; definitive edition, 529). But Hayek indicated that he believed that this political sense of "liberal" derived from Adam Smith's language in The Wealth of Nations, where Smith identified "the liberal system of free exportation and free importation" (Liberty Fund, 538), and where he spoke of "allowing every man to pursue his own interest his own way, upon the liberal plan of equality, liberty, and justice" (Liberty Fund, 664).

Klein argues for the Hayek thesis as opposed to the importation thesis. Google has scanned millions of books, and Google's Ngram Viewer allows researchers to study this digital data to see how words have been used over the centuries. Klein has used this to study the meaning of the adjective "liberal." Prior to 1769, "liberal" had only non-political meanings, such as being generous, noble, or having superior status (as in "liberal arts" and "liberal education"). Beginning in 1769, there was a sudden jump in the number of times that writers used political terms such as "liberal policy," "liberal plan," "liberal system," "liberal views," "liberal ideas," and "liberal principles." In 1769, William Robertson was the first writer to use such terms in his book The History of the Reign of the Emperor Charles V.
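Klein's method, stripped to its essentials, is to search a corpus for the earliest year in which a phrase like "liberal system" is attested. A minimal sketch of that kind of first-attestation search is below; the phrases are real, but the year-by-year counts are invented placeholders, not actual Google Ngram data.

```python
# Toy first-attestation search over phrase frequencies by year.
# The counts below are invented placeholders, NOT real Ngram Viewer data.
usage = {
    "liberal system": {1750: 0, 1760: 0, 1769: 2, 1776: 9, 1790: 40},
    "liberal plan":   {1750: 0, 1760: 0, 1769: 1, 1776: 5, 1790: 22},
}

def first_attested(phrase, counts):
    """Return the earliest year with a nonzero count for the phrase,
    or None if the phrase never appears in the sample."""
    years = sorted(year for year, n in counts.items() if n > 0)
    return years[0] if years else None

for phrase, counts in usage.items():
    print(phrase, "->", first_attested(phrase, counts))
```

With real Ngram exports in place of the placeholder dictionary, the same loop would reproduce Klein's observation that the political uses of "liberal" cluster suddenly after 1769.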

Robertson was a friend of Adam Smith, who began to use "liberal" in this way in 1776, in his Wealth of Nations. He argues in favor of "the liberal system of free exportation and free importation," which would turn all of Europe into a single great empire bound together by free trade (538). In explaining this "liberal system," he thinks it applies as much to free trade in religion as to free trade in corn. "The people feel themselves so much interested in what relates either to their subsistence in this life, or to their happiness in life to come, that government must yield to their prejudices, and, in order to preserve the public tranquility, establish that system which they approve of" (539). As long as the laws secure "to every man that he can enjoy the fruits of his own labor," the "natural effort of every individual to better his condition" will carry a society to wealth and prosperity (540). Smith also speaks of this liberal system of freedom to trade as "allowing every man to pursue his own interest his own way, upon that liberal plan of equality, liberty, and justice" (664).

Smith's description of the "liberal system" suggests that it coincides with what he calls the "system of natural liberty," because in both cases, he speaks of a man's freedom "to pursue his own interest his own way":

"All systems either of preference or of restraint, therefore, being thus completely taken away, the obvious and simple system of natural liberty establishes itself of its own accord. Every man, as long as he does not violate the laws of justice, is left perfectly free to pursue his own interest his own way, and to bring both his industry and capital into competition with those of any other man, or order of men. The sovereign is completely discharged from a duty, in the attempting to perform which he must always be exposed to innumerable delusions, and for the proper performance of which no human wisdom or knowledge could ever be sufficient; the duty of superintending the industry of private people, and of directing it towards the employments most suitable to the interest of society. According to the system of natural liberty, the sovereign has only three duties to attend to; three duties of great importance, indeed, but plain and intelligible to common understandings: first, the duty of protecting society from the violence and invasion of other independent societies; secondly, the duty of protecting, as far as possible, every member of the society from the injustice or oppression of every other member of it, or the duty of establishing an exact administration of justice; and, thirdly, the duty of erecting and maintaining certain public works and certain public institutions, which it can never be for the interest of any individual, or small number of individuals, to erect and maintain; because the profit could never repay the expense to any individual or small number of individuals, though it may frequently do much more than repay it to a great society" (687-88).

Thus, the "liberal system" or the "system of natural liberty" does require government, but the duties of government are limited. The first two governmental duties are clear--protecting the individuals of a society from foreign attack and from the unjust attacks of other members of the society. But the third duty--providing public works or public institutions--is more open to debate. What Smith says about this in Book 5 of Wealth of Nations has been interpreted by some readers as suggesting that Smith might today be a left liberal or even a Marxist in favoring an interventionist government with a welfare state and redistribution of income. If you look at the video of Klein's lecture, you will see that the commentator on the lecture raises this point, and Klein acknowledges that this is a possible reading of Smith, although Klein favors the reading that sees Smith as a classical liberal who would not have supported left liberalism.

In any case, Klein has, I think, proven that Hayek was right in seeing the political sense of "liberal" as originating with Smith, so that the "liberal system" is to be equated with Smith's "system of natural liberty." Later, in Great Britain, the English noun "liberalism" was coined to refer to this Smithian understanding of "liberal" thought. The earliest use of "liberalism" that I have noticed is by Alexander von Humboldt in a letter to Thomas Jefferson on May 24, 1804, in which Humboldt writes: "Your writings, your actions, and the liberalism of your ideas have inspired me from my earliest youth." Later, Jefferson himself, in a letter to John Adams (January 22, 1821), spoke of "the advance of liberalism" that was shown in the 1820 revision of the Massachusetts Constitution favoring religious liberty.

Although Smith does not use the word "evolution," his account of the "liberal system" does have an evolutionary character to it. Hayek noticed this and developed it in his account of the liberal idea of "spontaneous evolution" or "spontaneous order." Smith's system of natural liberty is a spontaneous order that evolves from the bottom-up rather than being designed from the top-down. It is a natural evolutionary order in that it arises from the natural desire of all individuals to better their condition, which leads to wealth and prosperity whenever the laws secure to individuals the liberty to enjoy the fruits of their own labor.

In the nineteenth century, Herbert Spencer elaborated the principle of equal liberty as fundamental for liberalism, and he presented it as part of a cosmic evolution of order in which the whole history of the Universe could be seen as an evolution from simplicity to complexity.

In 1859, the Liberal Party of Great Britain was established under the leadership of William Gladstone; and Charles Darwin's Origin of Species was published. Darwin became an enthusiastic supporter of the Liberal Party, and one can see his liberalism in his Descent of Man, particularly in his criticism of slavery.

I will be writing more about the cosmic evolution of Darwinian liberalism.

Saturday, January 16, 2016

In recent years, it has become common for many political leaders to say that economic inequality--the growing gap between the few who are very rich and the many who are very poor--is the greatest political problem for the United States and many European countries. Two years ago, Thomas Piketty's Capital in the Twenty-First Century became a best-selling book warning about the growing economic oligarchy in the leading capitalist countries and recommending that the only solution was a redistribution of wealth through confiscatory tax rates of 80% to 90% for the wealthiest.

There has been little attention, however, given to one important point raised in Piketty's book: "Inequality is not necessarily bad in itself: the key question is to decide whether it is justified, whether there are reasons for it" (19). As one can see in Piketty's book, he thinks the inequality that he sees in the advanced capitalist countries is not justified, because the very wealthy haven't really earned their wealth, and those in the lower classes have no opportunity to improve their condition.

But is that really true? Is it true that economic inequality shows a rigid class structure, in which those at the top stay at the top, and those at the bottom have no chance to rise? Or is there a lot of mobility, with people moving up and down the class structure? If there is such mobility, wouldn't that be a good form of inequality?

In fact, there is a lot of evidence for such mobility. Economists who study this have shown that over 50 percent of Americans will be in the top 10 percent of income-earners for at least one year in their lives. Over 11 percent of Americans will be among the top 1 percent of income-earners (people making a minimum of $332,000 per year) for at least one year in their lives. 94 percent of the Americans who join the top 1 percent group will keep that status for only one year.

Moreover, the factors that explain higher household incomes among Americans are not fixed over a lifetime, and they are to some degree a matter of personal decisions, which means that people are not forced to remain in one income bracket for their whole lives. American households with higher than average incomes tend to be households where the members are well-educated, in their prime earning years (between the ages of 35 and 64), working full-time, and are in stable marriages. Households with lower than average incomes tend to be households where the members are less-educated, outside their prime earning years, unemployed or working only part-time, and they are likely to be unmarried.

A large part of the growth in economic inequality among Americans over the past 40 years has been a result of assortative mating: college students marry people they have met in college, and then form two-income households with the higher income levels correlated with higher education. These "power couples" are then in a position to help their children become successful, because their children will inherit the good genes of their parents as well as the good rearing environments that their parents provide. Since high educational achievement is correlated with high IQ, and since the higher-paying jobs in a highly technological and mentally challenging economy require higher intelligence, what we see here is the emergence of what Charles Murray has called a "cognitive elite." So if we really wanted to reduce economic inequality, we would have to prohibit intelligent and well-educated people from marrying other intelligent and well-educated people.

Consequently, people can raise their chances of becoming wealthy by getting a good education, by getting married to other well-educated people, by getting lots of professional work experience, and by forming two-income households. When people do this, they create economic inequality. But this is good inequality.

Thursday, January 14, 2016

In some previous posts (here, here, here, here, here, and here), I have argued that Aristotle's arguments for the supremacy of the contemplative life in book 10 (chapters 7 and 8) of the Nicomachean Ethics are dubious in ways that suggest that Aristotle is not endorsing these arguments, and that the true peak of the Ethics is in the books on friendship, where Aristotle presents an inclusive conception of the human good as a range of moral and intellectual goods. In this way, I am challenging the Straussian reading of the Ethics as moving towards the transmoral and transpolitical life of philosophy in book 10 as the peak of the human good.

If this is correct, then Aristotle's arguments for the philosophic life as the only truly good life, set apart from and above the moral life, might be seen as an exercise in rhetoric for those of his readers who are Platonic philosophers. Aristotle indicates in the Politics (1267a1-13) that for those who desire a life of pure pleasure unmixed with pain that can be enjoyed without dependence on other people, philosophy is the best remedy. For such people, describing philosophic contemplation as a godlike, self-contained activity might be the best rhetorical strategy.

In Aristotle's Rhetoric, he emphasizes that the successful rhetorician must respect the opinions of his audience. Particularly, in epideictic rhetoric of praise and blame, he explains:

"It is necessary to consider in whose presence we praise; for, as Socrates said, it is not difficult to praise Athenians among Athenians. We must also speak of what is honored by the particular audience as actually existing there, such as among Scythians, Lacedaemonians, or philosophers. And generally what is honored is to be referred to the noble, since they seem to border upon one another" (1367b7-12).

This remark occurs in the context of Aristotle's claim that the rhetorician must praise what appears noble to the audience, and the audience tends to assume that what they honor is truly noble. The problem is that different audiences honor different things and therefore have different conceptions of the noble. The rhetorician must respect those differences. Furthermore, Aristotle indicates that philosophers constitute a distinct audience in that they have their own standards of honor and nobility. It seems likely, therefore, that when Aristotle, in the Ethics, praises the activity of solitary philosophic contemplation as the most honorable and noblest activity (1141a20, 1141b3, 1177a16, 1178a2), he is making a rhetorical appeal to the Platonic philosophers among his readers. It is not difficult to praise philosophers among philosophers.

The common opinion of philosophers, Aristotle indicates, is that the philosophic life is godlike because it consists in the contemplation of objects that are unchangeable and eternal. For Plato, the unchangeable and eternal objects of philosophy are the Ideas. Aristotle rejects the doctrine of the Ideas, particularly as applied to the study of the human good. The Idea of the Good, Aristotle argues in the Ethics, "is no more good by being eternal, just as a white thing that exists for a long time is not whiter than a white thing that exists for a day" (1096b3-4).

More common than Plato's doctrine of the Ideas, Aristotle suggests in the Ethics, is the belief that the unchangeable and eternal objects of philosophic contemplation are the heavenly bodies--the Sun, the Moon, and the stars. It is commonly thought that the life of wisdom is completely lacking in prudence, because while the prudent man studies the changeable and contingent affairs of human life, the wise man studies the unchangeable and eternal bodies of the cosmos (1141a30-41b2).

Although Aristotle speculates in some of his writings on the nature of the heavenly bodies, he concedes that such speculations are largely matters of "faith" (pistis) that depend on traditional myths handed down from the earliest times, and therefore such matters cannot be settled by demonstrative reasoning (Topics, 104b1-18; On the Heavens, 270b1-26, 279b4-12, 283a30-84b5, 291b24-92a10; Metaphysics, 1074b1-14). Consequently, Moses Maimonides could claim that Aristotle's arguments for the eternity of the cosmos were rhetorical rather than demonstrative (The Guide of the Perplexed, II, 13-15).

In contrast to the common view that the highest activity of philosophy is the cosmological study of the unchanging and immortal bodies of the heavens, a large part of Aristotle's philosophical writings is devoted to the biological study of the contingent and mortal bodies of living organisms. At the beginning of his Parts of Animals, Aristotle defends the philosophical dignity of his biological studies:

"Of substances that are composed by nature, some are ungenerable and indestructible throughout eternity, while others partake of generation and destruction. The former are honorable and divine but less subject to investigation by man (for there is little evidence from sensation that we can use to make inquiries about those things that we aspire to understand); but concerning plants and animals, which are destructible, there is much more information to use for knowledge, because they are all around us. . . . The knowledge of terrestrial things exceeds that of divine substances because of its greater accuracy and scope, and our knowledge of terrestrial things has the advantage that they are nearer to us and more akin to our nature. . . . Even in the case of those animals that do not delight our senses, nevertheless the nature that designed them gives inconceivable pleasures to those of us who are by nature philosophers and are able to gain theoretical knowledge of causes" (644b22-45a11).

Aristotle's defense of biology as a philosophical science is important for our reading of his Ethics and Politics, because this should lead us to consider the possibility that for Aristotle ethics and politics are biological sciences. Aristotle's biological study of human beings affirms the psychophysical unity of their nature, in which mind and body are separable in speech but inseparable in reality. And so for the philosophic life to be a human life, it must be a rational life of embodied intellect, a social life of friends who live together by talking and thinking together, and thus a moral life based on mutuality and reciprocity.

Furthermore, recognizing the biological character of Aristotle's philosophizing should make us wonder whether modern Darwinian biology can sustain Aristotle's biological naturalism.

Wednesday, January 06, 2016

Over the years, I have written a series of posts criticizing Friedrich Hayek's Freudian theory of human evolution, in which he argues that human beings have evolved instincts favoring socialism, and that the spontaneous order of free markets that makes modern civilization possible arises as a purely cultural tradition that requires the painful suppression of our socialist instincts.

Recently, I have read a paper by some libertarian economists that defends a modified version of Hayek's argument. They identify themselves as taking a position that is opposite to mine. Here I will offer my assessment of that paper, but I will respect the anonymity of the authors.

The authors make a provocative argument about the genetic evolution of economic psychology--that human beings are by their evolved instinctive nature inclined more strongly towards socialism (understood as "explicit cooperation") than they are towards laissez-faire capitalism (understood as "implicit cooperation").

In effect, this is a modified version of Hayek's argument that human beings are instinctively inclined to favor socialist central planning over the spontaneous order of free markets, although these socialist instincts that were adaptive for small prehistoric foraging bands are maladaptive for the large extended orders of modern civilization, which would be destroyed by any attempt to impose socialist planning.

My first question is about whether I am right to see their position as a modified version of Hayek's argument about the appeal of socialism as rooted in an atavistic instinct. That they are agreeing with Hayek is suggested by their quoting Hayek's observation in The Fatal Conceit that "man's instincts" were not made for modern civilization, because they were adapted to "life in the small roving bands or troops in which the human race" evolved (6). But while the authors agree with Hayek that human beings have the atavistic socialist instincts that are adaptive only for small foraging bands, they seem to disagree with Hayek's claim that the propensities for market exchange that make modern civilization possible are purely cultural and not at all instinctive, because there was no trade in the environments of evolutionary adaptation (see The Fatal Conceit, 34, 67, 70, 80-81, 118-19, 130, 134; The Constitution of Liberty, 40). And yet while Hayek generally assumes that trade did not exist at all until the last few thousand years of human history, he sometimes admits that there is some evidence of trade going back hundreds of thousands of years (Fatal Conceit, 11, 16-17, 29, 38-45, 60, 133). This is the point developed by the authors--that the evidence for trade going back hundreds of thousands of years suggests that market exchange is genetically instinctive and not purely cultural, as Hayek generally claims. And yet their claim is that socialist benevolence emerged millions of years earlier in the mammalian protohuman ancestors of human beings, and therefore we can infer that socialist instincts are more hard-wired than market instincts (5-6, 26, 28-29). This pushes the authors closer to my position--that the "propensity to truck, barter, and exchange" is an instinctive inclination of human nature that can be fostered by the human culture of bourgeois virtues.

Comparing the authors with Hayek raises another question. Do they agree with Hayek that human beings in the modern world must live in "two worlds"? While Hayek seems to think these two worlds are compatible with one another, the authors imply that these two worlds are contradictory.

Hayek is clear that markets and other processes of spontaneous ordering are only effective for certain kinds of social activities. He distinguishes "spontaneous orders" as "grown orders" and "organizations" as "made orders," and he makes it clear that any large society requires both kinds of ordering. "In any group of men of more than the smallest size," Hayek explains, collaboration will always rest both on spontaneous order as well as deliberate organization, "because the family, the farm, the plant, the firm, the corporation, and the various associations, and all the public institutions including government, are organizations which in turn are integrated into a more comprehensive spontaneous order" (Law, Legislation, and Liberty, vol. 1, 1973, p. 46).

Spontaneous ordering works best for social coordination where the tasks are very complex and where they involve large numbers of people who interact anonymously. But deliberate organization works best for those tasks of social coordination that are simple enough, and involve such a small number of people interacting face-to-face and sharing a common purpose, that they can be planned out by deliberate design. The family is one of those social institutions that work best as a deliberate organization rather than as a spontaneous order.

It is important, then, Hayek explains, that we neither apply the rules of the market to family life nor apply the rules of family life to the market. "If we were to apply the unmodified, uncurbed, rules of the micro-cosmos (i.e., of the small band or troop, or of, say, our families) to the macro-cosmos (our wider civilization), as our instincts and sentimental yearnings often make us wish to do, we would destroy it. Yet if we were always to apply the rules of the extended order to our more intimate groupings, we would crush them. So we must learn to live in two sorts of world at once" (Fatal Conceit, 18).

The authors here say nothing about the need for the deliberate design of families, firms, corporations, and public institutions. Do they disagree with Hayek about this? Do they think that all social coordination would be done best through markets? Or do they agree with Hayek that "organizations" require socialism or "explicit cooperation"?

This leads to a question about how the authors understand socialism. They quote from an article by Milton Friedman arguing that from the fact that "socialism is a failure," and the fact that "capitalism is a success," it is fallacious to infer that "the U.S. needs more socialism" (3, n. 2). But they are silent about what Friedman says in the rest of that article. He says that while pure socialism--complete government ownership and control of the means of production--has failed, partial socialism is necessary, because the judicial, legislative, and military systems of government are socialist activities that we need. Would the authors argue that Friedman is wrong about this, because pure anarchism without any government would be both possible and desirable?

Moreover, in distinguishing the failure of socialism and the success of capitalism, the authors are silent about the possibility of successfully mixing socialism and capitalism. After all, as measured by the "freedom indexes" of the Fraser Institute, the Cato Institute, and the Heritage Foundation, some of the freest countries in the world are the Nordic social democracies (Denmark, Finland, Sweden, and Norway).

But then the authors insist that "we live in an unfree world" (3). According to the authors, there have been almost no free societies in the world over the past two centuries, and they insist on this as evidence that the socialist instinct is overwhelmingly stronger than the capitalist instinct. If that is so, does that mean the Great Enrichment of the past two centuries--the unprecedented explosive growth in wealth and population that has spread around the world--has been produced by socialism rather than capitalism? If so, isn't that implausible? If capitalism is responsible for that great improvement in the human condition, doesn't that show the power of the human instincts for capitalism?

How would the authors explain the powerful capitalist instincts expressed by those people who work in the illegal underground economy, who are not legally registered or regulated by government, who are paid in cash, and who pay no taxes on their incomes? In 2009, the Organization for Economic Co-operation and Development estimated that over half of the workers in the world work in the illegal economy, and that by 2020, two-thirds of the workers will be in the underground economy. There are some estimates that in places like Lagos, Nigeria, over 80% of the workers are in the underground economy. Friedrich Schneider estimates that the yearly value of the underground economy around the world is over $10 trillion. If this were an independent nation, it would be second only to the GDP of the U.S.

According to the authors of this paper, the entrepreneurial energy of these people in the underground economy elicits our disgust because it violates our instincts for benevolence. "Far more deeply embedded in the human psyche is our tendency toward explicit cooperation, or benevolence, or altruism, and therefore this constitutes a far stronger impulse in our decision-making. Biologically speaking, explicit benevolence triumphs over the implicit trade variety" (26). But in a footnote to this passage, they say that "trade, too, is benevolent; it, too, is mutually supportive in that there are necessary gains from it at least in the ex ante sense" (26, n. 17). Are they agreeing with Hayek's claim that the "morals of the market" are altruistic in their effects as promoting the common welfare, even though their intentionality is not altruistic (Fatal Conceit, 81, 117-19)?

Hayek would say that those underground entrepreneurs are also intentionally benevolent, in that much of their motivation for bettering their condition is the desire to help their families and friends live a better life. So, here again, we see human beings living in two worlds--the small world of family and intimate associations and the large world of trade and impersonal interaction--both of which are rooted in our evolved human instincts.

But how exactly do we empirically study those evolved human instincts and make falsifiable predictions about them? The authors rely on prehistoric archaeological and paleontological evidence of human genetic evolution. But such evidence is highly speculative, particularly since we cannot specify the genetic mechanisms for complex social behavior.

If there are such genetic mechanisms, they should be manifested in the neural activity of the brain in a manner that might be directly observed. So, for example, we might use brain-scanning machines to study the brain activity of people who are asked to deliberate about hypothetical economic policies that test whether their thinking is more socialist or capitalist. We could do this through economic game experiments. This could provide us with some empirical testing for our theories of the evolutionary neuroscience of socialism and capitalism. Of course, there are some problems here that come from the fallacy of interpreting brain-scanning as mind-reading.

Monday, January 04, 2016

The publication in 1943 of Rose Wilder Lane's The Discovery of Freedom: Man's Struggle Against Authority was one of the first statements of modern American libertarianism. She was a famous journalist, novelist, and world traveler, who was also known as the daughter of Laura Ingalls Wilder, the author of the Little House on the Prairie series of novels. Lane helped her mother in the writing of those novels, and some readers have seen libertarian themes in those Little House novels.

The Discovery of Freedom presented all of human history as a struggle between Freedom and Authority. She argued that every individual human being is naturally free in generating and controlling his human energy to satisfy his natural desires by making human life on earth safer, healthier, longer, and more enjoyable. And yet, she also argued, individuals cannot succeed in this without the cooperation of other human beings, and thus all men are brothers in their need to combine their energies in order to live; and so any man who injures another injures himself. This creates the problem of how to control the combined energies of many individuals for satisfying the desires of all. Every individual controls his human energy in accordance with his personal view of the desirable or the good, and this belief in a standard of value is a religious faith. "This is true," Lane observed, "whether his God is the God of Abraham and Christ, or Reason or Destiny or History or Astrology or Economic Determinism or the Survival of the Fittest, or any other god by any other name" (xxiv-xxv). For most human beings throughout recorded human history, over the past 6,000 years, this religious faith has been a pagan faith in a universe controlled by an Authority, which controls the energy of all individuals. Those who have this pagan faith in Authority do not know that all men are naturally free.

Over these six millennia of recorded human history, Lane believed, there have been three attempts to teach the fact that all men are naturally free. (Lane left open the possibility that in the prehistoric world, "perhaps all men once knew that men are free" [73].) The first attempt was by Abraham and his descendants, who denied the existence of the pagan gods who were thought to control everything, and who affirmed the existence of the one God who is the Creator and Judge of everything, but who leaves individual human beings free to choose good or evil, and who unites all individuals as brothers who prosper only in combining their energies for mutual benefit. The second attempt to teach the fact of individual freedom was by Muhammad, who agreed with Abraham in denying the Authority of the pagan gods and in affirming the one God who wants all individuals to take responsibility for their lives. The third attempt was by the American revolutionaries, who asserted the natural freedom of all men, so that no government has Authority over men, and that just government exists only by the consent of individuals for securing their natural freedom.

Many of Lane's readers have been surprised by her claim that Muhammad's religion is one of the three great historical moments in the struggle of Freedom against Authority. After all, hasn't Islam often promoted authoritarian social orders that suppress individual liberty? And don't we see that today in the authoritarian violence of the Islamic State and other Islamist movements directed ultimately to establishing a world-wide Caliphate? Don't we also see that in Islamic authoritarian regimes like Saudi Arabia?

Lane saw a libertarian teaching in Muhammad's religion--and particularly in the Quran as the divinely revealed text of his religion--that explained for her the extraordinary creativity of Islamic civilization during its first eight hundred years. In recent years, some classical liberal scholars have offered evidence and argumentation to support her view of Islamic libertarianism. One of the best examples of this is Mustafa Akyol's Islam without Extremes: A Muslim Case for Liberty (2013). Akyol is a Turkish Muslim and a classical liberal who sees a tradition of liberal or libertarian thought that can be rooted in the Quran, as opposed to the traditions of Islamic authoritarianism that have little or no grounding in the Quran.

Lane stressed that Muhammad was a merchant engaged in global trade, and that the Islamic empire created free trading networks from India to Spain. She also stressed the importance in the Quran (2:256) of the teaching that there is to be "no compulsion in religion" (Lane, p. 86), because genuine religious belief must be by the free choice of individuals. She noted that many Muslim societies protected religious liberty, so that Muslims, Jews, Christians, Hindus, and Zoroastrians could live in peace. Moreover, this freedom allowed for free exchange in commerce, science, and technology. This promoted rapid urbanization. By the year 800, the Islamic Middle East had thirteen cities with populations over 50,000, while Europe had only one--Rome.

In Iraq, a school of Islamic theologians known as the Mutazilites argued that God's Creation operated by rational laws that could be studied by science or philosophy. And thus faith and reason were compatible. This allowed Muslim philosophers like Alfarabi and Averroes to transmit the ideas of the ancient Greek philosophers to the Muslim world and then to medieval Christendom. In al-Andalus, the Muslim kingdom in Spain, Muslims, Jews, and Christians could meet to translate and study the texts of Plato and Aristotle.

There are many verses in the Quran that speak about fighting in war for the true religion, and Muhammad was a military leader. These are the verses quoted by Muslim authoritarians--like al-Qaeda, the Taliban, and the Islamic State--to justify holy war against infidels and apostates. But Lane suggested that these militant verses can be read as affirming the justice of only defensive wars--particularly, Muhammad's wars of defense against the pagan armies that attempted to destroy his new monotheistic religion. As the Quran says, "permission to fight is given to those who are fought against" (22:39).

The militancy of Islamist terrorism today is not rooted in the Quran but in the tribalism of Arab cultural traditions. The Quran warns about this: "The desert Arabs are more obdurate in disbelief and hypocrisy, and more likely not to know the limits which God has sent down to His Messenger" (9:97).

A few years ago, I taught a series of three courses on "The Politics of the Old Testament," "The Politics of the New Testament," and "The Politics of the Quran." What I found most surprising in reading the Quran with my students was how much of what many of us identified as Islamic theocratic authoritarianism--particularly, in the enforcement of the Islamic law of Sharia--was not found in the Quran. Most of the repressive rules of Sharia--such as stoning adulterers, punishing drinkers, killing apostates and blasphemers, honor killings, female genital mutilation, the rule of a global caliphate, and the punishment of sinful behavior by "religious police"--were not found in the Quran, because they arose hundreds of years after Muhammad's death through traditions that were justified by the Hadiths (the reports of Muhammad's sayings and doings). There is great disagreement among Islamic scholars over the interpretation of the Hadiths and even over the authenticity of these reports about Muhammad. Moreover, in treating Muhammad's words and deeds as sacred, and thus exalting Muhammad to superhuman status, the tradition of the Hadiths denies the teaching of the Quran that Muhammad was only a human being and not divine. "I am only a human being like yourselves," Muhammad declared. "It is only revealed to me that your god is One God" (18:110). Thus, the revelation of the Quran is divine, but Muhammad is not.

Akyol has shown that those Muslims who look to the divinely revealed Quran as more authoritative than the Hadiths can follow a liberal or libertarian view of Islam like that taken by Lane. One of the first expressions of such Islamic liberalism was by the Murjiites.

Muhammad died in 632 without leaving any instructions for identifying who would take his place as leader. The Muslim community decided that the most prominent of Muhammad's companions should lead them. Abu Bakr became the first "caliph" (the "successor" of Muhammad). Abu Bakr's "caliphate" was followed by those of Umar, Uthman, and Ali. Sunni Muslims regard these four as the "Rightly Guided Caliphs," but Shiite Muslims believe that Ali (the cousin and son-in-law of Muhammad) was the only true successor, and that the other three were usurpers. In 657, Muslims fell into a sectarian civil war, with Ali leading one army and Muawiyah leading another. In addition to this factional split between Sunnis and Shiites, a third faction--the Kharijites or "the Dissenters"--condemned both Ali and Muawiyah as infidels, and a Kharijite assassinated Ali in 661. Muawiyah then established the Umayyad dynasty, which was supported by the Sunnis, who became the mainstream of the Muslim community. (The tense conflict today between the Sunni regime of Saudi Arabia and the Shiite regime of Iran is one example of how persistent this sectarian split is.)

Thus, within one generation after the death of Muhammad, Muslims were killing one another, because they had combined religious authority and political power, and so their disagreements over religious authority became violent political conflicts. One way to resolve this problem would be to separate religion and politics, so that political rule would be secular, and religious belief would be left up to individual choice, without anyone being allowed to enforce religious doctrines through compulsion. Although pluralism and secularity might seem to be distinctly modern ideas, developed only much later in history, the Murjiite school of Muslim theology came close to them in the first century of Islamic history.

The Arabic word irja means "postponing." Some Muslim scholars became identified as Murjiites or "postponers," because they argued that human beings could not judge who were the true believers, and that this judgment should be "postponed" until the afterlife, when God would punish the unbelievers in Hell and reward the believers in Heaven. This reasoning supports religious tolerance and religious liberty: true piety depends on a voluntary personal faith that cannot be enforced by social coercion; it is arrogant for any human being to presume to judge who the true believers are; and only God in the afterlife can judge the inward hearts of believers and unbelievers.

Similar reasoning was developed in the seventeenth century by Roger Williams and John Locke in arguing that the New Testament does not teach the compulsory enforcement of religious belief, and thus supports religious toleration and liberty in this earthly life, with the understanding that God will judge us in the afterlife. Jesus made it clear that his kingdom was not of this world, and so he had come into the world not to establish theocratic government, but to save those who might enter the kingdom of Heaven after death. The Murjiites reached the same conclusion about the Quran.

They could cite the verse of the Quran about "no compulsion in religion" (2:256). They could also quote another declaration of the Quran: "Had God willed, He would have made you a single community. Every one of you will return to God, and He will inform you regarding the things about which you differed" (5:48). Moreover, it is said that many questions are "held in suspense for the command of God, whether He will punish them, or turn in mercy to them: and God is All-Knowing, Wise" (9:106).

The Quran teaches that when people defy or ridicule God's message, "you are not to sit with them," which suggests that believers should peacefully shun their critics and leave their punishment to God (4:140). If God were to punish human beings for all of their wrongdoings, not a single human being would be left alive. Instead, God postpones His judgment (16:61). The Quran recommends: "Say this--oh, you that reject faith, I worship not what you worship. Nor will you worship what I worship. Unto you your religion, and unto me my religion" (109:1-6). Unbelievers are threatened with eternal punishment in Hell. But in this life, people are free to choose their own way. "It is the truth from your Lord," the Quran teaches, "so let whoever wishes have belief, and whoever wishes be an unbeliever" (18:29).

When I taught my course on the politics of the Quran, one of the students was a devout Muslim who often quoted these verses from the Quran to prove her claim that the theocratic Islamist radicals were violating the Quran's teaching. Although she never identified herself as taking the libertarian position of the Murjiites, that's what she was doing.

Akyol shows how this Islamic libertarianism of the Murjiites was first lost with the triumph of the Islamic traditionalists, and then revived in the Ottoman Empire of the nineteenth century, and then again revived in Turkey in the early 1980s. In 1856, the Ottoman sultan declared: "As all forms of religion are and shall be freely professed in my dominions, no subject of my empire shall be hindered in the exercise of the religion that he professes, nor shall he be in any way annoyed on this account. No one shall be compelled to change their religion" (Akyol, 151). An intellectual group called the Young Ottomans openly argued for liberal reforms that would secure individual liberty and representative government. In 1877, the first Ottoman Parliament was elected, and more than one-third of its seats were filled by non-Muslims.

This Ottoman liberal era was brought to an end by the First World War. The victorious European powers broke up the Ottoman Empire. After the War of Liberation (1919-1922), Mustafa Kemal Ataturk proclaimed the Republic of Turkey in 1923, and he abolished the caliphate in 1924. Those supporting Ataturk formed the Republican People's Party (RPP), which promoted an authoritarian secularism in which the public expression of religious belief was suppressed by the government. The opposing party--the Progressive Republican Party (PRP)--promoted liberal policies, including respect for religious liberty. The Kemalist government shut down the PRP, because its policy of "respect for religious beliefs and ideas" would "encourage religious reactionaries." Radical Islamism arose at this time as a revolt against Kemalist secularism aimed at establishing Islamic theocracy.

The middle path between the secularist state and the theocratic state was Islamic liberalism, which was rediscovered in Turkey in 1983 when Turgut Ozal's Motherland Party won national elections. Ozal's policies were based on "the three freedoms"--ideas, religion, and enterprise. He was supported by liberal intellectuals who were secular but not secularists. He was the first Turkish prime minister to make a pilgrimage to Mecca. He showed that Muslims could enjoy religious freedom in a liberal state.

After Ozal's death in 1993, a political Islamist movement led by Necmettin Erbakan attempted to establish an Islamist regime. Acting against this movement, the Kemalist military staged a coup in 1997, forcing the government out and then repressing the Islamist groups.

In 2002, Akyol claims, another revival of Islamic liberalism began with the electoral victory of the Justice and Development Party (AKP) led by Tayyip Erdogan, which promoted an Islamic liberal capitalism that rejected both secularist authoritarianism and Islamic authoritarianism. Some Turkish liberal intellectuals advanced the idea of Homo Islamicus--which combined the ideas of free markets, free thought, and Islamic ethics.

Here is where Akyol's story becomes implausible to me, because he is silent about the brutal authoritarianism of Erdogan, which includes the suppression of journalists who disagree with him. When judges inquired into the corrupt practices of Erdogan's family, he banned Twitter and YouTube for reposting news of the judicial inquiries.

Libertarians like Lane and Akyol can argue that what all individuals need to live a fulfilling life is the liberty to think and act as one chooses, which includes the liberty to search for God. In a libertarian society, the government does not coercively enforce any particular religious beliefs and practices, but it does secure the liberty of all individuals to choose their religious identity. So, those Muslims who wish to live by Sharia can do so, as long as those rules of Sharia are accepted voluntarily and without compulsion. For example, since 2008, Muslims in the United Kingdom have been free to go to Sharia courts for the enforcement of Muslim family law. Similarly, in the United States and some other Western countries, Orthodox Jews can live according to the Halakha, their religious code. In the United States, employees of Christian organizations can sign contracts with clauses in which they agree that any contractual disputes will be resolved by Christian arbitration agencies that follow biblical principles.

The Murjiites show how deep such libertarian thinking is in the history of Islam. And while most Muslims today have probably never heard of the Murjiites, many if not most have adopted the libertarianism of the Murjiites. In fact, the Islamic State has recognized this as a great threat to its Islamist authoritarianism. Dabiq is the monthly magazine of Islamic State propaganda. In the eighth issue (March 2015), there is a long article on "Irja, The Most Dangerous Bid'ah." Bid'ah means "innovation," and it's the word used for "heresy." Irja refers to the Murjiite heresy of "postponing" God's judgment. The article warns that even many of the jihadist factions are not coercively enforcing Sharia in the territories that they have taken, because many Muslims resist the harsh rules of Sharia as too oppressive. In condemning the Murjiites as heretics, the article quotes from various Islamic scholars, but it does not cite specific verses of the Quran, nor does it answer the Murjiite arguments based on the Quran.

Perhaps the best evidence in the Quran for the position of Islamic theocracy is the teaching that believers have a duty to "command the right and forbid the wrong" (3:104, 3:110, 9:71), which might be read as requiring the compulsory enforcement of the harshest rules of Sharia. But the Quran does not specify exactly what is meant by commanding the right and forbidding the wrong. Some Islamic scholars have argued that this means little more than separating the believers from the unbelievers, so that commanding the right means choosing to worship Allah, and forbidding the wrong means refusing to worship any other god. And when one considers this in the context of all the verses cited by the Murjiites indicating that there is to be no coercion in religious belief, the Islamist position looks weak.

This confirms the strength of the Murjiite arguments for reading the Quran as supporting liberalism. Similarly, as I have argued in some posts, there are good arguments for the liberal interpretation of the New Testament as supporting the liberal principles of secular politics, religious liberty, and the privatization of religious belief. Most Christians today--including the Catholic Church--have embraced the liberal interpretation of the Bible, just as many Muslims have embraced the liberal interpretation of the Quran.

As Akyol has indicated in a recent article in the New York Times, the article in Dabiq shows that many devout Muslims have become Murjiite liberals without realizing it, because they have embraced the Quran's teaching of "no compulsion in religion" as an Islamic basis for adopting Western libertarianism.

The battle against the Islamic State and the other Islamist radicals is not so much a military battle as an intellectual and theological battle of ideas. The defeat of theocratic Islamism in that battle of ideas will come with the triumph of Islamic libertarianism.

Consider how the libertarian understanding of religious liberty and pluralism would apply to the current controversy over the possible firing of Larycia Hawkins, a tenured political science professor at Wheaton College in Illinois, who has expressed support for Muslims by wearing a hijab and saying that they worship the same God as Christians like herself.

At Wheaton College, all employees of the school must agree to affirm a "Statement of Faith." Administrators at the school say that Professor Hawkins has refused to explain how her statements about Islam can be consistent with her affirmation of that statement of Christian doctrines. Consequently, they have begun proceedings for having her fired.

Wheaton's "Statement of Faith" affirms belief in "one sovereign God, eternally existing in three persons: the everlasting Father, His only begotten Son, Jesus Christ our Lord, and the Holy Spirit, the giver of life," and in Jesus Christ as "true God and true man, existing in one person." These Christian doctrines are clearly rejected by the Quran. Jesus is said to be a prophet of God and an apostle to Israel (3:49-51; 6:85). But Jesus is not divine and not the son of God. To affirm the trinitarian conception of God as "eternally existing in three persons" is said to be blasphemy, because it denies the oneness of God (5:19, 75-80). Therefore, to affirm Wheaton's statement of faith, Professor Hawkins must reject Islam as a false religion. If she refuses to do this, then the College can rightly fire her for violating the terms of her contract with the College.

This is an expression of religious liberty and pluralism, because the College is not exerting compulsion on Professor Hawkins. Joining the religious community of the College is a voluntary act that requires agreeing to the statement of faith. If she refuses that agreement, she can be expelled from the community, but as long as she is not punished with violent coercion, she retains her religious liberty. She is free to join a Christian community or a Muslim community that does not affirm the divinity of Jesus.

Islamic libertarianism would support the same arrangement. Muslims, Christians, Jews, and other religious believers would be free to join religious communities that could enforce the voluntary affirmation of the community's doctrines. But there would be no compulsion. And the determination as to which community was enforcing the doctrines of the true religion would be left up to God exercising His judgment in the afterlife.

* * *

Akyol summarizes his argument in a TED talk here. His New York Times article is here.