
Jacques Barzun (1907–2012) was probably best known for his writing on education at all levels until From Dawn to Decadence reminded people of the remarkable breadth and depth of his scholarship. He specialised in the cultural history of modern times, and it is likely that few people have engaged in a more thorough and invigorating manner with the leading issues in the field. The sheer bulk of Barzun's output is prodigious, bearing in mind his teaching and administrative responsibilities. He wrote more than twenty books, edited a similar number and contributed countless chapters to others, plus journal articles, introductions and forewords for books by other authors.

He fought a long battle against what he called "hokum": ideas with no basis, which gain spurious credibility by repetition (in the way that so many celebrities are celebrities for no other reason than that they are regarded as such by the media and society's pop pundits). One of these bits of hokum is the description of the 1800s as the century of laissez-faire. He pointed out that the era of laissez-faire in Britain was probably as short as a decade, from the repeal of the tariffs on imported grain (the Corn Laws) to the introduction of the Factory Acts and similar regulations.

His reputation received a remarkable boost when his massive and scholarly book From Dawn to Decadence appeared in the year 2000 and quickly became a surprise best-seller. This was not an entirely new experience for Barzun, because he had been touched by the fickle flame of popularity in 1956 when he featured on the cover of Time magazine. However, his profile waned during the 1960s when his brand of deep but politically disinterested scholarship fell out of fashion.

In the Author's Note, Barzun advised that he set out to be selective and critical rather than neutral and encyclopedic. Those who have tried to read this wrist-breaking 900-page tome in bed will be pleased that he did not set out to be more informative, and that the scope of the work was only the last 500 years. The book, subtitled 500 Years of Western Cultural Life: 1500 to the Present, can be seen as a contribution to the culture wars; the aim is to counter those who would either build a wall against the past or, alternatively, draw upon the past to support the case of the adversary culture against the whole project of western civilisation.

Asked about the origins of his remarkable last work, and the reason why it came so far behind the main body of his writing, he explained:

When I was just beginning to teach, about 1935, I thought I would write a history of European culture from 1789 to the present. I was dissuaded from it by a friend of my father's who was the director of the Bibliothèque Nationale. I was doing research there and he asked me what I was doing, and I told him, and he said, "Oh, young man, please don't do any such thing. You'll write about things that you know at first hand, and you will fill the rest out with things you get out of secondary texts. There's no need of that at any time." So I said, "How long should I study original works before I begin?" He said, "Well, why don't you wait until you are 80." I think I waited until I was 84, 85.

The Man

Barzun grew up in Paris and Grenoble, the only child in a household where his parents conducted a modernist salon. His father worked in the Ministry of Labour but his heart was elsewhere. He wrote novels and poetry and hosted the likes of Apollinaire, who taught Jacques how to tell the time on his watch, and Marie Laurencin, who painted his portrait. Other regular visitors included the painters Gleizes and Duchamp, the composer Varèse and foreigners such as Ezra Pound, Richard Aldington and Stefan Zweig. Members of the older generation such as André Gide also appeared occasionally to find out what the wild young men were up to.

During the war his father was withdrawn from active service to undertake diplomatic missions, and after a journey to America he offered Jacques the choice of completing his studies at a leading British university such as Oxford, or at a leading American college. The young Barzun had been reading James Fenimore Cooper and other books about the red Indians, so he opted (hopefully but unrealistically) for New York.

He completed high school in the USA and in 1923 he entered Columbia College, graduating four years later at the top of his class. This earned him a lecturing position at Columbia University, where he became a full professor in 1945, Dean of the Graduate Faculties in 1955, and the inaugural Dean of Faculties and Provost of the University in 1958. This level of involvement in administration by a serious teacher and scholar has few parallels, and it adds authority to his account of the travails of the universities that flowed from their mushroom-like growth. In 1967 he resigned from his administrative duties and focussed on teaching and writing until his retirement in 1975. Subsequently he continued writing, lecturing and working in various posts, including Literary Advisor to Scribner's, a directorship of the Peabody Institute for Music and Art in Baltimore, and membership of the Board of Editors of Encyclopaedia Britannica.

He started teaching as an undergraduate, tutoring students of French. Two of his middle-aged pupils showed so much benefit from his assistance that the head of the department in a major university invited Barzun to a meeting. He laughed aloud when confronted with a seventeen-year-old whom he had contemplated offering an instructorship. As a postgraduate student Barzun formed a small commercial venture with some colleagues, "a perfectly legal and honest tutoring mill, whose grist managed to renew itself as we managed to put the backward rich through the entrance exams of famous colleges not our own". During that period he also wrote short crime fiction and book reviews under a pseudonym. One of the books he reviewed furnished him with a formative experience: Whitehead's Science and the Modern World convinced him that there was no essential tension between the two cultures, and ever after he envisaged science and the arts as potentially harmonious joint tenants in the house of intellect.

Some of his early teaching experiences were tinged with melodrama.

A big bruiser of a student whom I had failed came to my office threatening bodily harm, then hounded me by phone, wire and letter, pleading that I should pass him in the name of Christian brotherhood, for he had powerful friends in Brooklyn. Nothing happened, but two years later the tide turned in my favour. Another student, an impressive-looking middle-aged man in an Extension course, made a point of showing his gratitude, first by inviting me to his Turkish restaurant and then intimating that if I had any enemies he would be only too glad to get rid of them for me, gratis.

In the 1930s he became closely associated with Lionel Trilling when they shared a famous course on Great Books. They became close friends, and during this time Barzun was near the centre of the progressive intellectual culture of New York, though unlike Trilling and most others he was never a committed man of the left. A reporter from the Austin Chronicle took this up in an interview with Barzun after the launch of From Dawn to Decadence.

JB: I had no Marxist colouring, such as they had. I stood aloof, although not hostile, and I take it they weren't hostile to me. They deplored my blindness.

AC: You started writing about Romanticism when that was not very popular. It's funny, you were aloof from Marxism, but also from the reaction to it, which was influenced so much by T.S. Eliot.

JB: Yes, I was always against the current. Eliot of course got it from Babbitt, who got it from the French eminences of anti-Romanticism. What I read about Romanticism didn't agree with what was said about it. Everything in the books was contrary to fact and to legitimate conclusions from fact, including all sorts of fabrications, simply lies that had gotten into the critical stream and were reproduced over and over again without being checked.

AC: You seem temperamentally more comfortable being at the limit of the Zeitgeist than being in the center of things.

JB: Well, I would call that the historian's detachment.

The Books

His first serious research produced a dissertation on class and race in pre-revolutionary France, and in 1937 this work was published as Race: A Study in Modern Superstition. This was not written as a tract for the times but as part of a deep scrutiny of cultural history, although by that time the issue of race had become something more than an interesting topic for a doctoral dissertation. Barzun noted that "The daily newspaper told us what uses could be made in our own century of the protean idea of Race." No longer was race simply one among many issues. The appeal to race, class or nation was in truth an epidemic attempt to supply a new motive power for social evolution. It expressed a desperate desire to breathe life into the two European idols of Progress and Determinism.

At that point Barzun wrote Of Human Freedom in an attempt to offer a civilised alternative to old idols and new dogmas. The next major instalment in his project was Darwin, Marx, Wagner: Critique of a Heritage (1941). The nomination of Wagner rather than Freud in the trinity of emblematic modern minds is a sign of Barzun's profound interest in music and the arts. He argued that these men achieved their reputations by catching the spirit of the age, like surfers on a wave, backed by the formidable public relations exercises mounted by their followers. This earned them the status of intellectual icons despite their lack of originality and the significant flaws in their systems. He described in some detail how all the leading ideas of evolutionary theory, socialism and the leading role of the artist were commonplace for decades before the big three started work.

Barzun was especially critical of the way that their adherents promoted determinism and scientism, with truly disastrous political consequences in the twentieth century. This runs parallel to Popper's concern with the myths of historical destiny and Hayek's critique of a certain kind of rationalism whereby utopian social reformers have felt obliged to recreate society in the shape of their dreams. In addition to the shortcomings of their systems, two of the three titans were monstrously egocentric and unprincipled exploiters of their friends and denigrators of their enemies. These personal characteristics became prominent in the modus operandi of their followers, setting the tone for bad manners in transactions between intellectuals that have persisted to the present time.

Barzun's critique of the cult of evolutionary theory and the canonisation of Darwin himself is impressive, but it is difficult to identify where Barzun stands on the scientific status of evolutionary theory, and this is the least convincing part of his work. He appears to be dissatisfied with materialism and determinism without explaining whether he adhered to vitalism, or to some form of mysticism or religion. This underlines the problem of pursuing such a wide-ranging research project without the assistance of co-workers, and his reach may have exceeded his grasp at some points. This is especially apparent when he attempted to locate his work in the context of twentieth-century physics and biology, where he was operating too far from his base in history and cultural studies.

In the course of writing about Darwin, Marx and Wagner he discovered how the movement of ideas around 1800 labelled Romanticism had been distorted and misrepresented by subsequent commentators. That became the topic of his next book, Romanticism and the Modern Ego (1943). He suggested that the Romantic movement had brought back into favour some important ideas connected with social purposes and human attributes that the materialism of the eighteenth century and the violence of the French revolution had obscured. However, these valuable elements were swept aside in a wave of unscholarly denigration.

The early, or Romantic, part of that [nineteenth] century was held in particular detestation and contempt: it was naïve, silly, wrongheaded, stupidly passionate, criminally hopeful, and intolerably rhetorical. The word romantic in fact stood for these defects wherever they might be found ... As a student of history, and particularly of cultural history, I thought I saw clear evidence that the twentieth-century notion of Romanticism was an illusion. As for a change of direction in our culture, I would have welcomed it whatever its form: classical, primitive or archaic. It struck me, however, that a true change would require a break, not with what had happened a century earlier or with its lifeless imitations, but with what had happened only thirty years before with Impressionism and Symbolism, which had done their work and could be deemed new and fresh only by virtue of a cultural lag.

Education, Intellect and the Academies

In his book on romanticism he laid the foundation for subsequent writing on art and aesthetics in the twentieth century, of which more later. He moved on to a series of works on the education front, starting with Teacher in America, first published in 1944 (the Preface is online). This book is a tour de force on the major deficiencies and impediments in the education system from school to college, ranging from the notion that learning has to be fun, through various misguided fads promoted in teacher training schools, to the soul-destroying drudgery of the PhD ordeal. In The House of Intellect (1959) he explored the influences that distract people from clear, direct and critical thinking. He pointed out that intellectuals themselves have been the major agents in the erosion of the life of the mind, along with the influence of distorted views of Science and the unhelpful contribution of Business inspired by misplaced Philanthropy.

The intellectual class has been captivated by Art, overawed by Science, and seduced by Philanthropy. The damage done by each has been that of heedless expansion combined with a reliance on the passage of time to restore order and decency.

He described some problems that result from the well-meaning efforts of foundations and corporations to ameliorate the human condition by funding university-based research and the international exchange of ideas. One is the impact on departmental budgets when foundations give short-term grants (with inadequate allowance for overheads) and the beneficiaries expect to be kept on in perpetuity. The other is the diversion of effort from serious long-term projects into preparing grant applications to attract funding for exciting and relevant research, and into preparing papers (similarly exciting and relevant, and identifying the need for further research) for international conferences. Barzun anticipated some sceptical comments by C. Wright Mills, who described conferences as junkets that permit professors to pursue their feuds and vendettas in exotic locations while younger players scramble for positions in the academic marketplace.

In Science: The Glorious Entertainment (1963), Barzun catalogued and criticised many conflicting and incoherent perceptions of science that are abroad in the land, some of them exerting a malicious influence on the humanities and many of them either trivialising or sensationalising the activities of scientists.

If science students leave college thinking, as they usually do, that science offers a full, accurate, and literal description of man and Nature; if they think theories spring from facts and that scientific authority at any time is infallible; and if they think that science steadily and automatically makes for a better world, then they have wasted their time in the science lecture room and they are a plain menace to the society they live in.

The downside of that situation is the durability of creation science in the US, where its practitioners can play on the general lack of understanding of the provisional nature of scientific findings at the frontier of knowledge, and of the critical and imaginative approach required for good scientific research.

In 1968 he published The American University, his revealing and extremely well-informed account of the alarming tendencies in American higher education due to explosive growth in the universities at a time of great confusion about their aims and about the traditions and disciplines which nurture learning and scholarship. As if to underline his concerns, the book appeared in the very year that students around the world started setting fire to their campuses, including his own. The conflagration started in time for him to note in the Preface that it did not prompt him to change a word he had written. The Australian universities, in their rapid expansion and loss of focus, followed much the same path a decade or two behind the US lead, without anyone visibly learning anything from the US experience that was clearly spelled out in 1968. It is interesting to note that the name of Barzun appears to be missing from the debates that have raged over higher education in this country, which suggests that his work in this area was done in vain so far as our academics and intellectuals are concerned.

The Arts

Another arm of his cultural project was to sort out the positive and negative elements in modern art, an area where he had a head start thanks to his early exposure to some of the practitioners before 1914. His major positive statement on art appears in a collection of papers titled The Energies of Art, which is a defence of certain types of revolutionary practice with genuine artistic merit that were not fully appreciated, owing to the distraction created by others who set out deliberately to affront the sensibilities of the general public. He claimed that the generation of artists who were in their prime during the period 1900 to 1914 was laying the foundations for major advances in art, transcending the schools of classicism, romanticism, naturalism and symbolism that had held the stage during the previous two centuries.

There was this new surge of creation, inventiveness, new techniques, which gave promise that the 20th century would be one of the great productive periods of Western culture. It all collapsed into the tensions of the First World War. There were hundreds of thousands of gifted people killed. They were part of a break; they made a chasm. The generation that came to literary and other activities in the Twenties were very young men who did not have their elders' guidance and lacked a sense of resistance to their elders, both of which are necessary to true literary creation.

So instead of consolidating the start that was made pre-1914, the arts have suffered from a number of debilitating ideas, which Barzun subjected to criticism in his 1972 Mellon Lectures, published in a book titled The Use and Abuse of Art. He examined the rise of art as a substitute for religion in the nineteenth century, whereby art simultaneously became the ultimate critic of life and the moral censor of society. The next phase in that development was Estheticism and Abolitionism during the period 1890 to 1914, with the tradition of the New resulting in the unremitting destruction of past art as a point of reference for any moral or aesthetic standards. He wrote, "By making extreme moral and esthetic demands in the harsh way of shock and insult, art unsettles the self and destroys confidence and spontaneity in individual conduct."

Art in this function has helped to undermine the assumptions that the state and civilized society are valuable or admirable, thus impairing the effectiveness of political and social institutions and proving the destroyer's own case. By linking the growing interest in and respect for art in modern times with the dominance of bourgeois values, Art has effectively turned on art itself, becoming a vehicle for every kind of assault on traditional standards of beauty, craft, morality and commonsense.

This was written thirty years ago, and all that has changed is the increased number of students who are exposed to more advanced theory to justify the assault of Art on our senses and sensibilities. In the fourth lecture he moved on to another piece in the crazy pavement of modern art: the function of art as redeemer, linked with the concept of art as a substitute for religion. Barzun accepted the common ground, that the power exerted by great art on receptive persons is a religious power, and he pursued the consequences that can follow when that kind of influence is not checked by critical thinking and a sense of history. He discussed the individual and collective forms of salvation through Art that have been promulgated for 200 years. By the term collective salvation he meant the appeal of revolutionary art, which offers the artist a special role, first as evangelist and later as beneficiary, in the utopian society brought about by the revolution.

Finally he turned his attention to the troubled relationship between Science and Art, describing how artists have entered into competition with scientists to claim some of the respect and the material benefits that have generally been granted to modern Science. One of the fruits of this endeavour has been the proliferation of "art bollocks" (not his term), the use of pretentious jargon to emulate the (supposed) precision and profundity of scientific discourse. He thoughtfully provided a sample, with a translation.

For Rousseau a painting was a primary surface on which he relied physically as a means for the projection of his thought [Translation: Rousseau wanted to paint on canvas]. Rousseau does not copy the exterior aspect of a tree: he creates an internal rhythmic whole conveying the true, grave expressionism of the essentials of a tree and its leaves in relation to a forest ... But his style was established neither derivatively nor in obedience to fashion. It stemmed from the determination of his whole mind as it incarnated his artistic ambitions. [Rousseau painted just as he liked, and he liked painting trees].

As Barzun approaches his centenary he can look back on a body of scholarly work that few people can equal. However, he is entitled to be disenchanted with the apparent failure of this body of work to exert the humanising and invigorating influence on cultural studies that one would have expected. It may be that he has suffered rather than gained by the expansion of the universities. William W. Bartley III, in his posthumous collection of writings on scholarship and the universities, Unfathomed Knowledge, Unmeasured Wealth (1990), propagated the counterintuitive idea that the expansion of the universities, and especially the dissemination of examinable knowledge, represents a threat to the growth of knowledge and even to literacy itself. Such a view would have been regarded as ludicrous when three per cent of people went to universities, but nowadays, with 30 per cent on campus (and some talk of 60 per cent), it looks more plausible.

Clearly Barzun challenged too many academic empires. Also, like some other original and independent scholars such as Edmund Wilson and R. G. Collingwood, he did not establish a significant school or following. This is apparent in the collection of papers in his honour, From Parnassus (1976), which is disappointing in the very ordinary quality of the contributions. Moreover, the biographical piece by Lionel Trilling is practically useless because Trilling fell ill and died, leaving little more than rough notes. This is most unfortunate because Trilling, as a longtime colleague and friend off campus, might have shed some light on little-known aspects of Barzun, such as the unbuttoned man in his domestic setting, and offered some insights into the demons and aspirations that drove him to read and write so much.

Light-hearted addendum, from the interview with the Austin Chronicle.

JB: Allen Ginsberg was a student of Lionel's, and of mine, not in our joint course, but separately. But we joined together to save him from the penalties of the law, because he was involved in a very bad affair with an older man who seduced him sexually and used him to help dispose of the corpse of a man that this fellow had killed. Poor Allen, aged 17 or 18, helped to dump this body into the Hudson River. Well, was he in trouble there! With the help of the dean of the college, who also knew Allen, the dean, Lionel, and I waited on the district attorney, who fortunately was a Columbia graduate, and we said, "This youth is really innocent, although he committed an awful blunder, and he's also very gifted in the English Department" (we didn't say he was a poet, or that might have queered his chances!) and that it would be a catastrophe to turn him over to a criminal court and put him in jail. We had to go again to a judge in Brooklyn, I think, because Allen came from Brooklyn or something. Anyway, the district attorney wasn't enough, so we went to a second hearing, which was much more sticky. But Allen was let off.

AC: You knew he was a poet even back then.

JB: Oh yes. He showed me his writing. He'd send me things.

AC: Did he send you "Howl"?

JB: No, I don't think he did. He sent me a letter from India, where I think he got a fellowship to spend a year or so. He sent me a letter that read, "I've just met a wonderful guru who can read minds. I want you to ..." (Allen had a way of saying "I want you to do this, I want you to do that") "... I want you to get him a position in the Philosophy Department." I wrote back, "Dear Allen, the members of the Philosophy Department want nothing so little as to have their minds read."

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good’s “intelligence explosion” model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040.[6]

Many notable personalities, including Stephen Hawking and Elon Musk, consider the uncontrolled rise of artificial intelligence as a matter of alarm and concern for humanity’s future.[7][8] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated by various intellectual circles.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[9]
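
Good’s scenario can be made concrete with a toy calculation. The sketch below is purely illustrative and its parameters are assumptions (a fixed capability gain per generation, and design time inversely proportional to capability); it is not a model from Good or any other source, but it shows how a “runaway reaction” compresses successive generations into a finite span of time:

    # Toy sketch of an "intelligence explosion" (illustrative assumptions
    # only): each machine designs a successor that is `gain` times as
    # capable, and smarter machines design their successors proportionally
    # faster.
    def intelligence_explosion(gain=2.0, first_design_years=4.0, generations=12):
        capability = 1.0   # capability of the first machine, arbitrary units
        elapsed = 0.0      # years since the first machine was built
        for gen in range(1, generations + 1):
            elapsed += first_design_years / capability  # faster design each cycle
            capability *= gain                          # more capable successor
            yield gen, capability, elapsed

    for gen, cap, t in intelligence_explosion():
        print(f"generation {gen:2d}: capability x{cap:6.0f} at year {t:.3f}")

Under these assumed numbers, elapsed time converges to a finite limit (4 / (1 - 1/2) = 8 years) while capability grows without bound, which is the qualitative shape of Good’s argument.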

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][10]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[11][12][13] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[12][14]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[15][16][17]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[18] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[19]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[20] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[21]
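
The doubling times cited above can be restated as annual growth rates, since a quantity that doubles every d months grows by a factor of 2^(12/d) per year. A small illustrative calculation (the category labels are paraphrases of the figures quoted above):

    # Convert the doubling times quoted above into implied compound
    # annual growth: doubling every d months = a factor of 2**(12/d) per year.
    doubling_times_months = {
        "application-specific computation per capita": 14,
        "general-purpose computation per capita": 18,
        "telecommunication capacity per capita": 34,
        "storage capacity per capita": 40,
    }

    for name, months in doubling_times_months.items():
        factor = 2 ** (12 / months)
        print(f"{name}: x{factor:.2f} per year "
              f"(about {100 * (factor - 1):.0f}% annual growth)")

On these figures, even the slowest of the four series (storage, doubling every 40 months) implies growth of roughly 23% per year.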

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[22] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[23]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[24] Kurzweil believes that the singularity will occur by approximately 2045.[25] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][26]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[27]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[15]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[28]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[29] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[30]

Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed, argues that cultures self-limit when they exceed the sustainable carrying capacity of their environment, and the consumption of strategic resources (frequently timber, soils or water) creates a deleterious positive feedback loop that leads eventually to social collapse and technological retrogression.

Theodore Modis[31][32] and Jonathan Huebner[33] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-core processors.[34] Although Kurzweil drew on Modis’s data, and Modis’s own work concerns accelerating change, Modis has distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[32]

Some writers propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[35]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[36]

In a 2007 paper, Jürgen Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in the memory of recent and distant events could create an illusion of accelerating change where none exists.[37]

Paul Allen argues the opposite of accelerating returns, the complexity brake:[17] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns but, in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[38] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[33] On this view, the growth of complexity eventually becomes self-limiting and leads to a widespread “general systems collapse”.

Jaron Lanier rejects the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[39] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[39]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[40]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[41] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[42]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[43][44] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[45][46] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[43]

While the technological singularity is usually seen as a sudden event, some scholars argue that the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”.

The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[21] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[47]
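
The arithmetic in the quoted passage is easy to verify. The sketch below simply recomputes the article’s own figures (population, genome size, 2014 digital storage, and the quoted 30–38% growth range), assuming the article’s encoding of four bases per byte (two bits each):

    import math

    # Figures quoted in the passage above
    digital_2014 = 5e21          # bytes stored digitally in 2014 (~5 zettabytes)
    humans = 7.2e9               # world population used in the article
    genome_nucleotides = 6.2e9   # nucleotides per individual human genome
    total_dna_bp = 5.3e37        # estimated base pairs of DNA on Earth

    bytes_per_genome = genome_nucleotides / 4      # one byte encodes four bases
    all_human_genomes = humans * bytes_per_genome  # ~1.1e19 bytes
    print(f"all human genomes: {all_human_genomes:.1e} bytes")
    print(f"digital excess: {digital_2014 / all_human_genomes:.0f}x")  # ~450-500x

    total_dna_bytes = total_dna_bp / 4             # ~1.325e37 bytes
    for rate in (0.30, 0.38):                      # quoted growth range
        years = math.log(total_dna_bytes / digital_2014) / math.log(1 + rate)
        print(f"at {rate:.0%} annual growth: ~{years:.0f} years to rival all DNA")

At the upper end of the quoted growth range (38% per year) this gives roughly 110 years, matching the article’s figure; at 30% it stretches to about 135 years.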

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[48]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[48]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[49]

In his 2005 book The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[50] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil points to somatic gene therapy: after the creation of synthetic viruses carrying specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[51]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[52] Singularitarianism has also been likened to a religion by John Horgan.[53]

In his 1958 obituary for John von Neumann, Ulam recalled the conversation, quoted above, about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[4][54]
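
The “infinity point” arithmetic is the sum of a geometric series: if the first doubling of speed takes four years and each subsequent doubling takes half as long as the one before, the total time for infinitely many doublings is finite. In LaTeX notation (a restatement of the arithmetic just described, not Solomonoff’s own formalism):

    \[
      T = 4 + 2 + 1 + \tfrac{1}{2} + \cdots
        = \sum_{k=0}^{\infty} 4 \left(\tfrac{1}{2}\right)^{k}
        = \frac{4}{1 - \tfrac{1}{2}} = 8 \text{ years},
    \]

so speed, and with it capability, diverges after only eight years of calendar time.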

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) which attains consciousness and starts to increase its own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines, writing:[55][56]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[57] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[26]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[58]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[12][59] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[12]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[60] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[61][62][63]

Former President of the United States Barack Obama spoke about the singularity in his interview with Wired in 2016:[64]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity, they are worrying about “Well, is my job going to be replaced by a machine?”