Five Books About Our Future

Jordan Peacock has suggested interviewing me for Five Books, a website where people talk about five books they’ve read.

It’s probably going against the point of this site to read books especially for the purpose of getting interviewed about them. But I like the idea of talking about books that paint different visions of our future, and the issues we face. And I may need to read some more to carry out this plan.

So: what are your favorite books on this subject?

I’d like to pick books with different visions, preferably focused on the relatively near-term future, and preferably somewhat plausible—though I don’t expect every book to seem convincing to all reasonable people.

Here are some options that leap to mind.

Whole Earth Discipline

I’ve been meaning to write about this one for a long time! Brand argues that changes in this century will be dominated by global warming, urbanization and biotechnology. He advocates new thinking on topics that traditional environmentalists have rather set negative opinions about, like nuclear power, genetic engineering, and the advantages of urban life. This is on my list for sure.

Limits to Growth

Sad to say, I’ve never read the original 1972 book The Limits to Growth—or the 1974 edition which among other things presented a simple computer model of world population, industrialization, pollution, food production and resource depletion. Both the book and the model (called World3) have been much criticized over the years. But recently some have argued its projections—which were intended to illustrate ideas, not predict the future—are not doing so badly:
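For flavor, here is a minimal toy sketch of the kind of coupled feedbacks a World3-style model tracks. Everything below (variable names, coefficients, equations) is invented for illustration; it is not the actual World3 model, just a cartoon of resource-limited growth with overshoot:

```python
# Toy illustration of World3-style coupled feedbacks (NOT the real model).
# All coefficients and equations here are invented for illustration:
# population grows when food is plentiful, industry depletes a finite
# resource stock, and pollution rises with industry and suppresses food.

def step(pop, industry, resources, pollution, dt=1.0):
    food = resources * 0.5 / (1.0 + pollution)        # pollution hurts food output
    births = 0.02 * pop * min(1.0, food / pop)        # food-limited growth
    deaths = 0.01 * pop * (1.0 + pollution)           # pollution raises mortality
    pop += dt * (births - deaths)
    industry += dt * 0.03 * industry * min(1.0, resources)  # resource-limited
    resources -= dt * 0.01 * industry                 # nonrenewable depletion
    pollution += dt * (0.005 * industry - 0.02 * pollution) # emission vs. decay
    return pop, max(industry, 0.0), max(resources, 0.0), max(pollution, 0.0)

state = (1.0, 1.0, 10.0, 0.0)   # arbitrary starting values
history = [state]
for _ in range(200):
    state = step(*state)
    history.append(state)

# Typical behaviour: growth, then overshoot and decline as resources run out.
```

Even this cartoon reproduces the qualitative overshoot-and-decline behaviour the real model is known for, which is the point the book's critics and defenders argue about.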

The Singularity is Near

I’ve only read bits of this. According to Wikipedia, the main premises of the book are:

• A technological-evolutionary point known as “the singularity” exists as an achievable goal for humanity. (What exactly does Kurzweil mean by “the singularity”? I think I know what other people, like Vernor Vinge and Eliezer Yudkowsky, mean by it. But what does he mean?)

• Through a law of accelerating returns, technology is progressing toward the singularity at an exponential rate. (What in the world does it mean to progress toward a singularity at an exponential rate? I know that Kurzweil provides evidence that lots of things are growing exponentially… but if they keep doing that, that’s not what I’d call a singularity.)

• The functionality of the human brain is quantifiable in terms of technology that we can build in the near future.

• Medical advances make it possible for a significant number of Kurzweil’s generation (Baby Boomers) to live long enough for the exponential growth of technology to intersect with and surpass the processing power of the human brain.
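On the parenthetical question in the second bullet: a quantity growing exponentially never reaches a mathematical singularity at any finite time; for that you need faster-than-exponential (e.g. hyperbolic) growth. A minimal numerical sketch of the difference (my own illustration, not anything from Kurzweil’s book):

```python
# Exponential growth dx/dt = k*x stays finite at every time t, while
# hyperbolic growth dx/dt = k*x**2 blows up at a finite time:
# analytically x(t) = x0 / (1 - k*x0*t), diverging at t = 1/(k*x0).

def simulate(rate_fn, x0=1.0, dt=1e-4, t_max=2.0, cap=1e12):
    """Euler-integrate dx/dt = rate_fn(x); return (t, x) when x first
    exceeds cap, or (t_max, x) if it never does."""
    x, t = x0, 0.0
    while t < t_max:
        x += dt * rate_fn(x)
        t += dt
        if x > cap:
            return t, x
    return t, x

t_exp, x_exp = simulate(lambda x: x)        # exponential, k = 1
t_hyp, x_hyp = simulate(lambda x: x * x)    # hyperbolic,  k = 1

# Exponential: x(2) = e^2, roughly 7.4, nowhere near the cap.
# Hyperbolic: crosses the cap near t = 1, its analytic blow-up time.
```

So “exponential progress toward a singularity” is, strictly speaking, a contradiction in terms; the curve only produces a singularity if growth is superexponential.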

If you think you know a better book that advocates a roughly similar thesis, let me know.

A Prosperous Way Down

Howard T. Odum is the father of ‘systems ecology’, and developed an interesting graphical language for describing energy flows in ecosystems. According to George Mobus:

In this book he and Elisabeth take on the situation regarding social ecology under the conditions of diminishing energy flows. Taking principles from systems ecology involving systems suffering from the decline of energy (e.g. deciduous forests in fall), showing how such systems have adapted or respond to those conditions, they have applied these to the human social system. The Odums argued that if we humans were wise enough to apply these principles through policy decisions to ourselves, we might find similar ways to adapt with much less suffering than is potentially implied by sudden and drastic social collapse.

This seems to be a more scholarly approach to some of the same issues:

I would really like even more choices—especially books by thoughtful people who do think we can solve the problems confronting us… but do not think all problems will automatically be solved by human ingenuity and leave it to the rest of us to work out the, umm, details.


36 Responses to Five Books About Our Future

I read Kurzweil’s book back when it came out. When he says “the singularity” he means that the timescale for exponential growth of technology will get to the point where it is faster than humans can keep up, by about 2045. He thinks that when computer hardware is much faster and has more capacity than the human brain, the machines will take over technological development, and the future will get very weird very quickly. (You’re right, it is silly to call it a singularity when it isn’t really going to infinity.)

I feel like his estimate of 2045 is wildly optimistic, considering the current pace of progress in replicating functionality of the brain, and just throwing more hardware at it isn’t going to solve all the problems. I really loved looking at all the exponential growth charts he presented in the book, though.

Whole Earth Discipline is a great book, especially the part about urbanization and the role that cell phones play for people living in “slums”; Brand has a fresh perspective.

My interest in speculation about the future is more limited, though, than the visionary energy of most authors would demand. Here is the only book that I at least started to read that I would put on the list:

It starts with stories that have already happened, like that of the Coca-Cola managers who began generating ideas about the water supply and discussing them with local folks. Starting from a situation where the company exploited local water sources to the disadvantage of the locals, they achieved solutions in which the company cooperated with the locals to create a lasting, steady water supply for both the people and the Coca-Cola factories.

It’s the kind of “revolution” that we actually can make happen, because it involves normal people in normal situations: not visionary heroes of great power, but people somewhere in the middle management of large corporations, for example, who use the existing social and capitalistic systems they find themselves in to create new solutions for sustainable production.

It overestimates future capabilities even more than I remembered. He predicts by 2029 a computer will pass the Turing test, and his predictions for the 2030s include that mind uploading becomes possible, and that nanomachines could be directly inserted into the brain and could interact with brain cells to totally control incoming and outgoing signals.

Kurzweil’s The Age of Spiritual Machines (1999) predicted that the decade up to 2009 would be one of continuous economic prosperity (p. 194). Also that nanobots would have been demonstrated which include “their own computational controls” (p. 191) and that “grammar checkers are now actually useful” (p. 196).

Out of curiosity, are grammar checkers actually found to be useful by anyone? I’ve been meaning to look at sites like afterthedeadline, but they’re all focused on online/plugin usage, whereas I’m firmly in Linux land, so there’s nothing directly usable. I’d be interested to know whether the problem has yielded to brute force + data or not.

More generally, evaluating the law of accelerating returns is very difficult, since he’s postulating exponential behaviour, which is by definition very difficult to pin down over short-ish periods at the left-hand side of the curve. It’s really difficult for me, at least, to figure out whether things count as significant progress or not: at times it strikes me as utterly preposterous that simple internet algorithms like PageRank (or its replacements) work so well given how little they actually understand the problem, or that speech recognition is now something you can use as a “promotional freebie” for mobile phones. If that kind of raw pattern matching actually works reasonably well for these problems, then maybe accelerating returns might be meaningful.

At the moment, if I had to judge I’d say accelerating returns isn’t a meaningful “law”, but really I’m still unable to render a judgement I believe in.

[T]he data is nonsense, comparing all kinds of events that don’t really compare at all — speciation is equivalent to Jobs and Wozniak building a computer in a garage? Really? — and arbitrarily lumps together some events and omits others to create points that fit on his curve. Why does the Industrial Revolution get a single point, condensing all the technological events (steam engines, jacquard looms, iron and steel processing, architecture, coal mining machinery, canal building, railroads) into one lump, while the Information Revolution gets a finer-grained dissection into its component bits? Because that makes them fit into his pattern. […] It’s familiarity and recency. If a man in 1900 of Kurzweil’s bent had sat down and made a plot of technological innovation, he’d have said the same thing: why, look at all the amazing things I can think of that have occurred in my lifetime, the telegraph and telephone, machine guns and ironclad battleships, automobiles and typewriters, organic chemistry and evolution. Compared to those stodgy old fellows in the 18th century, we’re just whizzing along! And then he would have drawn a little chart, and the line would have gone plummeting downward at an awesome rate as it approached his time, and he would have said, “By Jove! The King of England will rule the whole planet by 1930, and we’ll be mining coal on Mars to power our flying velocipedes!”

Years ago, I actually made a quantitative model for this, with a Poisson process generating “significant historical events” whose traces then decayed over time. Not surprisingly, the historical records which resulted showed very Kurzweillian “accelerating returns”. That was in 2005 or 2006; I wonder what happened to my notes on it.
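That kind of model is easy to sketch. Here is my own minimal reconstruction of the idea (not the commenter’s original notes): events occur at a constant Poisson rate throughout history, but each event’s trace survives to the present with probability decaying in its age, so the surviving record clusters near the present and looks “accelerating”:

```python
import random

# Reconstruction of the idea sketched above (not the original code):
# "significant events" occur at a CONSTANT rate, but each event's trace
# survives to the present with probability that decays with its age.
# The surviving record then mimics Kurzweil-style "accelerating returns".

random.seed(0)

RATE = 10.0       # events per century, constant through all of history
HORIZON = 100.0   # centuries of history simulated
HALF_LIFE = 10.0  # centuries for a trace's survival odds to halve

def surviving_record():
    events = []
    t = 0.0
    while True:
        t += random.expovariate(RATE)          # constant-rate Poisson process
        if t > HORIZON:
            break
        age = HORIZON - t
        if random.random() < 0.5 ** (age / HALF_LIFE):
            events.append(t)                   # trace survived to the present
    return events

record = surviving_record()
recent = sum(1 for t in record if t > HORIZON - 10)   # the last century
older = sum(1 for t in record if t <= HORIZON - 10)   # the other 90

# Although the true event rate is constant, the surviving events cluster
# heavily near the present: the last 10% of history holds far more than
# 10% of the record.
```

With these (arbitrary) parameters, the final tenth of history typically contains around half of the surviving events, despite the underlying rate never changing.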

I wish I’d read more of the other books on the list, so I’d have illuminating things to say. I did read a bit of the 2004 Limits to Growth, but it didn’t stick with me.

Kurzweil’s The Age of Spiritual Machines (1999) predicted that the decade up to 2009 would be one of continuous economic prosperity.

Well, so he was off by just two years: things were fine up to 2007.

But seriously, I guess this means we need to keep reading his older books, to see how he keeps correcting his earlier projections.

And I think you guys have convinced me not to bother with Kurzweil, unless you think it would be useful to anyone for me to write a blistering critique. Do people on Five Books ever rip a book to shreds? I don’t know. I doubt it.

And I think you guys have convinced me not to bother with Kurzweil, unless you think it would be useful to anyone for me to write a blistering critique. Do people on Five Books ever rip a book to shreds? I don’t know. I doubt it.

As Anton Ego says in Ratatouille, blistering negative criticism is fun to read and to write. I’ve certainly read some remarkably entertaining negative reviews. Some, like Geoff Pullum’s reaction to The Da Vinci Code, could become spoken-word performance pieces. But championing a good book — or two or five — might be even better!

On the subject of looking at old predictions:

Douglas Adams (of Hitchhiker’s Guide to the Galaxy fame) and John Lloyd wrote a book, The Meaning of Liff, in which they repurposed place names to provide words for things which should have them but didn’t. One of the more useful is zeerust, originally a town in South Africa, but now adopted to mean

the particular kind of datedness which afflicts things that were originally designed to look futuristic.

I’m not sure if this one fits into the genre you have in mind here but how about MacKay’s Sustainable energy – without the hot air? It doesn’t necessarily give any predictions about the future, but it certainly lays out a clear set of possibilities regarding how we may choose to meet our energy desires.

This book is not nearly as technical as many mentioned above, but I found Bill McKibben’s Deep Economy: The Wealth of Communities and the Durable Future to be an interesting qualitative take on a complex problem.

On a careful reading of the article, I see John asked a couple of questions about the Kurzweilian singularity. With regard to the term, I gather it comes from a conversation between von Neumann and Ulam just after WWII, but despite their being mathematicians its intended meaning is more an analogy to an “event horizon”. (Ha, I only just realised this is probably because it predates physicists’ understanding of black holes from the 60s.) Its meaning, as taken up by Kurzweil, is that it’s supposed to be a point at which so much progress is occurring so rapidly that anybody predating the singularity can’t predict what happens afterwards (in analogy to being unable to see inside an event horizon). Reading between the lines, Kurzweil seems to be using the logic “since I can predict some pretty wild things, and no-one can predict beyond the singularity, whatever happens must be even more mind-blowingly good”. It’s not clear to me this inference is valid.

With regard to progressing towards the singularity, I think the point he’s making is just that there’s a difference between a gross behaviour curve (e.g., computing power in FLOPS/dollar) and the actually interesting details of what that means. As a bad analogy, suppose you knew that your capital was going to rise according to some exponential. In the short term you can probably imagine what you’d do with the money (buy a laptop, buy a house, buy a farm, buy a wildlife reserve, …), but even if you knew the dates on which you’d get one googol dollars or one googolplex dollars, could you imagine what you’d do with that amount of money? Could you even know what non-trivial things (e.g., excluding “buy one googol of one-dollar golf balls”) you could do with a googol dollars, without needing to spend a billion dollars on research for the prediction? As I understand it, that’s the sense in which he thinks the singularity is an “event horizon”, except with the date for the googol dollars almost the same as the date for the million dollars, so that you can only see “past” the singularity when you’re nearly there.

I’m pretty familiar with the history of the ‘technological singularity’, but it seemed like a strange term for Kurzweil to use, since his theme seems to be exponential growth rather than a moment at which things ‘blow up’. But okay, maybe he thinks that our ability to predict the future fizzles out at some point as the exponential growth of everything good reaches heights beyond our grasp. Or maybe he just thinks the word ‘singularity’ sounds cool.

Yudkowsky usefully delineates three major schools of singularitarianism. He puts Kurzweil in the first, ‘accelerating change’—but amusingly, he says this school believes

Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.

Looking at that page, I think it’s a bit skewed towards regarding artificial intelligence as a uniquely special part of technology, whereas as I understand him (implicitly, in everything I write, obviously) Kurzweil thinks AI is just one part of the world of technology that has accelerating feedback loops. E.g., he predicts human-level artificial intelligence at around 2030 but “the singularity” only at around 2045: the first few iterations of self-improving intelligences will keep improving faster, but won’t be beyond “current prediction” for a while. So I’d be inclined to relabel “Event Horizon” as “Event horizon (due to artificial intelligence only)” and put Kurzweil in a class “Event horizon (all technology)”.

This is incidentally why I find Kurzweil’s vision much more seductive than those of people like Yudkowsky: they privilege artificial intelligence as the uniquely non-predictable transformative technology, which seems much more vulnerable to a “capability gap” problem: maybe creating an artificial intelligence capable of self-improvement is a hundred-year task, or a thousand-year task, etc. Kurzweil instead sees any technological improvement as a tiny feedback, all of them combining to give the “law of accelerating returns” (from things like the printing press through current internet search technology through AI to whatever lies beyond that), which seems to me less vulnerable to a sudden capability gap. Even then I think the issue is still very difficult to decide, and I wouldn’t be surprised if people like Blake Stacey are right and it’s a case of seeing a pattern which isn’t there.

Unfortunately I gave away my copy of “The Singularity is Near”, but this point from the wikipedia page fits with what I remember of what he uses the word “singularity” for:

The Singularity occurs as artificial intelligences surpass human beings as the smartest and most capable life forms on the Earth. Technological development is taken over by the machines, who can think, act and communicate so quickly that normal humans cannot even comprehend what is going on; thus the machines, acting in concert with those humans who have evolved into postbiological cyborgs, achieve effective world domination. The machines enter into a “runaway reaction” of self-improvement cycles, with each new generation of A.I.s appearing faster and faster. From this point onwards, technological advancement is explosive, under the control of the machines, and thus cannot be accurately predicted.

It seems to me that one of the most difficult things about finding coherent ‘different visions of our future, and the issues we face’ is that we have enormous biases to think that the solutions to our future and our path forward necessarily follow from doing more of what we’re good at now.

In particular, I think there is a natural tendency to see solutions in terms of our backgrounds, which means that there will necessarily be blindspots. While many people have pointed out that ‘maybe what we need to do is change our culture’ I’ve found it surprisingly hard to come by people who do a good job of talking about the intersection of culture and technology both reasonably and normatively.

That said, here are the top five in this direction, based on my limited exposure (two are online, the others are easily torrentable or findable on AAAAARG):

Small Is Beautiful is a classic which emerged at the same time as Limits to Growth and carries a similar message. But I find it far more persuasively argued, and presented in a framework that’s more portable and intuitive.

Ivan Illich’s Deschooling Society is incredibly insightful; I’m not sure I can easily capture how deep its criticisms go. Don’t let the nominal focus on school or institutions fool you; I think it goes directly to the heart of many of the issues you hope to contribute to.

Also [Ivan Illich’s] Tools for Conviviality is a similarly thoughtful and insightful text about the more general characteristics of sociotechnical systems which link together, from a more technical-rather-than-sociological POV, the issues raised in Deschooling Society (in Illich’s words: "In these essays, I will show that the institutionalization of values leads inevitably to physical pollution, social polarization, and psychological impotence: three dimensions in a process of global degradation and modernized misery.")

Seeing Like a State is probably the most humbling book you could read about what it means to intervene in complex sociotechnical and natural systems. It advocates for a far more humble, nuanced, and robust approach to ‘fixing things.’

Death & Life of Great American Cities is similar in tenor and lessons to Seeing Like a State, but far more closely observed and easier to connect with, since its problem domain is ‘the urban environment.’

I’d agree with that, and go further: I’ve been trying to think whether there’s any book that uses psychology, particularly empirical and evolutionary psychology, to talk about the possible future, given humans’ inherent psychological quirks, blindspots, biases and compulsions, combined with the kind of challenges we’re facing. The single biggest criticism I tend to have of futurology books is that they pick out some pertinent facts and physical/technological predictions but assume that people as a whole are reasonable, rational and energetic beings. This seems to me to miss a lot of the factors that will affect the future (e.g., Facebook updates/Twitter/etc. are clearly popular partly because we’ve got a strong psychological attraction to quick novelty, which an update/tweet provides). But futurology generally just doesn’t take account of so much oddness and suboptimality that’s “wired” into our natures by our biological/evolutionary history. I can’t think of a concrete book that does this, though.

Incidentally, in case it seems odd that someone so open to the singularity ideas thinks this, I actually take this as mildly supportive of the singularity: if an evolved hominid species with so many mental biases, limitations and even faults can achieve what we have (for good or bad intents), it doesn’t seem impossible that artificial processing machines can be built just well enough that they can bootstrap their own future development…

The single biggest criticism I tend to have of futurology books is that they pick out some pertinent facts and physical/technological predictions but assume that people as a whole are reasonable, rational and energetic beings.

There are also the doomster books, which tend to assume roughly the opposite.

I partly agree about the doomster books, but it’s not as if a single “irrationality” is the opposite of rationality: there are lots of different ways to not be rational, and it’s obviously most interesting if the ways chosen match human tendencies.

I can’t think of factual books, but (on the offchance you know any of them) in terms of fiction I’m thinking of things like J G Ballard’s Crash or Super-Cannes, or James Lovegrove’s Days.

“Collapse: How societies rise and fall”, by Jared Diamond, is an excellent book that analyses the causes of the collapse of past civilizations (they are invariably linked to ecological problems), and then tries to draw conclusions for our future. A very interesting and highly recommended read.

Against Kurzweil, here are three of the best books on human evolutionary biology:

Too Smart for Our Own Good: The Ecological Predicament of Humankind by Craig Dilworth (reviewed by George Mobus). It explains the evolution of mankind (in ample detail, with 1755 references to the literature) in terms of a “vicious circle principle”: population pressure begets innovation, which begets population pressure. Looking at the past, the future looks grim given today’s global population pressure. Dilworth’s conclusion for the future: megadeath.

Another book for guessing the future from the past is E.O. Wilson’s latest: The Social Conquest of Earth (Long Now Foundation video). It is of utmost importance to understand our dynamics as social animals (e.g. aggression). This had already been stressed by Konrad Lorenz in his 1973 book, Behind the Mirror: A Search for a Natural History of Human Knowledge (a bad translation of the German title). Lorenz and Wilson are more optimistic, given the growing amount of knowledge not only about the world, but also about ourselves.

I once read A Many-Colored Glass, Reflections on the Place of Life in the Universe by Freeman Dyson (2007, University of Virginia Press). It is based on a set of lectures the author gave at the University of Virginia in 2004.

In the book, Dyson digresses on various topics, ranging from biotechnology and climate change to a (serious) debate on whether life can go on forever.

Changing areas of emphasis? Maybe you should have a look at Singularity University (I understand Ray Kurzweil teaches there).

As for books about the future: I recommend you choose, among the five books, one book about picturing the future (I mean, about the tools and practices of futurists), such as Foresight: The art and science of anticipating the future by Dennis Loverridge. There are many more I can recommend to you.

Another book to consider, along the same lines as your list, is Future Imperfect: Technology and Freedom in an Uncertain World by David Friedman (a physics PhD from Chicago turned economist, now a professor of law at Santa Clara University).

Here’s a quick argument for its impossibility: Forget Descartes’ split of the world into res extensa and res cogitans – Biology shows that mind cannot exist without matter. As long as you can’t simulate life in silico (incl. metabolic closure) you can’t simulate mind in silico.

True artificial intelligence would prove Descartes right (for the artificial mind could run on any computer, and transmigrate from one machine to another machine, thus is independent of matter). Thus, AI is a religious metaphysical dream that’s highly implausible.

Yeah, that wasn’t very rigorous. But I couldn’t help myself. I contracted a Singularity allergy late last century. And meanwhile we’ve indeed reached a singular moment in the history of life: a single species has overshot the carrying capacity of a whole planet. Yet folks keep dreaming of deathless minds. The mindlessness, the mindlessness…
