Kelly on Technology and What Technology Wants

Kevin Kelly, author of What Technology Wants, talks with EconTalk host Russ Roberts about technology and the ideas in the book. Kelly argues that technology is best understood as an emergent system subject to the natural forces underpinning all emergent systems. He argues that any technology creates benefits and costs but that the benefits typically outweigh the costs (perhaps by a small amount) leading to human progress. This is a wide-ranging conversation that includes discussion of the Unabomber, the Amish, the survival of human knowledge, and the seeming inevitability of the advancement of knowledge. The conversation closes with a discussion of the potential for technology to make an enormous leap in self-organization.

Highlights

Time

Podcast Highlights

0:36

Intro. [Recording date: November 18, 2010.] Technology and ideas in your book, provocative book, brimming with ideas, great writing. Central idea is that the technology that surrounds us is alive. Has its own forces of existence that we do not fully control. Book is about making the case for that idea and understanding the benefits and costs. Describe the world of technology and in particular the word you've coined to summarize this phenomenon, the "Technium." What do you mean by the Technium and why don't you just call it technology? The problem the book set out to solve was coming up with a framework for understanding all this stuff in our lives. Surrounded by it or trying to market it or sell it. For me, trying to understand how I should think about the new things coming along: seems as if our technology was just one thing after another. One object, one gadget, one invention after another. Pretty unsatisfying, not really a theory at all. Set out to look at it to see if there was a framework. The only way I could look at it was to see if it was a system. Cell phone or whatever you carry in your pocket--that thing may only be the same size as a stone axe of long ago, made by one person, but the thing you have in your pocket maybe requires a thousand other technologies to create, maintain, and keep it operating; and each of those might rely on hundreds of technologies themselves. That iPhone, let's say, is not a standalone object--it's like, say, an orchid in the middle of the rainforest. Requires and depends upon many other organisms, other technologies to be and exist. I have been using the word "technology" to talk about both the gadget and this ecosystem. We talk about technology in kind of the plural. To me, this ecosystem is everything that is connected to everything else and is behaving like a rainforest. But because sometimes it's singular and sometimes it's plural, I found it kind of confusing. 
So, to emphasize the fact that I'm talking about the larger super-organism of all these technologies together, I call it the Technium. Some people might say: That's basically culture, human culture. I think culture does kind of convey the range of what I'm talking about, because it does include everything that you've made, including the hardware; but it also includes intangibles like the calendar and law and also works of art--the Lord of the Rings films, things that are expressions of the tools as well. But culture does not convey that this is a system; it suggests a gallery or museum more than an ecosystem in which the artifacts themselves are co-dependent on each other, intertwined, and ecologically related. Technium for me was a way to signal to the reader that this was a system, a kind of superorganism, supersystem of things we've made with our minds.

5:45

You might argue we've gone too far or not gone far enough. Some people would look at this web of interacting technologies and say: Come on, it's just stuff we make; not different from any other time in human history. We make a lot of stuff. The other view would say: That's creepy. Alive, wants things in some dimension. Talk about that sweet spot. On one dimension, it's nothing different than a big novel, a creation that can never have any of its own autonomy. Evidence against that would be that there are many attributes of selfhood, selfness, that we know different parts of this system exhibit, and we know from the more articulated systems we've made in computers that they can repair themselves. We have systems that are self-repairing. The Internet can actually self-direct its traffic, and you can create immune systems that self-identify foreign parts and take out spam. Self-governing systems. We know, we have already created aspects of self-hood. Any engineer could identify aspects of self-hood in the Technium. I'm arguing not that the Technium is completely autonomous; it's not binary; it's a continuum. Emerging sliver of autonomy, and that autonomy is increasing over time. It's not independent of us entirely, but there are aspects that are. On the other side of saying it's out of control: We are part of this. It's only a small portion of the autonomy. The Technium includes us as humans: it's in the beginnings of its autonomy and we are absolutely necessary for it to continue on. It can't reproduce without us. But there's another way in which we are in a sense a part of the Technium: We as humans have created our own humanity, invented our own selves to some degree. Obviously we are a continuation of the primate line, but we have also added many things that primates don't have, and we've added those with our minds. One example: our invention of cooking, a way to have an external stomach to digest foods that we would otherwise have to digest biologically. 
The additional nutrition we gained actually changed the size of our teeth and our jaw and the enzymes we have in our body, to the extent we are now biologically different than we were before. We are now biologically dependent in our long-term fertility on cooking. In other places, people domesticated milk-yielding animals and developed lactose tolerance. We are continuing to accelerate the evolution of our own bodies, which is now running 100 times faster than it was even 10,000 years ago. There's a very real way in which we are the first domesticated animals. Self-created; and when you are self-created, that means you are both the creator and the created. We inhabit both sides of this Technium: created by the Technium and creators of the Technium.

11:33

Simultaneous system. Talk a little bit more about the self-hood idea. Example of a robot. Tell that story, looking for a power source. I don't use "Technium" in the title because it was a word nobody would understand. But the other looming word is "want." I hasten to say that by "want" I don't mean a conscious, deliberate, intelligent want, the way we want as humans, as when we say: I want some ice cream or to start my homework. The other way we use want: The cat wants to go outside; the plant wants light--so that tomato seedling in the window will lean towards the light. That leaning is what I'm talking about. The Technium leans in certain directions. Whenever you have a system--which is what I am arguing the Technium is--no matter where it is, it exhibits two natural phenomena. It has a behavior at the level of the system that is not present in any of the parts. So you have a system of a beehive: the hive itself has hive behavior, and that cannot be found in any of the bees. The Technium behavior cannot be found in the iPhone. The second thing is that any system that does have these behaviors exhibits certain biases, certain constraints made by the system. Inherent; it doesn't matter what the individual parts are doing. If the system itself is complex enough, it will exhibit these behaviors that are not contingent on the actual behavior of the bottom parts. That's what I mean by want: that the Technium, independent of our own choices, has certain tendencies--not all of its behavior, but certain leanings and certain driftings. The book is in a certain sense a look at that want. I had an experience where I could feel this sort of unconscious want; that was when I went to visit Willow Garage, which is a startup near Stanford making robots, and they had a robot that has been programmed, has orders, to recharge itself by plugging its cord into an available socket. It's been taught how to identify those sockets but not where they are or when it needs to or how to get there. 
All emergent. It roams through the building when it needs juice, will identify a plug and grab its tail and plug in its cord to recharge its batteries. I had the opportunity to stand between it and the plug that it wanted. I could feel its want--it was behaving like an animal that wants to get out the door or wanted some food. It's not conscious, not very intelligent, but it definitely wanted that. I think I'm using the word "want" in that same way. We have the same issue in economics, a never-ending challenge when I teach. We say in useful shorthand, but it's misleading: The market wants. So, in the face of price controls, the market tries to get around them by degrading quality sometimes. What does that mean? What it means is all the individuals in the market pursuing self-interest in the face of this new constraint are pushed to do something that would actually be counterproductive in a normal market. If a seller tried to degrade the quality of a product in a normal market he'd lose customers, but in the face of price controls, he can gain customers by doing that because he can effectively evade the price control for customers who can't get the good. So those kinds of what we sometimes call market forces are at work in ways most of us aren't educated to think about, or we don't have the language to talk about it very well. What your book strives to do is give people a way to start thinking about that. In some ways this is a successor to an earlier book, Out of Control, which is about these decentralized systems. I think we have a better acceptance of emergent behavior in nature, but when it comes to our own artificial systems we seem to be reluctant to accept emergent behavior there, because we've made these things, so we kind of project our own making on them and don't realize that these systems can exhibit emergent behavior that's not put into them by us as individuals. 
That may be changing over time as people have experience dealing with these very large systems like the Internet. Ironic because last week's podcast was on Adam Smith, and Adam Ferguson, predecessor of Smith, around 1750 or so, said: There are things that are the product of human action but not human design. We're two and a half centuries later, and unless you study economics or technology, you don't really absorb that insight. But we're making progress.

19:07

Startling, though not a central part of the book: your claim that old technologies and old tools don't die. Shocking; think it's true. Talk about your Montgomery Ward catalog experience. Not central to my thesis but maybe one aspect. Basically I posit that the Technium is an extension of the same evolutionary forces that run through biological life and the same self-organizing forces that have been working through 3.7 billion years of evolution; that they self-organized our minds and now our minds are a vehicle for the same self-organizing forces that are propelling the diversity of the Technium. Because of that, the Technium and biological evolution are very similar. However, there are some differences. One is a small one: in biology, extinctions are real and permanent. Global extinctions of species are real and permanent, and a major feature of evolution is that most of the species ever born are now dead. In the Technium--technological evolution--extinction is rare. What happens is that most technologies become obsolete, diminish in their role, but they don't disappear--they are more idea-based and can be resurrected more easily. Seems like a silly claim: the hand-axe, the arrowhead--they don't exist any more. And that's what I thought too. Conversation with a guy who was just adamant about that. The challenge: not just to find an antique version existing in a museum; for arrowheads or steam-powered cars, I wanted to find a place where they were still making them brand new. Looking on the Internet I was able to find an example of whatever the challenger had given to me: a brand new steam-powered valve for a steam-powered car; they're making flint axes exactly the same way, using exactly the same tools, to the point they are almost indistinguishable from the original archeological artifacts, and making them in huge numbers today. As a challenge to myself I took a page from a Montgomery Ward mail-order catalog--basically your WalMart of a century ago, had everything. 
Took the most challenging page, which I thought was farm implements--those had to be completely gone. And there's a lot of them. Things like corn cob millers, things you didn't have any idea what they were, never heard of these things. Very quickly we were able to identify that every single one of them is still being manufactured brand new. Recently I was telling Robert Krulwich, host of RadioLab, science journalist; he was totally unbelieving. He set a journalist and a research intern going through the entire catalog for a month, and they could not find a single thing that was not still being manufactured brand new. There may be something out there. Greek fire is a possibility of something we've lost; but we don't actually know what it is yet. As a rule, technologies don't go extinct; this has the implication that attempts to prohibit technology are just doomed, because it's really hard to eliminate things once they are invented. We can't even do it deliberately. Harder to reconstruct a woolly mammoth right now; but it's also a tribute to our wealth. We can afford to farm in an old-fashioned way. Colonial Williamsburg: If you go there, there are people making shoes by hand as a tourist attraction. That technology and all the tools are the same as those used at the time; but that's an incredible luxury we enjoy because we are very wealthy. It does point to one aspect of the Technium, which is that generally it adds options rather than just replacing them. We are slow to understand; we tend to define technology as anything that was invented after we were born. In fact, most of the technology surrounding us already existed. We are still using concrete, fired brick; huge parts of our cities are this kind of technology. Kind of like in ecology--you are adding species into the mix. 
Sometimes they are replaced, but most often they are just supplemented. That is more the pattern.

27:26

One of the themes of the book is the expansion of choice, and I couldn't agree more that that's what allows humanity to express itself. One of the other charms of the book is you are not a salesman. You confront the costs. You basically say there are these incredible benefits from technology. But you also point out, as critics complain, that it produces all these problems along the way. And yet you argue that the balance is probably net positive, and you provide some evidence. What's the evidence? Truthfulness demands that we acknowledge that every new invention will create almost as many problems as solutions. It's to the point where most of the problems we have in our lives today are technogenic: created by previous technologies. We are inventing technologies to remedy them, but most of the problems you will have in the future will be caused by technologies we are making today. If we look around there are huge problems caused by technology inventions. I would even go so far as to say that a technology is not really powerful and revolutionary unless it can be powerfully abused. Now we sing the praises of the Internet. We have not yet seen it be powerfully abused; but it will be. There are plenty of abuses. Personal level. Crime. Distraction. Talking about really powerful. Having said that, that leads to most of the techno-elites' conclusion--the orthodoxy--that technology is neutral: you can use it for harm or for good. At a certain level, true: you can use a hammer to kill or to build a house. What I'm saying is that's true to a certain extent, but when you invent a hammer, you suddenly invent a brand new choice that didn't exist before, which is to use this for good or for harm, and that choice is in itself a good. A tiny new good, and that moves the balance very slightly in favor of the good. Slight, but that's all we need. If, through technology, we can create just a tenth of a percent more good than harm, then that compounded over time is what we call progress. 
That's what progress is, a slight compounding of choices. Only operating at the level of a fraction of a percent, given to us because we are inventing new choices with the technology. You give some wonderful examples that we are making progress--talked about on this program some of these: the expansion of the lifespan, decline in infant mortality, health benefits; playing tennis on an artificial knee at 75 is nice. One of the arguments of the critics is that that's temporary: that through our vortex of materialism we are sucking the life force out of the planet, and it's all going to come tumbling down; we are about to hit the place where the road is missing. We have to acknowledge the fact that most of the problems are technogenic--environmental degradation, impact on climate, the natural landscape, one of my own concerns: the pervasive way in which chemicals are moving into our bodies, no idea what the chronic exposure does--there are lots of problems caused by technology. Two things to say about them. One is that in the normal accounting we don't normally account for the environmental problems that nature itself presents to us. Disease and other hazards of nature--we tend to ignore those; for every birth in the biosphere there's a death. That's a minor thing but something we should keep account of. The second thing is these problems are real, but so far there is not a single technology we've invented that we have not been able to invent a greener version of. That to me suggests that because the Technium is an extension of an evolutionary force, it is not inherently anti-life; it's most definitely compatible with life. Its inherent potential is to be compatible with life; you can even say it wants clean water, because some of these computer chips require water cleaner than what we drink. The potential is to produce increasing living standards without major degradation of the natural world. 
We find that really hard to see right now because actually the Technium has phases of development, much like an organism; some of the developments of the industrial age were like the terrible twos in humans--selfish, grimy, dirty, self-centered. More recent technologies like the Internet and communication technology have revealed to us a more mature facet, showing us that things don't have to be done in that kind of inhumane, unfriendly industrial way; that the Technium is more biophilic. Inherently I think that's the drift it's moving toward. We can help to accelerate that; we can help to steer it. Think the fact we are constantly capable of inventing things that are greener than what came before suggests that the answer to bad technology is not less technology but better technology.
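Kelly's "tenth of a percent" arithmetic can be made concrete with a short sketch. The 0.1% net-gain rate and the time horizons below are illustrative assumptions for the sake of the calculation, not figures from the book:

```python
# Back-of-the-envelope sketch of the compounding argument: if each
# period of invention tips the balance only slightly toward the good,
# the cumulative effect grows large. The 0.1% rate (net_gain=0.001)
# and the horizons are hypothetical, chosen only to illustrate.

def compounded_progress(net_gain: float, periods: int) -> float:
    """Cumulative multiplier after `periods` rounds of a small net gain."""
    return (1 + net_gain) ** periods

if __name__ == "__main__":
    for periods in (100, 500, 1000):
        multiplier = compounded_progress(0.001, periods)
        print(f"{periods:>4} periods at +0.1%: {multiplier:.2f}x")
```

At 0.1% per period, 1000 periods multiply the baseline by about 2.7 (approaching the mathematical constant e), which is the sense in which a sliver of net benefit, compounded, adds up to what we call progress.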

36:53

Talk a little bit about--you allude to this phenomenon--why is it that you think we have a certain romance or nostalgia for nature on the one hand--it can give you cancer and drown you and earthquake you and landslide you and do all kinds of brutal things--and a certain related romance about some technology, not others? You give an example in the book, I think it's Wendell Berry, who likes certain types of farming technology but not others. If it's before 1940 it's human, it's natural; after, it's artificial. Why do you think we have those urges, tastes? Has to do a little bit with my exposure to the Amish, for whom I found a great admiration. Many surprises--we tend to think of them as Luddites, but they are in no way Luddites. They actually are adopting technology, but are very selective in their adoption. Have not adopted cars, although they will ride in them but not own them. Horse and buggy; dress in gowns and bonnets, no zippers. But at the same time they buy disposable diapers, eat Cheerios for breakfast, use chemical fertilizers, and are really into genetically modified crops. That mix doesn't seem to make sense at first; but mostly I'm trying to suggest they are selective. Because they do do many things by hand--pitching hay into haystacks, milking cows not entirely by hand but not using robots or other things, chopping firewood by hand--the fact that they don't have all these labor-saving devices actually gives them a lot of leisure time. They've optimized their lives to produce leisure and a sense of who they are, a certainty about their roles, and a kind of satisfaction in community, togetherness, and family support that we find really hard to get in our modern lives. That's the romance part, the attractive aspect of their lives. But there's a cost to that, a price, and that's a little bit hidden. The cost is twofold. They are not self-sufficient by any stretch of the imagination. 
They require outsiders to mine the metal they use for their farm machinery, to have the factories that make the rubber they are not going to put in their backyard, the wells of oil that run their diesel engines. They are not participating in that, and consequently they are not inventing the new things--cell phones they are probably going to adopt. But more importantly, the cost of this is that they cut off choices and possibilities. They maximize their contentedness, romance, great sense of who they are, by basically eliminating the choices of what you can do. So if you are an Amish woman you have one role, which is mother; if you are a boy, you have two roles--you could be either a farmer or a tradesman in a shop in the back. That's it. They are not making mathematicians, musicians, people who are going to invent, doctors who are going to make the next medical breakthrough. I think we are attracted to the simple things because the simple reinforces, rather than challenges, who we are. All the new stuff we are making, the new technologies--every single one, from robots to gene therapy, nibbles away at our identity. Even all this stuff on Facebook, all this social media--where do I end and someone else begin? What's me? Who am I? With robots--what are humans about? What are we for? All these new things are a real challenge to our identity. The Amish don't have that because they don't engage with it. The things that challenge our identity are among the things they keep out. There is a huge attraction to be Amish-like, to say our human nature is fixed; it's not going to move or change; we're going to keep it that way. On the other side, all these new technologies reinforce the fact that we are still inventing ourselves, still deciding and still having to decide what we want humans to be. A lot scarier, but in the end probably more attractive to most people. 
You have a chapter in the book on the Amish, and it's an option not many people find attractive; but the ones who do, like it. Reminded of the fact that I keep the Jewish Sabbath, which means I'm sort of, roughly about 15%, Amish. I step off the grid from Friday sundown to dark on Saturday night. My non-Jewish or non-orthodox friends admire that. They often say to me: I wish I could do that. Now, like the Amish, there's a religious belief that makes it easier to accept that constraint. What is fascinating to me is that without that religious belief--and this gets back to what technology wants--your Blackberry, your iPhone, your laptop, and your desktop want you to check them for email. They are saying to you--sometimes it's a beep--come over here. Interesting that that siren is very difficult to say no to. Most people like the charm of the Amish or the Jewish Sabbath, and you could say to them: Well, you can do it--just from Friday to Saturday say I'm not going to watch TV, I'm not going to turn on my Blackberry, I'm not going to get on the Internet, not going to get in my car, not going to go to the mall. But I don't know of anyone who does that. There may be some people literally who do that, but I don't know of anyone other than religious people. It's funny, but I was doing it for a while. The Sabbath was religious, but in Christian circles there is no sabbath admonishment not to turn on your computer. But I actually think it is incredibly healthy to do that, even for non-religious reasons. It's not that work is so bad, but rest is so good. There is a side-benefit I think the Amish would talk about, which is that for 25 hours I'm a little more attentive to my wife and children, and I enjoy their company in a way I can't when the email is saying, "Over here." 
What's interesting is my sabbath vacation from email also was using the Jewish version rather than the Christian version, so for me it was from sundown on Saturday to Sunday afternoon. I'm suggesting not only sabbaths, but also jubilees--we take seasons off. The new book I'm writing now is on techno-literacy: what we need to master now is not the individual technologies but the Technium itself and how it works, and the fact that you do want to have sabbaticals and jubilees, do want to understand that you are always in a permanent newbie state of mind. Skills about learning to deal with the Technium itself, much as you might deal with natural history. Want to know how the system itself works. Hard for people because it does want that. Think there is a romance to the simpler version. Duality is baked into the nature of technology because we are self-created. When you are self-created you are both the creator and the created, both the master and the slave, and I think that duality, that tension, is going to be present. We can certainly get better at being the master part of it. Basically, that's what techno-literacy is: upping our ability to say no. And also, there are people who find that a feature, not a bug, on either side of that, right?

48:43

There's another claim you make, akin to the claim that technologies never die. Hard to believe, but I think you are right; and I think you've been challenged on this also: there's a certain inevitability to evolution in the biological world, but, more interestingly for the book, in the technological world. Almost every invention you can think of was invented by a bunch of people working totally independently, suggesting that there is a force of progress or knowledge-creation that is steered not by anyone's intention but by the previous technologies. Talk about that--amazing. Actually something that surprised me; I didn't have it in my mind before I wrote the book. The conclusion that the sequence of technology is something that is inevitable was something I resisted for a long time. Didn't like that idea--cold technological determinism. Depressing. Didn't want to be one of those. Subjugating human free will. But I think the evidence clearly shows that independent simultaneous invention is the norm. We have an intuitive sense of that, because that's what the patent office is all about. If it wasn't, we wouldn't really need the patent office. Someone would invent something and there it is. But somebody would copy it--so it's just there to keep people from cheating. One way to think about it. But it's also trying to sort out priority. It's most common that when something is being invented, someone somewhere else is also independently working on it. The best way to understand that is to go back to the ecological metaphor. No technology, particularly in modern eras, is a standalone device. It requires all these other supporting technologies. And when all those other supporting technologies are in place, have been invented, the next adjacent invention is almost inevitable in its existence. Who is able to invent it is not completely random. The great inventors are basically those who, like people buying a lot of lottery tickets, put themselves in the position to be the lucky person. 
Not just luck, but the idea that these things are inevitable when the time is right, when all the precursor inventions have been invented. Suggests there is a progression--not in a linear sense but a developmental progression--where, when the time is ready, the technology will go through it; and there's an idea of convergence, taken from the idea of convergent evolution in the biological world. Certain channels--two constraints, a negative constraint and a positive constraint, propelling them through these bottlenecks. We see in the natural world the idea that there are certain forms that life is going to come back to again and again, sometimes governed by the fact that the physics govern it, sometimes by the fact that there are inherent biases based around DNA and the DNA code. You can't produce everything; it's going to produce only some things. So the idea is that it's convergent. The idea of convergence, as well as the idea of the inevitability of a lot of inventions, suggests that much of what happens in the Technium is kind of wired into the nature of the Technium itself and is somewhat, not entirely, independent of our efforts and individual geniuses. Although of course the speed is going to be affected a little bit. And also by the institutions that societies, cultures, nations use to reward and punish and allow technology to flourish or not. Right; I will be quick to say that while the larger, macro forms are inevitable, that's like saying quadrupeds are inevitable. The species are not inevitable, and the expressions of those inevitable ideas--which can't be patented because they are inevitable--the expressions of them are unpredictable and really are dependent on human genius. And they are the things that make the most difference to us. So, while the Internet might have been inevitable, the particular expression of the Internet was not--whether it was going to be open or closed, institutional or transnational, commercial or non-profit. 
All those things make a huge difference. Those things were not inevitable; those things are our choices. They are the speciation of the inevitable technology. One side-thought I had after reading that: We have this great reverence for creative people--correctly, I think--and your suggestion of inevitability causes the reader to doubt that a little bit. So many things, if we hadn't had so-and-so working on it, well, it would have happened six months later and the human enterprise wouldn't have been affected that much. But I feel like we overvalue them, particularly because of what you said, and we undervalue the implementer, whose work is intangible, less glorious. Podcast with Amar Bhide on this. Implementing, marketing, packaging, making the invention useful is in many ways perhaps the more important thing. A lot of our greatest books--not going to suggest yours since yours are of course different, as are mine!--but a lot of great books don't have anything new to say. Nothing new under the sun. Part of the reason they are great is that they package the idea in a way you can absorb, remember, implement, sing to, harmonize with in a way maybe you couldn't if it had been done by somebody in a different style. Our copyright laws kind of recognize that--you can't copyright an idea generically--you can only copyright the expression of it. We honor taking the idea and translating it into the time and place of the culture, giving it an expression that really means something to people. That is not inevitable, not predictable, something that is very choiceful. In the book, there's a little chart about the idea that the more general the idea is, the more people will have it; and as it becomes more articulated and moves down to the market to become implemented, it has to become more specific. As it becomes more specific, it is less inevitable. The person who does bring it to market--there should be a reward, because it is difficult. Only a few will be able to bring it that far. 
Thousands of people have the idea of the jet pack or the invisibility cloak, or the phone you can see the caller on, picture phone; but each step to make it real and get it to many people, it has to become more specific, so it becomes less and less inevitable in its final form.

57:55

Shocking chapter in the book where you say that the Unabomber was onto something. Talk about what he had right and wrong; people choosing to live in cities. Idea about the Unabomber, whose real name was Ted Kaczynski--he was a serial killer who was bombing people who were in favor of technology. Who knows how he chose his victims. He killed three, maimed 23 others. He was operating on the idea that civilization, or at least industrial civilization, was not the remedy for anything and needed to be eliminated. I announced in my chapter that the Unabomber was right about one big thing. He was not right about either his conclusions or his actions--or his justification for his actions--completely condemnable. But he saw very clearly that the Technium was a system that had its own agenda. He was a mathematician and wrote very logically; I was horrified to realize that of all the people I'd read, he actually had the best explanation of the reality of this Technium I'd been talking about. Alarming. Reduced to quoting him rather than any other theoretician because there was no one writing as clearly about how pervasive, how interdependent, how cohesive this Technium is; and why, in his view, you couldn't really even try to co-opt it, use technology to take it down, which he was trying to do. By using technology at all you were kind of participating in it and making it greater. Easy to say: you use technology, you use bombs; but a lot of his victims were far away; was he supposed to go strangle his victims with his hands? But he is only one of a small group of people interested in bringing down civilization; most of those folks are writing their missives, manifestos on Macintosh computers at Starbucks. Long tradition in that. At least Kaczynski made a halfway attempt to live in a shack in Montana; however, even he was completely dependent on his rides into town to WalMart to buy stuff. He didn't go all the way because it is too difficult.
Our humanity is too dependent on technology to give it up completely. Self-sufficiency is the road to poverty. If he had tried to live entirely by himself he probably would have died. But he was right to see that this was a system with its own agenda. Most of the people who see that are people who have criticisms, who don't want technology. Very few are boosters of technology like myself. Quoting him at length. Even the most severe critics can see that as well. Disagree with the Unabomber because he says that the agenda this thing has is to rob people of their freedom; you can see this was a personal thing with him. He felt it was dehumanizing, controlling, robbing him of his individual freedom--and others'--and therefore it had to be completely destroyed. Not just amended or fixed, but eliminated from the face of the earth. I came to a vastly different conclusion, which is that actually what this brings us is increasing choices and freedoms. I don't think he had many freedoms living in the shack. I think he had a lot of latitude but he didn't have a lot of actual degrees of freedom. I think that's what technology does bring to us, increasing choices, options; those are all good. That's why we move to cities, because cities offered more of those. He moved to a shack that had less of them; the kind of shack people around the world are trying to escape from. They come to the cities, miserable places filled with horrible smells, human waste, yet they don't go back to the village because they get something in return. They have at least a chance of getting their kids educated, at least a chance of getting medicine if they need it, something different than hoeing potatoes. They put up with conditions not as romantic or beautiful as the mountains they left. Kaczynski had his own personal problems and didn't see the opportunities technology was bringing. He's right that it was bringing environmental degradation, which we should overcome, diminish; and I think we can.
We have that choice, too. We've done away with quite a bit of it, moved away from burning forests, polluting the air. Making progress, still a lot to do.

1:05:42

Story: my favorite cartoon. Shows a newlywed couple leaving the church; people are throwing rice, car covered with Just Married, stuff tied to the back. Groom has his hand on the radio knob of the car, and he says, "I'll just check the score." The cartoon is called "The First Straw." Real work of genius. I don't know who did it. Certainly a lot of men struggle to stay off the Internet, want the latest score--they don't just want the latest score: they want to know if it was a ball or a strike. Already see it in my sons. My daughter doesn't struggle with it. Think it's a sex-based struggle, but maybe I'm wrong. Technology does try to control us if we are not careful. But there are people worried about a much bigger form of control: the technological singularity is one name for it--that this Technium is going to not just have some autonomy, some self-hood, but it's going to actually take over. Not going to just influence our behavior and the quality of our life; it's going to rule the world. What do you think of that claim, and if it's true, what do we do? Several different scenarios. A bunch of scenarios for this phenomenon of the singularity; and second, there are other scenarios beyond that. First have to entertain all the scenarios. There's one version that I think is not plausible, one you don't have to worry about. Extreme version, Raymond Kurzweil, who wrote the book The Singularity Is Near. Very specific: idea that by the year 2039, I think, or 2040 or something, we will have made an intelligence smarter than us, which will have made something smarter than itself; and the rate at which we make that thing smarter and it makes the next thing smarter keeps increasing, like a half-life. Almost within days it's made something so smart it can solve basically all our problems. It can think through and do simulations in its mind. The major thing it's going to do is solve our problem of mortality; it will solve immortality for us.
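The accelerating-improvement claim described above can be made concrete with a toy model (my own illustration, not from the book or the conversation): if each machine generation doubles in capability and halves the time needed to design its successor, the design times form a geometric series that converges, so unbounded capability arrives in finite time.

```python
# Toy model of recursive self-improvement (a hedged sketch, not Kurzweil's
# actual math): each generation is twice as capable as the last, and the
# time to design the next generation halves each step.
def singularity_toy(initial_interval_years=10.0, generations=30):
    t, capability, interval = 0.0, 1.0, initial_interval_years
    for _ in range(generations):
        t += interval          # time spent designing the next generation
        capability *= 2.0      # each successor is twice as capable
        interval /= 2.0        # ...and works twice as fast on its successor
    return t, capability

elapsed, cap = singularity_toy()
# Total elapsed time converges toward 2 * initial_interval (20 years here)
# no matter how many generations run, while capability grows without bound.
print(f"after 30 generations: {elapsed:.6f} years, capability x{cap:.3g}")
```

The point of the sketch is only the shape of the argument: the "singularity" follows from assuming the improvement interval shrinks geometrically, which is exactly the assumption Kelly's "thinkism" critique targets.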
Anyone living at that time, it will tell them how to live forever. Hayek and the knowledge problem: the idea that one source of computing would know so much, as if data would be the source of wisdom. I call it "thinkism." Completely erroneous. Extreme case of thinkism. Idea is all you need to do is live long enough to be there for the singularity. This is why Ray is taking 250 pills a day, to make sure he lives, because on the other side of this is some kind of rapture moment. Everything is solved. Also you'll be able to recreate your ancestors, so Ray is going to resurrect his father. There's a name for this: religion. Primal belief in a lot of religions. This is a new one. Very improbable that this will happen. There is another version of it. Creepy version; going to enslave us, cut off our food supplies. To me that's a lot more possible than that we're going to have immortality. There are other versions, a little milder. Weak version says that there is a phase change--speed and acceleration and ubiquity create something hard for us to see right now, as impossible as it was for the pre-language proto-humans to envision what language would bring. Could be equivalent to inventing language. Plausible but not the only scenario. Will the Technium become so big it takes over? I think yes, we have to entertain that scenario. But the fact is that we are part of that scenario--don't see a direction where the autonomy starts to take over. We have a nature/nurture thing; culture has not taken over our genes. Hard to unravel what we owe to genetics or to culture. Cybernetic, cyborgian, symbiotic. Idea of becoming more symbiotic is a second scenario to the one where they take over and eliminate us. Other scenarios we need to consider as well. First thought for those stories is dismissive, at least in the way they are described. Evil people can get hold of technology.
But the part that is a little alarming: as an economist--you can't repeal the laws of economics, forces can't be stopped; play with them at your peril. Our ability to steer the Technium may be limited. The better we understand it the better we could understand the forces under which it could be steered. Whatever ability we do have, we should maximize. The only way we are going to understand our ability to steer is to understand where we don't have it. We don't necessarily have to do everything technology wants, but we certainly have to understand what it wants in order to be able to exert what we want. Important to understand how this system is working. If we just assume we are steering 100%, that's a recipe for being blindsided. Not saying we have no control.

Comments and Sharing

I write this while listening to the podcast (about halfway now), while transferring an old Michael Parks LP to MP3 for my iPod, and with a box of Smith Corona word processor ribbons on the desk from my recent ebay search for a machine with a working drive so I can recover my old papers/theses from school - the paper versions long since lost in one move or other.

Just wondering if this is more meta-analysis than real. Technology is complex because we made it that way and enable it to become more so. Technology can't respond to its environment unless we tell it to do so. There is no real "life" there, just an imitation, a trick of anthropomorphism that we impose on it. Akin to pagan idolatry - golden calf, etc.
You won't see bumper stickers urging us to "Save the Word Processors!"

I recently read an article about starfish/spider analogies of complex systems, and read that an Asian country was literally using mold (?) biological growth experiments to help plan its rail system efficiently. I also recently read that Sony has now stopped making the Walkman.

My point is, it still boils down to the fire in our own heads. It's still a creation of man, and the analysis is more our modern version of the Enlightenment Clockwork analogy (or even Clockwork Orange - man as a programmable machine). I once described chemo/marrow transplant as erasing the memory and rebooting with a new operating system.

Concerning threatening manipulation of the internet - I can't imagine a more gross violation than China's collaboration with internet companies to deliberately keep its populace ignorant of movements like the Tiananmen Square massacre, or any other facts its leaders care to hide.

I liked the podcast but believe that Kelly's thesis is both needlessly broad and not successfully defended, in my opinion. He could have simply focused on the ways in which technology facilitates this or that human behavior. If the goal is to describe technology as an essential part of a living system, then you really need to exhaustively define the elements of the system.

My attempt using Dawkins' memes:

You have the memetic code of an item (essentially) existing as electro-chemical pathways inside a variety of human brains.

Then there's the material form of the object. "Material" is used loosely. The memetic code is also material, just like our genetic code is material but the word generally works.

Finally there's another section of memetic code that describes how the object is to be interacted with (why use it, use it for what, when, with whom, etc.). This, let's call it "meaning", is more than the sparse materially functional fragment that Kelly focuses on. Indeed I think this is the section where his description of the technium is severely deficient.
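As an aside, the three-part scheme above can be sketched as a minimal data structure (a hypothetical illustration; the field names are mine, not Kelly's or the commenter's exact terms):

```python
# A technology modeled as memetic know-how, a material form, and a "meaning"
# (the encoded intentions of its user), per the three-part scheme above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Technology:
    know_how: str   # memetic code: how to make and maintain the object
    form: str       # the material artifact itself
    meaning: str    # why, when, and with whom it is used

# The same artifact with different meanings counts as a different "species":
farm_hammer = Technology("forge iron head, fit handle", "iron hammer", "farm tool")
ritual_hammer = Technology("forge iron head, fit handle", "iron hammer", "religious object")

assert farm_hammer != ritual_hammer   # identical form and know-how, different meaning
```

The design point is that equality includes the `meaning` field, so two physically identical hammers are distinct technologies, which is exactly the claim being argued below.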

To say that a flint hammer today is the same species of technology as a flint hammer 100,000 years ago is to completely ignore the "meaning" of the technology. Similarly an iron hammer used as a religious object is not the same technology, the same species in the technium, as its farm tool cousin. And yes, as the same physical item passes from one person to the next it can become something else. Because, unlike actual living creatures, objects, ironically, do have a "soul", and it resides in the mind of their owner. Everything about an object's behavior and effect on the world (other humans included) is determined by the memetically encoded intentions and thought process of its user.

This is not the case with a cow. The things that a cow does (eat, reproduce, sleep, get sick, get better, defend itself, etc.) are the same regardless of owner. There's an internal decision-making process that human beings can influence (a whole lot), but not completely control.

Kelly's analysis of the Amish, of the difficulty of banning technologies or of the very nature of technology and the possibility of its extinction, largely ignores the importance of "meaning" in defining a given species of tech. Two different human beings can look at the same object or procedure and see completely different technologies, depending on the purpose and meaning they ascribe to the object.

If the goal is to explain the role of technology in human society this is, in my opinion, the essential thing that must be considered.

Allll that said, this was a wonderfully thought-provoking discussion. I think I wrote about 4 pages of comments for myself after listening to only about 3/4 of it.

Thanks for another great podcast. I still haven't finished the book, but I was happy that the interview touched on several apparent incongruities that I've been unable to reconcile between KK's recent writings and facts on the ground (or at least my own first-hand observations from a relatively fortunate vantage point within the "Technium"). Alas, those questions still remain unresolved -- but perhaps Mr. Kelly will be as generous with responses here as have many of your other interviewees…

My puzzlement basically boils down to this: if we stipulate that the Technium truly is autonomous, at least to some degree (c.f., KK's examples of the "victory" of Moore's Law-driven possibilities over narrow private economic interests in WTW, pp. 160-163), and we further presume that the Technium is broadly similar to the organic/ecological systems that inspired him to choose that particular neologism, then what should we expect to happen next year when remaining reserves of the Internet's most basic and critical protocol resource (IPv4 addresses) are completely depleted, but -- as is almost certain to be the case -- the designated successor protocol (IPv6) still continues to be operationally hobbled (if not completely broken) for individual customers and service providers who do not *also* have IPv4? [Note for those who are not familiar with all of the relevant issues, the hypothetical outcome described here would basically represent a triumph of "what short-term private economic/commercial incentives want," with the likely casualties including most/all future Moore's Law-related improvements of the kind that have made bandwidth, computer processing power, and memory/storage so much more affordable for everyone over the past two decades.]
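For readers unfamiliar with the scale of the scarcity at issue, the gap between the two protocols is easy to quantify (a back-of-envelope sketch; the population figure is a rough 2010-era assumption):

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32          # about 4.29 billion addresses
ipv6_total = 2 ** 128         # about 3.4e38 addresses

world_population = 7e9        # rough figure, assumed for scale only
print(f"IPv4 addresses per person: {ipv4_total / world_population:.2f}")
print(f"IPv6 addresses per person: {ipv6_total / world_population:.3g}")
```

Under these numbers IPv4 offers less than one address per person, which is why exhaustion is a structural certainty rather than a temporary shortage, while IPv6 offers roughly 5e28 addresses per person.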

If the Technium becomes increasingly fragmented as a result of widespread, recurring Internet protocol scarcity-driven partitioning events that greatly reduce or eliminate the possibility of interaction between now-isolated pieces of the formerly integrated system, and this condition persists for an extended period of time, should we assume that this is what Technology, in fact, wanted? Would it be possible to reconcile such a devolutionary, arguably tragic outcome with the basically Lamarckian evolutionary vision of the Technium laid out in WTW?

Perhaps Mr. Kelly will see no contradiction here because he (or the Technium) disallows the very possibility of any outcome like the one described above? If so, it would be good to hear that explicitly. It might even make me worry just a little bit less, evidence of the senses notwithstanding…

Fantastic podcast. Russ: comparing this with the interview on "Surprisingly Free", which wasn't bad, shows the extent to which more time and better discussion bring out more from the guest. You rock!

I'll only comment on one narrow point: the assertion that technologies do not disappear. I was as surprised as the rest of you to hear that every item in the Montgomery Ward catalog was still currently being produced, but I don't think that example proves anything. For one thing, the artifacts themselves may still be available, but the factories used to produce them will certainly have changed. Technology is not just the stuff produced, but also the skills used in production and the social organization that supports the arrangement of skills. It's a common error, especially by those who have never run a factory, to assume that production is an entirely logical process. Not so. There's an amazing amount of non-quantifiable knowledge involved in all production processes. Modern technology and manufacturing methods are designed to minimize the variability inherent in more skill-based processes, to the extent that many skills have actually gone extinct. Here are some examples: who today knows how to operate a money lending operation using only Roman numerals? Presumably this was a common skill in the Roman empire. Who today knows how to build a large wooden ship using only human powered tools? Nobody does this anymore. Who today knows how to manage a phalanx in battle? Who knows how to operate a lead mine using slave labor? A particular technology can't be separated from the social forms that gave birth to it. As those change, the technology changes. Fortunately, many of the barbarous social practices of our past have vanished, along with their particular technologies. Yes, we still have banks, ships, armies, and lead. But the societies, and the particular social/technical organizations that provided them in the past, are gone. As another commenter pointed out, the stone ax made today has a very different meaning than the prehistoric example. The artifact may still be here, but the technology that provided it has vanished.

Great interview (as usual). I'm definitely going to have to pick up Mr. Kelly's book.

I have one small thought regarding the extinction of technology. I have a feeling that the number of extinct technologies is generally likely to be vastly underestimated. While I was blown away by the fact that all of the items in the Montgomery Ward catalog are still produced, I kept wondering "what about all the items that aren't listed in there?" There must be many technologies that have been completely lost, with no records of them at all. It seems to me that this kind of complete (or near-complete, as in Greek fire) loss of the records of a technology is the only way that a technology can truly go extinct. If any substantial record of the technology exists, there is a good chance it can be reconstructed, and as a result, with six billion people around, there is a good chance that it will be reconstructed.

To summarize: It's highly likely that most of the technologies that we would consider to be extinct are technologies that we cannot find any references to.

(1) "The market" is, linguistically, a form of metonymy, a specific class of metaphor (sort of).

(2) A great book, "Before the Dawn", describes human evolution over the last 100,000 years from the perspective of molecular biology. It is made clear that the separation of physical and cultural anthropology as separate disciplines was a decidedly bad idea. The human brain evolved to keep up with the cultural evolution. (Even tries to get at the un-PC concept of a genetic basis of intelligence.)

(3) The Teaching Company has audio courses on CD (phenomenal). One on "History of Science: Antiquity to 1700" describes the development of technology and science. What jumped out at me was the realization that medieval beings were by no means oblivious to science and technology. They were, for example, busily trying to measure the distance to Saturn while studying Plato and Aristotle in the evenings. Their value, 73 million miles, was way off but illustrates that they had wrapped their brains around some amazing numbers. Not exactly a Monty Python image. Curiously, the notion of a "flat Earth" was a construct of 19th century scholars pondering the ignoramuses of the past. For the record, anyone tempted to read "How The Irish Saved Civilization", a very tempting thesis of the archiving of critical human knowledge during the middle ages, should save their money. IMO, it was horribly boring.

(4) Evolution does not have to be in a direction that humans would consider positive. I would argue that the egregious behavior within the banking system is, in large part, an evolutionary (free market) response to bad monetary policies. We tend to chalk up the bad stuff (maybe the Soviet Union?) as human error when in fact it was one of many experiments that cultural evolution tests for fit and finds wanting.

(5) I think government is an evolving system that has, as its sole purpose, to propagate itself. (Athlete's foot comes to mind, which suggests that one should be long cortisone.)

(6) Algorithmic trading programs self-direct until implosion or human intervention. They seem to develop a mind of their own for brief periods. (Obviously not true but...)

(7) In my world (physical science), progress is made through completely evolutionary steps, not giant eureka moments. Monkeys at typewriters, all desperately trying to become a little more than a monkey. Human genius is a collective wisdom, not individual wisdom. The Einsteins and Edisons (Edison et al.) are the exception, not the rule.

(8) Bobby Jones, the golfer, pieced together an eclectic mix of golf clubs based on feel. Supposedly, by modern measures, they were almost a perfectly matched set.

(9) The Unabomber triggered a recollection of an interview with Jeffrey Dahmer in which he explained why he ate his victims. It was shockingly compelling in that you could understand his torment and his struggle to release the pressures.

First off, I haven't read the book, so I guess I don't know the whole argument that Kelly has against the proponents of the technology singularity, which he and Roberts call "Thinkism".

I think it actually might have been this podcast where a guest speaker, a scientist, said that we are today in an age where science does not concern itself with mysteries anymore, but with puzzles. The idea behind this is that back in the day, a scientist had a problem and would have to find the missing pieces to answer it.
Today science is swamped with huge amounts of data about every aspect of any problem. The task of a modern scientist is now to make sense of that data, sort it into a pattern that makes sense, or in other words, piece a puzzle together.

With this in mind, is it so far-fetched that an extremely fast AI could run sophisticated simulations which analyze the interactions of these data-points and come to new conclusions, or at least give an output of what data is missing to get a relevant pattern?

Why is this unreasonable? AFAIK Hayek said that no entity today can have all the information which would be necessary to fine-tune a complex system such as a market. Why would this speak against the proposed AI that actually could analyze these amounts of data?

That is exactly what is happening. Molecular biologists are using very sophisticated computer programs, scanning sequences of everything under the sun looking for patterns. There are fitting programs in physics (and probably on Wall Street) that will hurl 2500 adjustable parameters at a problem and simply let the computers cull out which are relevant and what they correlate with.
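A toy version of that parameter-culling approach (a hypothetical illustration, not any specific physics or Wall Street program): generate a signal actually driven by 2 of 20 candidate features, then let a simple correlation screen cull the relevant ones.

```python
# Throw many candidate parameters at a problem and let the computer cull
# the relevant ones via a correlation screen (a minimal sketch).
import random
random.seed(0)

n_samples, n_features = 200, 20
X = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_samples)]
# Only features 2 and 7 actually drive the signal; the rest are noise.
y = [3.0 * row[2] - 2.0 * row[7] + random.gauss(0, 0.5) for row in X]

def correlation(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

relevant = [j for j in range(n_features)
            if abs(correlation([row[j] for row in X], y)) > 0.3]
print("features kept:", relevant)   # the screen should recover features 2 and 7
```

Real pipelines use far more sophisticated machinery (regularized regression, cross-validation), but the logic is the same: vastly overparameterize, then let the data discard what does not correlate.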

"(5) I think government is an evolving system that has, as its sole purpose, to propagate itself. (Athlete's foot comes to mind, which suggests that one should be long cortisone.)"

Hi David,

Out of curiosity, what would you say distinguishes your usage of the term "government" (or for that matter, "culture" or "physical science" or "purpose") as somehow less metonymic than your description of "market"? To a casual observer who values self-consistency, this statement suggests that "metonymy" may not be the only rhetorical tool in play here (cf. "code word," "loaded language").

Also, since it seems that no (non-epiphenomenal) thing that doesn't embody some means for self-perpetuation over time (often accompanied by adaptation/learning) exists long enough to matter -- or at least, long enough for us to notice -- your singling out of "government" as the one thing that has no other purpose got me wondering. What other (non-perpetuation, non-propagation-related) purposes do you attribute to "culture" and "the market"? How did they come by those (contingently) virtuous secondary features? Did "government" once embody some kind of adaptive/fitness-enhancing function but subsequently lose it, or is "government" truly an unprecedented exception in biological history?

All of this seems doubly puzzling because in (4), you observe -- apparently with equanimity -- that evolution has no particular direction or purpose, and immediately afterward you seem to imply that "the market" is just one kind of evolutionary mechanism (maybe one that just happens to be of particular interest to us humans). That would seem to imply that the collection of human institutions called "banks" are "evolutionary" in some sense that places them beyond all considerations of right/wrong, productive/counterproductive, or adaptive/maladaptive, but the collection of human institutions called "government" is not.

Is that actually what you're suggesting? If not, a correction/clarification would be greatly appreciated.

That's a lot of questions. I totally view evolution and those entities and organizations that evolve as non-directed. Optimization comes in the culling process. The selection bias shows you the ones that work--the ones that withstand the test of time--and hides the ones that represented dead ends (became part of stratified layers somewhere). I also do not view desirable traits as relevant, only the self-perpetuation (think cockroaches). That is not to say that I don't have strong preferences; it's just that what we want isn't necessarily an important parameter. To think of government as "We the People" is in part what I find not necessarily accurate.

When asked how we evolved to be such elaborate organisms, one of my genetics professors indicated that every one of our ancestors out of the many billions was a winner in the Great Wimbledon Bracket. Not a single failure along the tree. What would a failure look like? That's simple. It died before it perpetuated its DNA. (There is the pithy claim that a chicken is simply an egg's elaborate mechanism for propagation.)

I singled out government because it is such a massive part of our lives, is by no means necessarily acting in the interest of individuals as often suggested, and will not necessarily evolve in a way that pleases us (We the People). Where government differs from biology is that it can spontaneously reinitiate (get a Mulligan), whereas an evolving organism, once extinct, is gone for good.

As a final note, I guess I don't distinguish banks from government except the former seem to control the latter in the hierarchy. This doesn't have to be the case, but it is currently my view of the global banking system. And, for the record, I find it repugnant. Alas, what I want is not necessarily a relevant parameter.

I was a little offended when you said you have to study economics or technology to understand it but left biology fully out of it. It's funny because you are talking about a concept from biology (inspired by economics) that has been applied to technology. I'm 23 minutes in right now, and the guest's explanation of the differences between the Technium and biological evolution could have had different early organisms pasted in instead of the technologies. He doesn't know enough about biology.

Technologies do go extinct, we just don't know about them because the records have to be scrubbed before we can be said to have lost them for good.

Thanks for the clarifications David (and no, I must be the other Max),

FWIW I'm with you 100% on the role of contingency and path dependence in determining wherever it is that we happen to be, not only now but at every point going backwards and forwards in time. Also, like you, I harbor no particular illusions about any timeless, invariant and universal causal linkage between everything that any of the varied institutions that fall under the rubric of "government" do on the one hand, and every wish and preference harbored by every individual "we" that can be classified as one of "the People" on the other. But then that doesn't really seem like a reasonable standard for judging the merits of any real-world institution -- nor, I believe, does it actually represent the implied (and typically, inherited) bargain between peoples and their governments as advertised or perceived by either party.

So it seems that we may part ways with respect to latent illusions about the metaphysical status of "the market." As you rightly note, individual biological organisms cannot count on getting any second chances at life -- at least, not the ones that live in (the state of) nature. But the institution of property doesn't exist in that state either; exchanges of various kinds do occur, but none that I know of that could provide any privileged naturalistic justification for certain kinds of "unnatural" (aka human) institutions like corporations and banks over others that one might categorize as "government." Banks and corporations and governments are equally "biological" in nature, which is to say that each is pretty much orthogonal to the domain of biology. And in terms of salience to individual day-to-day life, "government" may loom pretty large for some people -- but do you think that it looms as large or larger than "the market" or "culture"? In any event, perhaps the de gustibus non est disputandum rule applies.

But perhaps our views are not so different after all. Banks (or perhaps "the market" more generally) have clearly exercised much more influence over "government" for the last 2-3 decades leading up to the current unpleasantness. I agree with you that "repugnant" is a fitting description for the results. But for better or worse, we (contingently) find ourselves living in a world where property rules are not enforced by the underlying physics of the universe, and where the fungibility of monetary resources by definition entails convertibility into power and influence which may be exercised in any domain of human activity, including the domain of property rule-setting. Even if our current oh-so-clearly-imperfect institutions were to all disappear tomorrow, the next day another set of equally (though perhaps differently) imperfect successors would start emerging. I expect this because IMO sentient beings really don't like living in the state of nature much; they tend to value the notion of "security," which in an uncertain world will inevitably lead them to construct new mechanisms to provide buffers against both misjudgments and bad luck -- buffers that exist today in the guise of both corporations and governments.

So unless/until we wake up in a universe where the fundamental laws of physics both enforce property rights and simultaneously establish absolute limiting conditions on the "gravitational" influence of evolving asymmetries in accumulated property (or maybe one in which sentient beings have no concept of insecurity), we're going to continue to be stuck in a world with imperfect institutions -- i.e., the kind of world where the only available options are basically pragmatic in nature.

Pre-emptive P.S. for our gracious list-host; this will be my last comment for this week.

I think we are in reasonable agreement. My view that government lives to perpetuate itself does not mean that it is necessarily good or bad.

I went to a government seminar a couple of weeks ago on the crisis of capitalism. The speaker was clearly a person who hated capitalism to the core (to the extent that I could sort through her bloated prose). She had this nasty habit of saying absolutely nothing using the most florid prose. (It really did sound like one of those prose-generating computer programs.) I realized, while fighting off boredom, that crisis and capitalism are synonymous. In any case, the real high point of the talk came when I asked her to clarify something, she gave me more bloated, content-free prose, and I told her that if she really wants to "have a dialogue" she would have to "learn to speak more clearly." It was an unwelcome suggestion.

As I listened to the discussion of the Amish, I was reading an article about how Apple is "banning" in-app donations in iPhone and iPad apps, rejecting features from trusted companies like PayPal with no explanation. Kelly's suggestion that "things that change their identity, they just keep out" is the most concise description of the new Apple I've seen.

The better we understand it, the better we can understand the forces by which it could be steered.

That doesn't sound very Hayekian. I remember some song lyrics about someone wanting to steer markets and someone else wanting them set free.

It seems that the discussion in this podcast focused on aggregates of the complex and dynamic system of technology development, and I was surprised that Russ didn't seem skeptical of this -- quite the opposite. How is the "technium" any different from other complex and dynamic systems that emerge, like language or customs? Why should it be different?

Since the Laughlin podcast, I have been continually tripping out on the layers of emergence that lead to us humans. The technium: another emergent phenomenon that possesses uncanny properties in common with what we call life? Why not? I've gotten over feeling threatened by the concept: if it is what comes next, so be it. We should find ways to get along with it/them.

I also think most of the people who are not convinced that the technium has as much aliveness as Kelly ascribes to it have perfectly reasonable points. However, emergence has often had surprising outcomes--why should that stop? It's hard to systematically predict whether a given system can shock the observer by developing something as baroque as life on earth, say, or whether it will produce gray goo.

I hope I am alive when we encounter the first other intelligence like ours (here's hoping we don't blow ourselves up before!). It will be most interesting to see how much of the same stuff they know, and whether they have trade, etc.

The example about the robot looking for the plug was a terrible example of "wanting".
A robot plugging itself in is no more "wanting" to plug itself in than an arrow shot at a target by an archer "wants" to hit the target. They are both simply programmed to hit the target (if you will allow the archer to be considered a sort of arrow-trajectory programmer); the only difference is that the robot, by plugging itself in, is able to move on to new targets (i.e., plugs) by way of cleverly engineered mechanisms. It sounds like he was simply wowed by the experience and over-anthropomorphized (biologized?) the robot.

This isn't to dismiss the idea that technology can be fruitfully thought to have "wants" and otherwise biologized, it is just that this example doesn't fit in well with the thesis.

Once, some time back, I played a game on a Unix-based computer at the National Bureau of Standards called "5 in a row." The object was to get 5 squares in a row -- horizontal, vertical, or diagonal. A human played the machine. If the human won, the computer noted, "You won. Congratulations. I watched what you did and you won't win that way again." The strategy was fairly simple: if you could get two open-ended strings of 3 with one move, you would win. Permitting the computer to get an open-ended 4 in a row was a win for the computer. (Think about it.)

Later I found a new file that the computer had developed over time, which I deduced was its memory for recording my wins. It occurred to me that if I could get that memory to play another machine, eventually the other machine's memory file would exceed the ability of the first.
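The threat logic in that strategy can be sketched in a few lines. This is my own illustrative reconstruction, not the original NBS program: the function name `open_runs`, the use of `"."` for empty cells, and the single-line board encoding are all invented here. The idea is that an open-ended run of 4 wins outright (two winning squares, only one can be blocked), so a move that creates two open-ended runs of 3 at once forces that same position one move later.

```python
def open_runs(line, player, length):
    """Count runs of exactly `length` stones for `player` in a board line
    that are open (an empty "." cell) on both ends."""
    count = 0
    n = len(line)
    for i in range(n - length + 1):
        run = line[i:i + length]
        if all(cell == player for cell in run):
            # Both neighbors must be empty; this also excludes longer runs,
            # since a neighboring player stone would extend the run.
            left_open = i > 0 and line[i - 1] == "."
            right_open = i + length < n and line[i + length] == "."
            if left_open and right_open:
                count += 1
    return count

# Two open-ended threes at once: the opponent can only block one of them.
board_line = list(".XXX..XXX.")
print(open_runs(board_line, "X", 3))  # → 2
```

A move that raises this count to two is exactly the "two open-ended strings of 3" double threat the commenter describes, since the opponent can block only one of them per turn.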

The cartoon The Last Straw, which Jeremy H. helpfully identified as being from The New Yorker, can apparently also be found in The New Yorker Book of Baseball Cartoons. It's available on the Amazon Marketplace at $3.71 with free shipping, or for free (actually, $0.01) with shipping at $3.99. Is $0.29 the price of being able to say “I bought it for a penny”?

This is a late comment, but I'm just now listening to Kelly's book (as a result of this podcast, so I guess I'm commenting on both).

My only critique so far is that Kelly tends to treat technology's organic fashion of growth as unique to it, when in fact anything that grows, evolves, or develops seems to take on this organic, self-organizing nature. Russ's praise of the book is most likely rooted in his own love of Hayekian spontaneous order, which is of course organic in the same fashion.

Point being that Kelly's book offers a lot of food for thought, but at times he comes across as almost too self-congratulatory for discovering this organic quality when in fact it is all around us.

Also, his treatment of matters still up for debate is a little arrogant. One of the traits I admire most about Russ -- and one I honestly try to emulate -- is that he acknowledges when he has a firm opinion on something others may not agree with.

For instance, Kelly mentions "the invention of morality." I'm acquainted with the debate over how morality developed in the human mind, and it is far from settled. Kelly speaks very certainly about something he cannot be that certain about in reality, which casts doubt on his overall judgment, regardless of how attractive some of his ideas may be on the surface.

All in all it's an interesting read. Those interested in a more down-to-earth approach to similar subjects would enjoy Henry Petroski, by the way.

Sound engineer: Rich Goyette
Music from Cleared up Sunset, by Yasuhiro Tsuchiya / unplug
Picture of Russ Roberts courtesy of the author.
All opinions expressed on EconTalk or in the podcasts reflect those of the authors or individual commenters, and do not necessarily represent the views or positions of the Library of Economics and Liberty (Econlib) website or its owner, Liberty Fund, Inc.

The cuneiform inscription in the Liberty Fund logo is the earliest-known written appearance of the word "freedom" (amagi), or "liberty." It is taken from a clay document written about 2300 B.C. in the Sumerian city-state of Lagash.