“Technology is the 7th Kingdom of Life” – A conversation with Kevin Kelly

Kevin Kelly doesn’t need much in the way of introduction to Radar readers. He is a big thinker looking at the intersection of biology, technology and culture.

Kevin gave a great High Order Bit at the Web 2.0 Summit and I caught up with him afterward. This interview covers:

The impact of the web on our recent elections

The rich new possibilities for interaction and collaboration afforded by the web

The Wisdom of Crowds vs. the Stupidity of the Mob

Technology as the 7th Kingdom of Life, looking into “what technology wants”

This last section (at 7mins 30 secs) is the deepest and most provocative. Kevin assumes the point of view of technology to assess its needs and wants. This line of inquiry leads to some surprising conclusions.

My favorite quote from the conversation: “We are the sexual organs of technology.” Indeed.

Isn’t he really adopting a kind of ethical nihilism here by attributing to “technology” a teleology that denies the role of humans as the ultimate “deciders”? To unpack that a bit:

He talks at length about what technology “wants”. For example, he says, technology wants clean water for the manufacture of chips. He says technology wants clean air (for unexplained reasons). And, yes, there’s that quip about humans being “the sexual organs of technology”.

Well, this is some very old school sophistry — that word game goes way back. A little green alien landing on the planet 250,000 years ago might wonder, upon first glance at things, whether a web is part of a spider’s toolkit for making more spiders or if it isn’t the opposite: perhaps a spider is a web’s way of making more webs.

Both are true in the most vacuous sense, but once our little green alien observes the life cycles more closely, surely it will conclude that it is more natural to recognize that spiders make webs as part of the process of making more spiders (spider makes web, spider catches food, spider converts energy from food to make more spiders). The spider’s web is a passive thing, and the main thing it “wants” to do is fall apart: thus we see spiders constantly repairing their webs and making new ones.

There being a cycle there — spiders and webs locked in an inter-dependent relationship — we could poetically say that “the universe here ‘wants’ to make spiders and webs” but that hardly says anything more than “spiders and webs exist”.

Technology, like spiders’ webs, seems mostly to “want” to fall apart and stop functioning. A running internal combustion engine, for example, “wants” to keep running for a while, so long as fuel is coming in, but it “wants” to use that fuel up, with wear and tear, then stop, then rust in place. The engine has no tendency — no “want” — to drive to a refueling station or, for that matter, to go exploring for oil. The engine, as any mechanic can tell you, wants to be rock.

Rocks don’t have sex organs.

He makes a kind of comparison of his way of talking about the “wants” of technology to Dawkins’ “selfish gene” concept, and yet, if we take the most defensible version of the selfish gene concept, it’s nothing like technology. A gene is a partial-control element within a metabolism such that — and this gets to the key ethical point — the metabolism is constrained by physical law to honor the gene’s control. Take a simple single-celled asexual life form, for example: in certain environments the cell will reproduce. There is no choice. The fundamental laws of physics assure it. It “wants” to reproduce in exactly the same sense that the universe “wants” the cell to be there in the first place: the existence of the cell is real (the universe “wants” precisely “what exists”), and an aspect of what exists is an unavoidable, fated reproduction of the cell (in a suitable environment).

Technology is quite different. Its reproduction is a question of choice. We can only start calling the reproduction of technology a natural “want” of technology if we assume that humans have no choice over the matter. And that’s why his position is one of ethical nihilism: Why, there’s nothing we can do!

One can see how such nihilism would be attractive in the context of a trade press like Radar: the last thing you want to hear when you are trying to make money on the margins of cheerleading an industry is “all of our techno-niche here is a big mistake and we should kill it.” But you are in no danger at all of having to contemplate such a question if you just assert from the beginning, perhaps helped along by a little grey-beard sophistry, that, well, the technology is just going to do what it will do because it “wants” to, so let’s make the best of it. Ethical nihilism.

I found more interesting his earlier comments (prompted by your interview questions) about “setting defaults”. He looks at examples like “opt-out vs. opt-in” or the rules that give structure to the editing process of Wikipedia, and expresses fascination with how those defaults and that structure “nudge the behavior” of crowds.

That’s not sophistry, only trite, but it does refer to empirical facts. It refers to “what the universe wants”. People know this well from time-and-motion studies, from marketing studies, from studies of the human factors of user interfaces, and so on.

Yet it’s an uncomfortable area to get into here, at “Web 2.0” central, precisely because to the extent we take it seriously it isn’t ethical nihilism but ethical problematics. Perhaps it is wrong not to be deeply critical of, say, Wikipedia, precisely because of the power imbalances that are reinforced by the structure given to it by its elites. Perhaps spinning flattering yarns about its evolution is wrong because it encourages powerful people to invest huge sums in trying to make more of the same, or similar.

But there, we are no longer talking about the “wants” of technology at all but rather of the wants and relations among people. When we start doing that we have to start recognizing that much of technology is in fact employed as human-on-human weaponry and that the dynamics of its creation, promotion, spread, and acceptance are all-too-human questions.

But if we go down that path we might start hesitating. We might try to resist and substitute. We might try to extinguish a given technology and replace it on ethical grounds.

So much simpler if we can just suck up to power and explain away their actions by saying “Why, they’re powerless! It’s just what technology ‘wants’ to do!”

@Thomas –
thanks for the thoughtful reply. I will let Kevin speak for himself, but I took his line of inquiry into “what technology wants” as just that: a line of inquiry, a device that helps create new possible answers about the meaning of technology and our relationship to it. From what I know, I do not think he is a technological determinist (see Jacques Ellul for that point of view).

Regarding the “defaults” discussion – it is an interesting conundrum since, for me personally, a healthy society needs to have structures in place to protect citizenry (both majorities and minorities) and ensure a certain balance in power relations through society. A lack of structural defaults and norms leads to Abu Ghraib. Wikipedia is trying to put structures in place that “nudge” us towards a more stable, accurate version of an encyclopedia… the surprise lay in the very fact that, under the right conditions, a diverse group of people could create an encyclopedia. I am not sure I understand your objections on that score or what the “power imbalance” is, etc.
J

I don’t accuse Kelly of being a technological determinist and I’m not sure how you got that.

Here, let’s look at it a little bit differently:

Let’s suppose that you and I were to agree on some checklist of properties “technology” must have if we’re to count it as the “7th kingdom” in a biological taxonomy. So, a checklist that, if it’s all marked true, tells us technology is a life form. I think we could do that well enough for the sake of discussion — we wouldn’t need to come up with the ultimate, perfect definition of life.

And, let’s suppose we find excellent empirical support: looking at technological artifacts and history we see “dynamics” that look like life. One by one we check off all the items on the checklist. I’ll stipulate that, although I don’t believe it’s likely. More likely, I think, is that when we went looking for evidence we’d come up with nothing more than a handful of “just so” stories — but, let’s pretend.

What happens next? Well, people will start to try to take advantage of this. For example, perhaps the best ways to influence “technology”, if it is life, most resemble “cultivation” and “tending”. That is, people will start making their decisions about how to behave towards technology based on this “technology is a kind of life” model.

We might even, and I’ll stipulate again, find that the more people start thinking about technology as a form of life, and acting accordingly, the more and more confirmation we get that technology is, indeed, a form of life.

Suppose all that happens.

We forgot an important ethical consideration along the way.

Surely if technology “looks like, walks like, and quacks like” a life form, that is only because of the choices humans make about how to develop and deploy technology.

Is it in any way necessary that technology be life-like in that way? Only if humans don’t have any meaningful choice about how to develop and deploy it.

Conversely, let’s take an opposing position. Forget all the stipulations above. Forget about “technology is life”. Here’s an opposing view:

Technology is not the product of a biological evolution but rather the artifact of a series of significant choices made by people.

From that perspective, if we want to know why the technology that arrives tomorrow is going to arrive, we don’t appeal to some impersonal evolutionary force but, rather, we look into why the people who built the technology made the choices they made.

Wikipedia was not created by random mutation, variation, and natural selection. Wikipedia was created by a specific set of programmers who made distinct and motivated choices for every line of code they wrote. The form and function of Wikipedia, in this view, is an ethical responsibility of those programmers — not the consequence of an impersonal evolutionary process.

Most importantly: if this second view is true, that technology is an artifact of choices, then could we not choose for technology to *not* satisfy the checklist for the 7th biological kingdom? That is, couldn’t we choose to create and deploy technology in such a way that it is not life? Might that not be worth considering, and therefore isn’t our choice here an ethical choice?

On the other topic, you wrote:

Regarding the “defaults” discussion – it is an interesting conundrum since, for me personally, a healthy society needs to have structures in place to protect citizenry (both majorities and minorities) and ensure a certain balance in power relations through society. A lack of structural defaults and norms leads to Abu Ghraib.

I’m not sure where you are coming from. It seems to me that Abu Ghraib was not created by a “lack of structural defaults” but by an unethical structure. It’s like the Stanford Prison Experiment: the human-designed structure of the arrangement led, quite unsurprisingly, to that particular outcome. A more thoughtful design would have allowed fewer such mistakes to be made and fewer such crimes to be committed.

Wikipedia is trying to put structures in place that “nudge” us towards a more stable, accurate version of an encyclopedia… the surprise lay in the very fact that, under the right conditions, a diverse group of people could create an encyclopedia. I am not sure I understand your objections on that score or what the “power imbalance” is, etc.

What I find interesting about Wikipedia is how a small group of people came to possess the exclusive power to “nudge” us in such ways — and how they guard and manage that power now.

It was clear when Wikipedia started, and it still is, that there is nothing technologically essential about its centralization. There is no need for a singular “stable, accurate version”.

An analogy could be drawn to GNU/Linux operating system distributions. Many separate projects (analogous to Wikipedia articles or groups of articles) operate independently, often competing side by side in a given category. Downstream of that, nearly anyone can cull from those projects a “stable, accurate, complete distribution.”

By form and function Wikipedia has resisted such decentralization, thus maintaining a high barrier to entry for would-be competitors (as we’ve seen through empirical evidence).

That centralization is directly responsible for the peculiar politics and social dynamics that characterize controversies within the Wikipedia community.

And that form and function creates the “power imbalance” I mentioned. First, by maintaining that barrier to entry around a centralized implementation, a small group of programmers gains exclusive control over the form and function of the effort. Second, the particular form and function they chose creates positions of non-democratic authority over the administration of the effort.

If you believe that the “big news” about Wikipedia is that it is the natural product of evolution, or if you want to focus entirely on the “upside” of how it exemplifies Web 2.0, you are distracting attention away from its rather unnatural, economically skewed, power-play technological form.

-t

Falafulu Fisi

Josh said… “what technology wants”

Josh, you make it out as if technology has a conscious mind of its own. This is wordsmithing/word-messaging/word-obfuscating at its best. It is human society that wants technology, and not the other way round.

I agree with Thomas Lord’s comment above. I have frequently come across, on the net, some nobody/tech-wannabe somewhere trying to foresee the future of technological evolution, where these individuals have no solid background in technology research or anything like that. Such people are attention seekers, in my view.

If you want to quote/interview a real technology futurist out there, then perhaps you can invite this guy (Prof. Anton Zeilinger) to give you a brief, perhaps half a page, about the impending arrival of Quantum Computing (QC). This will change everything in technology. It won’t be Web 2.0 or Web-xxx that will revolutionise computing technology but QC, when it arrives.

@ Falafulu — a couple of clarifications. I was quoting Kevin Kelly in looking at “what technology wants”, so you begin with a mistake. Next, I think you missed the point of “what technology wants”. This is a device (emphasis on device) that helps in a philosophical inquiry. Dawkins uses it in looking at genes, Michael Pollan in plants, etc. Braudel and Manuel DeLanda similarly take a counterintuitive point of view by looking at material flows (while explicitly exempting human agency) in order to see new possibilities in reading history. Surely you are not arguing that philosophical projects may only be guided by a narrow set of permissible inquiries.
Lastly, I am not sure who you are calling “some no-body/tech-wannabe somewhere trying to foresee the future of technology evolutions, where these individuals have no solid background in technology research or something like that,” and I am not sure why you were moved to rudeness in the course of disagreement. The link to Zeilinger is appreciated.