Ethics of the Future is an exploration of how our morals and ethics may change with new developments in technology.
In each post, I explore a potential scenario and its implications, and sometimes write a short sci-fi-esque story illustrating how it might play out.


Thursday, 19 August 2010

Last week, I had a long, drunken-pub-discussion-type conversation over Twitter (as far as was possible in smatterings of 140 characters!) with BBC colleagues Paul and Michael. It all started when Paul tweeted some quotes from Marshall McLuhan.

This got us talking about the morality of technology. Michael was of the opinion that yes, technology could be intrinsically evil when used to do evil. I was of the view that like Wallace's 'Techno Trousers' in Nick Park's brilliant animated short Wallace & Gromit in The Wrong Trousers, technology was completely neutral. Gromit used it to paint the ceiling in his house, the Penguin used it to rob a bank. Only the good/evil intentions of the user decided how the Techno Trousers were used – the trousers were merely an inanimate, amoral tool.

“How about a nuclear weapon, or a gas chamber?” Michael challenged me.
“Nuclear/gas pressure technology can be/are used for good. Only evil intentions created weapons/killing apparatus.” I replied.

For me, it boiled down to an ontological question: can something be good or evil without free will? My answer was no. Until technology attains true artificial intelligence with the ability to make independent moral judgments (Hello, Skynet!), I didn’t think we could ascribe moral responsibility to it. But Michael then said (over several tweets), “I think technology is always an attempt to extend human capabilities. Which capabilities we choose to extend & therefore which technology choices we pursue reflects deeply on human motivations and motivations reflect ethics. So probably true that a technology cannot intrinsically be good / bad it's soaked in enough human desire that it's imbued with good / bad. Or technologists shouldn't attempt to sidestep ethics with the usual 'i just made this, i don't choose how it's used' malarkey.” And I must say, this made sense to me.

The idea of technology being an extension of human capabilities is not new. McLuhan explored the idea in 1964. As early as 1877, the German philosopher Ernst Kapp published his “organ projection theory,” which stated that technology was in essence an extension of the human (organic) corporeal apparatus. He argued that the tools or technical apparatus that humans create “operate as unconscious projections of the sensorimotor apparatus, and it is through various kinds of technological extensions and augmentations of gestures and organs that human beings constantly model, replicate and recreate themselves in the course of evolution.” So a camera could be an extension of the human eye apparatus, a microphone could be an extension of the vocal apparatus, etc., and the technological environment in turn shapes us.

Indeed, scientists have claimed that “the brain works like the internet.” But mightn’t the internet mimic the brain because it was built by the brain to extend and augment the brain in all sorts of ways? McLuhan famously pointed out that “the medium is the message.” According to him, it is not the content but the medium itself that defines what we do/say/think. When the internet was built in the 1960s, it was conceived as a way to pass information between disjointed, limited network points. Fifty years later, the way the web has developed goes far beyond anything that the original creators of the internet imagined, in the way it enables us to create content, share and interact with each other. The original creators (many different groups over the years) also couldn’t have known the extent to which it gives us the ability to steal and snoop on each other, and the way it's made some of us far less engaged in real-life friendships and relationships.

We shape technology to extend our capabilities, but technology in turn shapes us and the way we live and communicate. It’s a symbiotic relationship. To borrow Michael Smethurst’s words, the web has developed into the ethical layer on top of the internet, which is, essentially, just some pipes. (Read his excellent full post here.)

So, to use the internet as one example of widespread, indispensable technology in today's world, is the internet a good thing? Undeniably. Can it be used for evil purposes? Absolutely. Should the creators of the internet have not made the internet because it might be used for evil? I don’t think so. But to say that “I just made this thing, it’s not up to me how it’s used” would be a cop-out. Not having a moral stance is a moral stance in itself. Inspiringly, the ‘Father of the Web’ Tim Berners-Lee continuously advocates openness, neutrality and universal access. Likewise, any programmer, technologist, engineer or scientist has the moral responsibility to continue to monitor their creations and to try their best to ensure that they continue to be used for good. That means looking not just at the technology/website/application itself, but at its overall effect on society and how it’s changing people’s behaviour.

And how about users? We also have a responsibility to use technology for good. This may be subtler than just actively not being evil (e.g. stealing, or using it to spread hatred). We have a responsibility to think about how our use of technology is affecting other areas of our lives and other people's lives. Are we spending enough face-time with our families, friends, lovers? Could we be doing something to brighten someone else’s life, or to improve our bodies and minds, instead of mindlessly surfing social news feeds? Too much of a good thing can be a bad thing, and technology is very addictive. According to a recent New York Times article, as we get more addicted, our brains use up more energy anticipating the next hit and responding to “news” with ever-increasing urgency, and it leaves our brains with less "room" for the tasks at hand. Just how urgent is that email, Facebook update or tweet? I love technology, social networking and online sharing as much as the next person. But it is our responsibility to make sure that we do not let technology shape our lives, and the lives of those around us, for the worse.

To end, today’s story is inspired by a real news story. Hope you enjoy it:

It was a grey rainy day in April that I first noticed a little girl sitting on top of the steps on the way to my flat. She looked to be about 5 years old. Her hair was a dank bird’s nest and her clothes were tatty and wet. Her frail and thin body was shivering.

“Hey there,” I said to her with a smile.

She didn’t reply but simply stared at me with big hungry eyes.

“Do you live here, in this building?” I tried again.

She didn’t say anything.

My husband looked surprised when I walked into the flat holding her hand but after a brief explanation, soon started chatting to her in the easy way he has with kids.

“Hello, I’m Jon. What’s your name?”

She mumbled something, barely audible.

“What did you say?” Jon cupped his ear and put it right next to her mouth.

“She’s hungry,” he said.

When we placed a plate of rice and veggies in front of her, she tore into it with her bare hands as if she hadn’t eaten for days. After she’d shoved the last grains into her mouth, she lay down right next to the table and immediately fell asleep. As Jon and I looked at the small heap by the table and saw her chest heave up and down in contented sleep, we knew we had to keep her.

Within the next few days and weeks, we began to learn more and more about her. Her name was Anima. She was actually 7 years old, although she looked a good two years younger. At first, she hardly spoke. It was heart-breaking to see her so awkwardly self-conscious at an age when she should have been blissfully unaware of any real sorrow. As she began to trust us more, she began to reveal bits of precious information about her past life.

She told us that she had lived in a house in a forest with an old woman whom she called Nanna. It appeared that she had been abandoned by her parents when she was a baby and was brought up by this old woman, who had taken pity on her. She had never gone to school. Where they had lived, or how she came to be sitting on the steps of our building, was still a mystery. We felt like we were on a treasure hunt - every little nugget of information she revealed about herself, every ounce of weight she gained, every tinkle of laughter, gave new meaning to our empty lives. I dressed her in pretty flowery dresses and spent hours brushing her hair until it shone. She learned to use cutlery, to read and write, and I even began to teach her the piano. I’ll never forget the day she finished playing ‘Für Elise’, turned around and said, “I love you, mummy.”

One day, as I was putting Anima to bed, stroking her hair as her big, now-shiny eyes began to droop to the sound of my bedtime story, I became aware of a strange noise in the house. I realised after a while that it wasn’t the noise but the lack of a noise that was odd to me. I got up from my computer and went to look around the flat. It was cold and dark – I had forgotten to put the heating or the lights on while I was engrossed with Anima. Then I realised that the baby wasn’t crying.

I almost gagged as the stench of unchanged nappies hit my nose when I entered the baby’s room.

She was in her cot, looking painfully emaciated. Her clothes were stained and tatty. Dried tear stains and snot covered her little face. At least she wasn’t crying now.

I reached out and touched her cheek. It was cold.

-------------------------------------------------------------------

My story is based on this news story of a South Korean couple whose baby died while they were obsessed with nurturing a virtual girl called Anima. It's an extreme case, but are we unwittingly doing that to a lesser extent with our loved ones while we live virtual lives? And how about the people who design and market those games to be as addictive as possible?

You can see the full conversation between Michael and me here (Paul's tweets are not included, as they are protected).

4 comments:

Interesting post. Particularly as my original reason for posting the McLuhan quote was more to make a point about moral panics. The fact that it sparked off a conversation about ethics is great, too ;-)

I've only skimmed this — I'll read it properly later — but the “brain works like the Internet” thing is a silly distraction more than anything else. I can't believe people actually recite that in all earnestness.

The brain works like the Internet purely in the sense that it's a mass of seemingly-random interconnected nodes of some kind. The Web, of course, has even less structure. There are lots of things in the universe which work along these lines if you visualise them in comparable terms — in part through necessity, but the brain and the Internet (or the Web, depending upon who's drawing the comparison) are singled out because they perform what I can only really describe as knowledge processing.

Of course, the Internet doesn't do this on its own. The Internet does this because of the brains who make use of it. In these terms, the brain vs Internet thing boils down to the fact that the Internet is an aggregate of lots and lots and lots of brains. And so, if you take the people using the Internet out of the equation, brains and the Internet don't share any real properties beyond basic structure (which even then is loosely defined). Boiled down, then, this equates to “big things are made up of smaller things joined together”. This is not what you might call an earth-shattering observation, at least not in the last fifty years :)

I do tend to agree with Mo tho. The whole internet like the brain thing is true but always reported as if some subconscious urge made it so. But the brain is a set of interconnected nodes because that's the best way to ensure that if one part is damaged the rest can still function. Same with the internet. In both cases it's really a question of efficiency. The only difference is the brain resulted from evolutionary efficiency and the internet resulted from design efficiency...

Aside from that I *think* the only point I was trying to make (altho it's a while back so i might just have forgotten) is that technology reflects our desires and we shouldn't think of it as a shiny edifice of reason separated from morality. Sometimes technologists get carried away by the shininess of the technology and tend to skip the ethical questions. To me that just feels like an abdication of responsibility...

I think as people who make things it's always important to ask "what good things could come from this" and conversely "what bad things could come from this". And I think that's particularly true for the web because the interconnectedness of what we make causes ripples that we're only just starting to understand
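[Editor's aside: the redundancy point above — that a mesh of interconnected nodes keeps working when one node fails, while a centralised layout has a single point of failure — can be sketched in a few lines of Python. The graphs and node names here are hypothetical toy examples, not anything from the original discussion:]

```python
from collections import deque

def connected(adj, removed=None):
    """Check (via breadth-first search) whether all surviving nodes
    can still reach each other after `removed` is taken offline."""
    nodes = [n for n in adj if n != removed]
    if not nodes:
        return True
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        for nbr in adj[queue.popleft()]:
            if nbr != removed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(nodes)

# A mesh: every node has several routes to every other (internet-like).
mesh = {
    "a": ["b", "c", "d"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["a", "b", "c"],
}

# A star: everything routes through one hub (single point of failure).
star = {
    "hub": ["a", "b", "c"],
    "a": ["hub"],
    "b": ["hub"],
    "c": ["hub"],
}

print(connected(mesh, removed="a"))    # True: the mesh survives losing a node
print(connected(star, removed="hub"))  # False: the star falls apart without its hub
```

The same "evolutionary vs design efficiency" contrast applies: both topologies move information, but only the redundant one degrades gracefully.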

@Mo and Michael, Your point about the internet having been made to share information with maximum efficiency, rather than as an imitation of the brain, is a valid one. I guess I was trying to make the point that anything we make or do will reflect our own capabilities, whether that is the way we connect/absorb information or our imagination and experience. Not subconsciously but necessarily. But I agree that the claim that the internet works like the brain is a tenuous one. I only included it in the post because that article came out on the day we were discussing the concept of 'technology as an extension of ourselves'.

"The interconnectedness of what we make causes ripples that we're only just starting to understand" - I think this is very true. It's an area I'm fascinated by and will explore further in the future.

@Paul - thanks for inadvertently stimulating a very interesting debate - I look forward to more in the future :)