SHADES OF GREY: An essay by Extropia DaSilva

INTRODUCTION.

Question: what connects Alan Watts, Richard Dawkins and Henrik Bennetsen? The answer is that all three have written about the human need to make distinctions and separate things into classes. In ‘The Two Hands Of God’, Watts wrote, ‘from the standpoint of thought, the all-important question is ever, “is it this, or that?”. By answering such questions, we describe and explain the world’. Richard Dawkins pointed out that ‘many of our legal and ethical principles depend on the separation between Homo sapiens and all other species’.

But look more closely at what these people wrote. While all three identified various ways in which we draw distinctions, they also argued that reality is often not like that. Alan Watts cautioned, ‘in nature herself there are no classes… The importance of a box for thought is that the inside is different to the outside. But in nature the walls of a box are what the inside and the outside have in common’. Richard Dawkins, meanwhile, explained how we can only talk about ‘species’ because so many forms of life have gone extinct and fossil records are so incomplete. ‘People and chimpanzees are linked via a continuous chain of intermediates and a shared ancestor… In a world of perfect and complete information, fossil information as well as recent, discrete names for animals would be impossible’. And while Bennetsen did give his essay the title ‘Augmentation versus Immersion’ and various other bloggers have referenced it when writing about clashes between incompatible beliefs in SL, it seems to have been forgotten that he wrote, ‘I view these two philosophies placed at opposite ends of a scale. Black and white, if you will, with plenty of grey scales in between’.

I think this remark applies to many distinctions, such as ‘natural’/‘artificial’, ‘actual’/‘virtual’ and ‘person’/‘machine’. These distinctions, arguably, are no more grounded in reality than the separation of life forms into species. Furthermore, while the illusion that humans are a distinct species separate from all other animals was brought about by past events (extinctions, and the destruction of fossils via geological activity), one can dimly glimpse how current research and development in genetics, robotics, information technology and nanotechnology might result in a future where it no longer makes sense to distinguish between the natural and the artificial, or the actual and the virtual. The consequences of this will go much further than making all those essays about ‘immersionism versus augmentationism’ seem nonsensical to future generations; they also suggest that a technological singularity could happen without anybody noticing.

To understand the reasoning behind both of those suggestions, we need to take a wider view than just the ongoing creation of Second Life. It is, after all, a virtual world existing within a much larger technological system, namely the Web. As we progress through the 21st Century, what is the Web becoming?

THE GOSPEL ACCORDING TO GIBSON.

Transhumanists and related groups tend to imagine that the arrival of the Singularity will be unmistakable, pretty much the Book of Revelation rewritten for people who trust in the word of William Gibson, rather than St. John the Divine. Is this necessarily the case? I would argue that, if the Singularity arrives on the back of ‘Internet AI’, the transition to a post-human reality could be so subtle, most people won’t notice.

The transition from Internet to Omninet (or global brain, or Earthweb, or Metaverse; choose your favourite buzzword) involves at least three trends that might conspire to push technology past the Singularity without us humans noticing. The first trend, networking embedded computers using extreme-bandwidth telecommunications, will make the technological infrastructure underlying the Singularity invisible, thanks to its utter ubiquity. Generally speaking, we only notice technology when it fails us, and it seems to me that, before we can realistically entertain thoughts of Godlike AI, we would first have to establish vast ecologies of ‘narrow’ AIs that manage the technological infrastructure with silent efficiency.

The second trend is the growing collaboration between the ‘human-computing layer’ of people using the Web and the search software, knowledge databases and so on that allow us to share insights, work with increasingly large and diverse amounts of information, and bring together hitherto unrelated interests. Vinge noted that ‘every time our ability to access information and communicate it to others is improved, in some sense we have achieved an increase over natural intelligence’. The question this observation provokes is: can we really pinpoint the moment when our augmented ability to access information and collaborate on ideas produces knowledge and technology that belongs in the post-human category? Finally, if the Internet really is due to become a free-thinking entity, loosely analogous to the ‘organism’ of the ant colony, would we be any more likely to be aware of its deep thoughts than an ant is to appreciate the decentralized, emergent intelligence of its society?

Looking at the first trend, there’s little doubt that we are rapidly approaching an era where the scale of information technology grows beyond a human’s capacity to comprehend. The computers that make up the US TeraGrid deliver 20 trillion operations per second of tightly integrated supercomputing power and a storage capacity of 1,000 trillion bytes of data, all connected to a network that transmits 40 billion bits/sec. What’s more, it’s designed to grow into a system with a thousand times as much power, taking us from the ‘tera’ prefix into ‘peta’ territory (‘peta’ meaning ‘a thousand trillion’, a number too large to imagine). Then there is the prospect of extreme-bandwidth communication. ‘Wavelength Division Multiplexing’ allows the bandwidth of optical fibre to be divided into many separate colours (wavelengths, in other words), so that a single fibre carries around 96 lasers, each with a capacity of 40 billion bits/sec. It’s also possible to design cables that pack in around 600 strands of optical fibre, for a total of more than a thousand trillion bits per second. Again, an awesome amount of information is being transmitted.
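The cable arithmetic quoted above is easy to check. Here is a quick back-of-the-envelope sketch; the figures are the ones given in the paragraph, while the variable names are mine:

```python
# Back-of-the-envelope check of the fibre-capacity figures above.

GBPS = 1e9  # bits per second in one gigabit

lasers_per_fibre = 96            # WDM wavelengths ("colours") per strand
capacity_per_laser = 40 * GBPS   # 40 billion bits/sec per wavelength
strands_per_cable = 600          # optical fibres packed into one cable

per_fibre = lasers_per_fibre * capacity_per_laser  # capacity of one strand
per_cable = strands_per_cable * per_fibre          # capacity of the whole cable

print(f"One fibre: {per_fibre / 1e12:.2f} Tbit/s")
print(f"One cable: {per_cable / 1e15:.2f} Pbit/s")
```

Ninety-six 40-gigabit wavelengths per strand works out to 3.84 terabits/sec per fibre; 600 such strands come to roughly 2.3 petabits/sec, and a petabit is a thousand trillion bits, so the cable does indeed carry ‘more than a thousand trillion bits per second’.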

These two examples represent two of the four overlapping revolutions that are occurring thanks to the evolution of IT. The first of these, the growth of dumb computing, is referred to by James Martin as ‘the overthrow of matter because it stores such a vast number of bits and logic in such a small space’. It was not so long ago that futurists were making wild claims about a future web with 15 terabytes of content. That now looks quaint next to Google’s current total database, measured in hundreds of petabytes, which itself amounts to less than one row of a data centre.

The second revolution is the ‘overthrow of distance’, a result of fibre-optic networking and wireless communication. These revolutions will ultimately converge on a ‘global computer’ that embraces devices spanning scales from the largest to the smallest. Data centres sprawl across acres of land, acting as huge centralized computers comprising tens of thousands of servers, while optical networks transport their data over vast distances without degradation. Already, many of the duties once delegated to the CPU in your PC can be performed by web-based applications. Current research, from lasers inscribed directly onto chips to the aforementioned all-optical networks, will radically decentralize our computing environment, as the Omninet embraces handheld communicators and receives data from ubiquitous sensors no larger than specks of dust. As George Gilder put it, ‘(the Omninet) will link to trillions of sensors around the globe, giving it constant knowledge of the physical state of the world’.

The human species has two abilities that I marvel at. The first is that, collectively, we are able to bring such radical technology out of vapourware, into R+D labs, and eventually weave it into the fabric of everyday life. The second is that, as individuals, we become accustomed to such technology, to the extent that it becomes almost completely unremarkable, as natural as the air we breathe. This latter trait may play a part in ensuring the Singularity happens without us noticing. It’s commonly believed that its coming will be heralded by a cornucopia of wild technology entering our lives, and yet today technologies beyond the imagination of our predecessors are commonplace. It can make for amusing reading to look back on the scepticism levelled at technologies we take for granted. A legal document from 1913 had this to say about predictions made by Lee De Forest, designer of the triode, a vacuum tube that made radio possible: ‘De Forest has said… that it would be possible to transmit the human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public… has been persuaded to purchase stock in this company’.

To get an idea of just how much attitudes have changed, consider the research showing that users of search engines are only satisfied with results delivered within a twentieth of a second; we grow impatient if made to wait much longer. In 1913, the belief that the human voice could be transmitted across vast distances was laughed off as ‘absurd’. In 2007, we have what amounts to a computer-on-a-planet, allowing not only global voice communication but near-instantaneous access to pretty much all knowledge, decentralized communities sharing text, images, music and video, and even entire online worlds where you can explore every possible facet of self. Our modern society is clearly filled with technology beyond the ‘absurdity’ of transatlantic voice communication, so why are we not in a profound state of future shock?

Well, recall the difference between ‘visible’ and ‘invisible’ innovations. Radio waves transmitting voice across the ocean almost instantaneously, actually TALKING to someone on the other side of the world as if they were IN THE SAME ROOM, was truly unprecedented. On the other hand, chatting on a mobile phone or online via Skype are just variations on established innovations. In the future, we may have homes fitted with white-light LEDs replacing incandescent light bulbs. These would provide low-energy light and, unlike existing light sources, could be readily adapted for optical wireless broadband internet access. Again, I could cite the advantages that this would have over current wi-fi and other radio-wave-based wireless. I could also play devil’s advocate and cite all the technical challenges that must be overcome before it is practical. But how much of this will be noticed by the user when they connect wirelessly to the web, as many of us do now? There is nothing here that is startlingly innovative, not any more. It’s now utterly unremarkable that we can flood our homes with light at the flick of a switch, that we have electricity whenever we need it, that the airwaves are filled with radio, TV and telecommunication. It’s all so thoroughly woven into the fabric of our society that it is invisible. We only really appreciate how much we depend upon it on those rare occasions when we hit the ‘on’ button and, thanks to an equipment or power failure, nothing happens.

Computers, though, are still not quite as ‘invisible’ as the TV set is, and that’s because they are not yet ‘plug and play’. I think most people switch on the computer, half expecting it to not boot, fail to connect to the Internet, drop their work down a metaphorical black hole and so on. But it’s certainly the case that modern PCs are vastly easier to use than those behemoth ‘mini’ computers of decades ago, despite the fact that, technically speaking, they pack in orders-of-magnitude more power and complexity. Miniaturization and ease-of-use are both factors in the personalization of computing, and technophiles have plenty of metaphors to describe the natural end-point. Wired’s George Johnson wrote, ‘today’s metaphor is the network… (it) will fill out into a fabric and then… into an all pervasive presence’. Randy Katz explained, ‘the sea is a particularly poignant metaphor, as it interconnects much of the world. We envision fluid information systems that are everywhere and always there’.

In other words, a time when the Internet becomes the Omninet, cyberspace merges with real physical space and simply… vanishes, having become so completely woven into the fabric of society and individual lives that we forget it is there. Most people, I think, believe that there is the natural world, consisting of all that is biological, and then there is the artificial world, to which belong products of technology. These two worlds are distinct… or at least they are until you give it some thought. When we use snares, or nets, or bolas, we consider these to be tools and therefore products of the artificial world of technology. But when spiders use their silk to construct snares, or to wield as a gladiator wields a net, or to make something so like a bolas that one particular arachnid is known as the ‘bolas spider’, in which category do these functional items belong? I suppose a difference between spiders’ various webs and our analogous tools is that the silk is produced by the spider itself, and so could be considered to be just as much a part of its body as its legs or eyes. But other animals make use of discarded items they stumble across, like hermit crabs, which crawl into discarded shells. This is simply re-using the shell’s original ‘purpose’, of course, but beavers fell trees to use as raw building materials for their dams and lodges. When we build dams or erect skyscrapers, these feats of engineering seem incongruous in a way a beaver’s dam or termite mound is not. Yet in what sense are these not engineering/architectural projects as well?