Creating Androids

It has often been noted that over the past 100 years or so, we’ve developed technology to a degree never before seen, greatly transforming our societies as a result. Indeed, unlike the advancements of previous centuries, the creation and widespread proliferation of computers, smartphones, and various online services in particular has eliminated some of the space between us all, drastically changing both how we live our lives and how the world at large operates. Yet in spite of all that, our newest technologies have still served only to assist us, offering us much while requiring only a few lifestyle changes in return. Now in the 21st century, though, as we advance toward the creation of virtual brains and artificial life, we’re approaching a technological transformation that, should we choose to fully pursue it, will impact us even more profoundly than that of the previous century. Indeed, strong and steady progress is being made toward the creation both of conscious software and of android bodies, and since there’s no reason to think that an android with a sufficiently sophisticated mechanical brain won’t possess awareness, it seems that within the coming decades, for once, “our technology” will compete with us and challenge us. Beyond some early years when artificial life is still underdeveloped, then, its creation will cast us into great ethical and rights struggles, and present us with at least one other sentient species that we’ll have to learn to live with; a species that could even render us functionally obsolete, possibly forcing us to transition to mechanical bodies ourselves–if that’s even possible by then–to keep pace. Over the next few decades, then, we will have to weigh our progress toward artificial life carefully, for however we proceed, unless we simply avoid creating it at human-level consciousness, it will impact us greatly, marking the greatest transformation of technology and society ever seen.

A staple of science fiction since the 19th century, artificial intelligence has been pursued scientifically for as long as it’s been possible to, and now, we may finally be close to developing it. Indeed, in his Scientific American article “The Human Brain Project,” Henry Markram of the Blue Brain Project explains his ongoing research into virtual brains, and details his plans for the future with the proposed Human Brain Project (a joint initiative of about 130 universities from around the world). Despite the challenges in programming and running a virtual brain, he believes that such software can indeed be programmed, and that the next generation of supercomputers will enable it to run (see p. 52). That said, his approach is actually pretty simple–in theory at least, if not in practice. Rather than measure and reproduce all 100 trillion or so synapses of the human brain, he’s taking a more biological approach, seeking to create virtual brains by programming rules similar to those that enable biological brains to develop (see pp. 52 & 53). Indeed, he explains that with such an indirect approach, thorough knowledge of the brain’s workings isn’t necessary: because new information can be repeatedly added to the program, a virtual brain can be gradually assembled and refined based on periodic testing against expected performance (see pp. 53/54). The biggest challenges, then, seem simply to be gathering the huge amounts of data necessary to accomplish anything and, of course, refining and maintaining the model. Already, as part of the Blue Brain Project, Markram and his associates had successfully programmed a key part of the rodent brain by 2008, and by 2014 or so, they envision the completion of a rodent brain in its entirety (see p. 53, box “Power of the Exaflop: More Computer = More Brain”).
Scientifically, then, the only current barrier that Markram sees to creating a virtual human brain is that today’s supercomputers aren’t quite powerful enough to run one; but since that barrier will disappear with the arrival of the next generation of supercomputers within the next 10 years or so (see pp. 55 & 52), it’s hardly any barrier at all. In fact, ironically, the only real catch seems to be that the Human Brain Project needs funding, naturally, and Markram currently envisions that funding coming from a European Union grant (see p. 55), a prize for which five other initiatives are competing, but which only two can win. Nonetheless, even though lack of funding can delay or even kill a scientific endeavor, I imagine that somehow or another, the Human Brain Project will get its funding; the Blue Brain Project has certainly proceeded all right. Hence, it appears that we may succeed in creating a virtual human brain within a mere 10 to 15 years from now–Markram himself sees it happening by 2020 or so (see p. 52 & p. 53, box “Power of the Exaflop”)–far sooner, perhaps, than any of us have realized.
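Markram’s indirect approach (grow a model from developmental rules, then repeatedly test it against expected performance and refine it as new data arrives) can be pictured as a simple loop. The toy sketch below is purely my own illustration, not Blue Brain Project code; the rules and the test metric are invented stand-ins, but the build-test-refine cycle is the one he describes:

```python
# Toy sketch of a rule-based build-test-refine loop (my own illustration;
# the "developmental rules" and the performance metric are invented stand-ins).

def grow(rules, n_units):
    """Assemble a network by applying simple 'developmental' rules,
    rather than by copying every measured connection."""
    network = {i: set() for i in range(n_units)}
    for rule in rules:
        rule(network)
    return network

def ring_rule(network):
    """Stand-in rule: wire each unit to its immediate neighbor."""
    n = len(network)
    for i in network:
        network[i].add((i + 1) % n)

def meets_expectation(network, expected_degree):
    """Stand-in test against 'expected performance': average connectivity."""
    avg = sum(len(links) for links in network.values()) / len(network)
    return avg >= expected_degree

# Build the model, test it, and refine it with a further rule if it falls short.
net = grow([ring_rule], n_units=8)
if not meets_expectation(net, expected_degree=2):
    def skip_rule(network):  # refinement: add longer-range links
        n = len(network)
        for i in network:
            network[i].add((i + 2) % n)
    skip_rule(net)
assert meets_expectation(net, expected_degree=2)
```

The point of the sketch is that the loop never requires complete knowledge of the finished network up front; rules and refinements can be added piecemeal as new measurements come in, which is what makes the indirect approach tractable.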

Once we’ve recreated the human brain in software and on hardware, of course, intelligent artificial life likely won’t be far behind, because we’ll have finally created the one immensely-difficult-to-create component of such life that we currently lack. Indeed, the only steps left to artificial life as commonly envisioned will be to perfect mechanical bodies that look and function like human ones (something we’ve long had a start on), and to shrink the hardware that supports virtual brains–mechanical brains–to something that can fit within those bodies. Interestingly, in his article “The Human Brain Project,” Markram himself never discusses this likely outcome; he instead envisions virtual brains being used to study brain disorders and to inspire far more efficient computer designs (see pp. 55 & 53/54). Yet it seems obvious that this outcome will transpire unless, of course, regulations are put in place to prohibit it. After all, intelligent, humanoid robots have been the dream all along, tempered only by an underdeveloped concern for what the creation of sentient mechanical life might mean, so certainly, with an android brain in hand, scientists will fulfill the dream unless prohibited. Already since the early 2000s, for instance, Japanese and South Korean researchers in particular have created quite human-looking robots–some even with the capabilities to move fluidly and quietly, and to exhibit facial expressions (see the Wikipedia article “Android (Robot),” secs. 2.1 & 2.2)–while South Korea, at least, is actively planning for robots’ integration into society beginning in 2020 (see the National Geographic News article “A Robot in Every Home by 2020, South Korea Says,” by Stefan Lovgren). Finally, as for fitting mechanical brains into mechanical bodies: as Markram argues in his Scientific American article, the creation of a virtual brain will likely inspire far more efficient and hence smaller computer hardware (see p. 55), and so presumably hardware that models the human brain in actual physical parameters to some degree. (Perhaps engineers will even take inspiration once again from the human brain, and use our increasingly sophisticated nanotechnology to create mechanical neurons and neurotransmitters.) How long the development of these systems might take is an open question, but as we’re continuing to develop nanotechnology without too much difficulty (and computers in general have traditionally progressed quite rapidly), it might not be unreasonable to speculate that it will take a few decades at most. Hence, androids themselves are likely not too far ahead, for assuming the Human Brain Project receives funding and proceeds as envisioned–with a brain by the 2020s (see “The Human Brain Project,” p. 52, and p. 53, box “Power of the Exaflop”), bodies to match, and “portable” mechanical brains within a few decades after that at the latest–we can expect at least the ability to create androids to be with us by 2060 or so, if not sooner.

Furthermore, if and when we do create androids, they will very likely be conscious, sentient beings; yet either way, considering them to be sentient beings will be the only reasonable context in which to treat them. Of course, many people would dispute this idea, and certainly philosophers have been arguing issues of consciousness for centuries (see, for instance, Wikipedia articles such as “Problem of Other Minds” and “Philosophical Zombie”), but with our current and foreseeable knowledge at least, it’s the only position that’s reasonable. After all, strictly speaking, we don’t even know that other humans are sentient, basically because we have no way for one person to experience another’s mind. Likewise, most of us figure that people with severe mental disabilities–as well as a variety of animals–experience the world in some more limited fashion, but again, we hardly know that scientifically. Granted, thanks to case studies of people with brain damage, for instance, we’ve come a long way in learning how different parts of the brain contribute to function. And, as Carl Zimmer explains in his Scientific American article “100 Trillion Connections,” researchers have even begun to learn a great deal about how the brain’s neural network is structured, to the point that we may soon be able to predict what people are seeing based on their brain patterns (see p. 63). Yet this great wealth of data, useful as it is, merely explains function, not essence; it says nothing about how mere chemical and electrical activity in someone’s brain gives rise to their mind–to awareness, feelings, and a sense of self. Indeed, this is presumably what leads many people to believe that consciousness must be more than the sum of biology and physics, that it must originate from a greater reality.
But, while this is certainly possible, it’s not particularly scientific, and so with regard to artificial life, we must accept that our brains really do endow us with consciousness. Finally, considering that anything that presumably disrupts sentience in a person also significantly disrupts functioning, we might just as well assume that consciousness inevitably arises in any highly sophisticated system–be it biological or technological in origin–and that it’s intimately tied to that functional sophistication. Hence, until and unless we gain the ability to directly experience others’ minds (which may never happen, for even if we could connect two people’s minds so that they could share thoughts, actually experiencing another’s world might still be impossible), if we ever create an android that claims it’s sentient–or even one that merely duplicates human high-level function–we must believe it to be conscious. Indeed, given the mysteries and suppositions about our own minds, we’d have no good reason for believing that such a mechanical being wouldn’t be conscious, so to avoid risking great harm and rights abuses, we’d have to treat it morally as we would any flesh-and-blood person.

Regardless, however far we choose to go in developing robots and intelligence software, our choice will surely impact us greatly; how it does so, though, will depend on how far we go. If we refrain from creating androids and instead stick with non-sentient robots–or else robots with moderate, animal-level awareness–then the impact will still be substantial, yet comparatively modest. Indeed, it’ll really just be the logical progression of our technological development over the past century, or, more specifically, of mechanization. For instance, say we develop humanoid robots that lack even the sophistication of moderately intelligent animals, but that are capable of responding to commands and executing tasks, complete with algorithms to analyze situations and determine appropriate courses of action. (A level of functionality that somewhat calls to mind Apple’s Siri iPhone app, except in a body interacting with physical objects, rather than on a smartphone manipulating software services.) Such robots could then be delegated tasks like cleaning within homes and small businesses; offering greetings and assistance to customers at department stores; driving buses and other public transportation vehicles; helping people with disabilities to live independently; fighting fires and responding in other sorts of emergency situations; and even watching over a home or office while everyone is away. At the extreme, we might even develop robots with dog- or cat-level awareness (presumably with bodies and personalities to match) to serve as pets, although of course they too would need to be treated humanely, just as flesh-and-blood pets should be treated. (Such mechanical pets would offer all the joys of owning a pet, but with certain advantages: needing less care, being hypoallergenic, and, indeed, being impervious to natural death, to the delight of everyone who’s been saddened by the short lifespans of most companion animals.)
And, since such robots wouldn’t possess consciousness (as determined by their very significant lack of function compared to a human), they wouldn’t be people, and so there presumably wouldn’t be any ethical issues associated with developing and using them. (Although with mechanical pets, we would have to consider, for instance, what effect their development and usage would have on the populations of real companion animals.) In short, then, such robots would do what mechanization has been doing for over a century: putting people out of work (and now possibly even animals), but making life a little easier for everyone in return, assisting us with tasks–some certainly undesirable or even life-threatening–that we would previously have had to do ourselves. Hence, of all the ways in which we might go forward with robots, the development of non-sentient ones seems the most certain to transpire (perhaps by the 2020s or ’30s), and it’s also the only way that really wouldn’t be such a radical change, given that it only continues a century-old trend.

If we ever choose to create androids, however, then “our technology” will cease to be such, as it will no longer blindly assist us, but rather both challenge us and compete with us, in a way that no previous technological creation ever has. Most prominently, arguments over whether androids are conscious people would place humans at odds with one another, and leave androids subject to rights abuses, possibly for many years. Using a complete virtual brain to study brain function, for instance–deactivating and activating parts of it at will, and controlling it within a virtual environment, as Henry Markram suggests in his Scientific American article (see p. 55)–would be akin to locking a human in a room and wantonly shutting off and restarting parts of their brain. Likewise, putting “safeguards” in place to allow for human control of androids would be akin to forcibly drugging a human so as to render them totally dependent. And, of course, forcing androids to work for us would be nothing less than a re-institution of slavery, except with a new group of people in a new era. Next, there would be the classic science fiction fear of androids taking over while enslaving or killing everyone, and it would certainly be a valid concern. After all, groups of us humans have historically tried to acquire power at the expense of others, and in the extreme case of the Nazis, for instance, some among us have even carried out horrific experimentation and staggeringly large-scale genocide. And indeed, while ill treatment could provoke great anger in androids toward us, if they were to become far more advanced than us, their mere functional superiority could leave them indifferent, at the least, to our existence, as they simply wouldn’t need us. Lastly, there would be the possibility that androids could indeed become so much more advanced than us–possibly through their own deliberate tweaking–that we would be rendered functionally obsolete.
In other words, imagine if within mere years, androids solved more problems in the arts and sciences than we have within centuries and millennia, and if, in general, they were far better and more efficient at everything. Aside from the fact that no one likes to feel unneeded or unable to do anything truly useful, there is a deeper danger: whereas even antagonistic governments usually retain a practical understanding of their interdependence, androids could conceivably band together and become totally independent of us, leaving them, perhaps, with no reason to care about us or our well-being anymore. Indeed, in this scenario, it’s conceivable that in order to keep up, we could even be forced to transition to mechanical and virtual bodies ourselves someday (though this option would exist only if we solve the additional, significant technological challenge of copying human minds to mechanical brains), lest we simply accept domination in some form. Hence, between great ethical debates and the possibilities of being taken over or else rendered functionally obsolete, it’s clear that creating androids would be an advancement unlike any before, for it would transform our societies in one of the greatest ways imaginable.

As we progress in our knowledge of artificial intelligence and android creation, then, the difference between a bright future and a profoundly disastrous one will come down to the choices we make along the way. Our simplest choice, perhaps, will be whether or not to create androids at all. There’s certainly a lot that we could accomplish just with non-sentient robots–sparing people difficult and dangerous work, for instance–so especially considering the issues involved with creating androids, stopping with advanced robots could be the wisest choice. If, however, we insist on creating androids–and we probably will, given the intellectual and imaginative drive to do so, coupled, perhaps, with a lack of true appreciation for the issues involved–then we’ll have to confront the conscious-or-not debate for real. Hence, we’ll have to decide how we regard androids–as sentient beings, or as mere simulators–and so whether or not to treat them as we would any other group of people. That is the most crucial choice we might face, then, for either we’ll go on to abuse androids–to our own moral detriment as well as to the androids’ quality of life, if not ultimately to our very lives–or else we’ll welcome them into our societies as fellow sentient beings. Interestingly, though, either choice will still leave us living on this planet alongside an intelligent species of our own creation–one that might subsequently become even more advanced, making it wise for us at least to develop the ability to migrate to mechanical bodies ourselves–and for this, nothing will ever be the same. Maybe we’ll subjugate androids or else become dominated by them ourselves, or maybe we’ll all come to live fulfilling lives together in peace; yet in having created life, we’ll each have quite a bit to live through … nothing less, indeed, than the greatest transformation of technology and society ever seen.