Synopsis: A look at the SF trope of uploading human minds to synthetic bodies and one of its current real-life precursors, autonomous robotics. Human Rights Watch warns about the use of such systems in war.

Includes links to: (a) a PDF of the report Losing Humanity: The Case Against Killer Robots, and (b) a news item.

Includes embedded video: background video from Human Rights Watch about the report.

I wrote that post as a way of exploring real-life, current-day counterparts to a science fiction trope that I use in my fiction: the uploading of the human mind into an artificial body.

In my novella Los Angeles Honey, a character refers to this union of a natural mind with a synthetic body as homo artificialis, and I have a topical blog, also called Homo Artificialis, devoted solely to the idea.

There are researchers today working on making that aspect of science fiction into fact. And, whether they succeed or not, various precursors to homo artificialis exist or are being developed now.

Following the joint report discussed in the last post, Human Rights Watch issued a 50-page report of its own on another homo artificialis precursor. The report urges national and international legislation pre-emptively banning “killer robots,” by which it means weapons of war able to make life-and-death decisions autonomously, with no input from a human being.

Now, when I say killer robots are a precursor to homo artificialis, I don’t mean that they are, in themselves, an essential step in the development of a functional artificial human body. Highly sophisticated, adaptable autonomous systems are such a step, however, and the fastest and surest way to advance them to the level the uploading project requires is to let them range free, developing without constraint. If we’re not going to do that, if we’re instead going to limit how those systems are developed and in which applications their use is acceptable (which might well be wise), then those limits are germane to the evolution of homo artificialis.

Why are autonomous systems important? The natural human body includes systems that are either completely or normally outside of conscious control–like heartbeat and respiration–and which are regulated by the autonomic nervous system. To have a viable instantiation of a human consciousness in a synthetic body, we’re going to need a comparable artificial system so that we don’t have to consciously control every bodily function. That kind of coordination is going to require sophisticated autonomous systems.

A section of lab-grown trachea, as used in the world’s first synthetic organ transplant (details here).

As with the report on human augmentation, I’ve made the Killer Robots report available as a free, downloadable PDF in the Homo Artificialis Library on my topical blog Homo Artificialis, filed under Ethics and Homo Artificialis.

As Raw Story reports in its news item on the report, the weapons in question aren’t yet deployed, but they are in development:

Such weapons do not yet exist, and major powers, including the US, have not decided to deploy them. But precursors are already being developed. The US, China, Germany, Israel, South Korea, Russia, and Britain are engaged in researching and developing such weapons.

The HRW report, wisely, proposes not only legislative solutions, which can sometimes reflect the realities of the political landscape more than the issue at hand, but also a grassroots approach rooted in professional ethics, urging roboticists to generate their own code of conduct and tasking them to:

Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.

Military applications of advanced technology are inevitable; indeed, much advanced technology begins life as a military project, for instance within the Defense Advanced Research Projects Agency (DARPA). This has several consequences, among them:

as with any technology, there is the potential for error or abuse, but because of the military context the result can be serious injury or death,

there is likely to continue to be a trickle-down effect in which military applications migrate to civilian applications, like law enforcement and civil security, that also have the potential for error or abuse resulting in serious injury or death, and

the first two issues raise the possibility of an alarmist backlash that ends up limiting the positive, beneficial effects such technology can have (and, as we know from laws ostensibly intended to curb the pirating of intellectual property, we sometimes get all the bad consequences of such a measure without its actually accomplishing its stated goal).

On balance I’m an optimist regarding the life-enhancing potential of technology. Clearly, though, recognizing the immense benefits that have come from technology and that will continue to flow from it isn’t an excuse to be naive about possible negative consequences. If those consequences are going to be minimized (along with the potential anti-technological backlash) then we have to engage with these issues in a constructive way.

I haven’t yet read the full report, so I haven’t decided whether it’s sensible and constructive, alarmist and over-reaching, or a bit of both. But if we’re going to engage with these issues constructively, killer robots aren’t a bad place to start.