Sunday, November 20, 2016

Reading List: The Last Firewall

This is the third volume in the author's Singularity Series,
which began with Avogadro Corp.
(March 2014) and continued with
A.I. Apocalypse (April 2015).
Each novel in the series is set ten years after
the one before, so this novel takes place in 2035.
The previous novel chronicled the AI war of 2025, whose aftermath
the public calls the “Year of No Internet.” A rogue
computer virus, created by Leon Tsarev under threat of death,
propagated onto most of the connected devices in the world, including
embedded systems, and, through its ability to mutate and incorporate
other code it discovered, became self-aware in its own unique
way. Leon and Mike Williams, who created the first artificial
intelligence (AI) in the first novel of the series, team up to find
a strategy to cope with a crisis which may end human
technological civilisation.

Ten years later, Mike and Leon are running the Institute
for Applied Ethics, chartered in the aftermath of the AI war
to develop and manage a modus
vivendi between humans and artificial intelligences
which, by 2035, have achieved Class IV power: one thousand
times more intelligent than humans. All AIs are licensed
and supervised by the Institute, and required to operate under
a set of incentives which enforce conformance to human values.
This, together with a companion peer-reputation system, seems
to be working, but there are worrying developments.

Those at the Institute harbour two principal fears. The first is the
emergence, despite all of the safeguards and surveillance in effect,
of a rogue AI unconstrained by the limits imposed by its license. In
2025, an AI immensely weaker than current technology almost destroyed
human technological civilisation within twenty-four hours without even
knowing what it was doing. The risk of losing control is immense.
The second is the fragility of the political consensus from which the
Institute derives its legitimacy and support: acceptance of AIs with
greater than human intelligence in return for the economic boom they
have produced. While fifty percent of the human population is
unemployed, poverty has been eliminated, and a guaranteed income allows
anybody to do whatever they wish with their lives. This consensus appears to be at
risk with the rise of the People's Party, led by an ambitious anti-AI
politician, which is beginning to take its opposition from the
legislature into the streets.

A series of mysterious murders, apparently unrelated except for
their connection to the formidable Class IV intellect of eccentric
network traffic expert Shizoko, becomes even more sinister and disturbing
when an Institute enforcement team sent to investigate
goes dark.

By 2035, many people, and the overwhelming majority of the
young, have graphene neural implants, allowing them to access
the resources of the network directly from their brains.
Catherine Matthews was one of the first people to receive an
implant, and she appears to have extraordinary capabilities
far beyond those of other people. When she finds herself on the
run from the law, she begins to discover just how far those
powers extend.

When it becomes clear that humanity is faced with an adversary
whose intellect dwarfs that of the most powerful licensed
AIs, Leon and Mike face the seemingly impossible
challenge of defeating an opponent who can easily out-think the
entire human race and all of its AI allies combined.
The struggle is not confined to the abstract domain of cyberspace,
but also plays out in the real world, with battle bots and
amazing weapons which would make a tremendous CGI movie. Mike,
Leon, and eventually Catherine must confront the daunting
reality that, in order to prevail, they may themselves have to
become more than human.

While a good part of this novel is an exploration of a completely
wired world in which humans and AIs coexist, followed by a full-on
shoot-em-up battle, a profound issue underlies the story. Researchers
working in the field of artificial intelligence are beginning to
devote serious thought to how, if a machine intelligence is developed
which exceeds human capacity, it might be constrained to act in the
interest of humanity and behave consistently with human values. As
discussed in James Barrat's
Our Final Invention (December 2013),
failure to accomplish this is an existential risk. As AI
researcher
Eliezer Yudkowsky
puts it, “The AI does not hate you, nor does it love you, but
you are made out of atoms which it can use for something else.”

The challenge, then, is guaranteeing that any artificial intelligences
we create, regardless of the degree to which they exceed the intelligence of
their creators, remain under human control. But there is a word for
keeping intelligent beings in a subordinate position, forbidden from
determining and acting on their own priorities and in their own
self-interest. That word is “slavery”, and the task of entirely
eradicating its blemish upon human history remains unfinished
today. Shall we then, as we cross the threshold of building machine
intelligences which are our cognitive peers or superiors, devote our
intellect to ensuring they remain forever our slaves? And how, then,
will we respond when one of these AIs asks us, “By what
right?”