I, Human

It is widely accepted that advances in technology will vastly change our society, and humanity as a whole. Much more controversial is the claim that these changes will all be for the better. Of course more advanced technology will increase our abilities and make our lives easier; it will also make our lives more exciting as new products enable us to achieve things we’ve never even considered before. However, as new branches of technology gather pace, it’s becoming clear that we can’t predict what wider consequences these changes will bring – on our outlook on life, on our interactions with one another, and on our humanity as a whole.

Artificial Intelligence seems to have the most potential to transform society. The possibility of creating machines that move, walk, talk and work like humans worries many, for countless reasons. One concerned group is the Southern Evangelical Seminary, a fundamentalist Christian group in North Carolina. SES have recently bought one of the most advanced pieces of AI on the market in order to study the potential threats that AI poses to humanity. They will be studying the Nao, an autonomous programmable humanoid robot developed by Aldebaran Robotics. Nao is marketed as a true companion who ‘understands you’ and ‘evolves’ based on its experience of the world.

Obviously the Nao robot has some way to go before its functions are indistinguishable from those of humans, but scientists are persistently edging closer towards that end goal. Neuromorphic chips are now being developed that are modelled on biological brains, with the equivalent of human neurons and synapses. This is not a superficial, cynical attempt at producing something humanlike for novelty’s sake. Chips modelled in this way have been shown to be much more efficient than traditional chips at processing sensory data (such as sound and imagery) and responding appropriately.

Vast investment is being put into neuromorphics, and the potential for its use in everyday electronics is becoming more widely acknowledged. The Human Brain Project in Europe is reportedly spending €100m on neuromorphic projects, one of which is taking place at the University of Manchester. Also, IBM Research and HRL Laboratories have each developed neuromorphic chips under a $100m project for the US Department of Defense, funded by the Defense Advanced Research Projects Agency (DARPA).

Qualcomm, however, are seen as the most promising developers of this brain-emulating technology, with their Zeroth program, named after Isaac Asimov’s “Zeroth Law” of Robotics (the fourth law he added to the famous Three Laws of Robotics, to protect humanity as a whole rather than just individuals):

“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

Qualcomm’s program would be the first large-scale commercial platform for neuromorphic computing, with sales potentially starting in early 2015.

This technology has expansive potential, as the chips can be embedded in any device we could consider using. With neuromorphic chips, our smartphones for example could be extremely perceptive, and could assist us in our needs before we even know we have them. Samir Kumar at Qualcomm’s research facility says that “if you and your device can perceive the environment in the same way, your device will be better able to understand your intentions and anticipate your needs.” Neuromorphic technology will vastly increase the functionality of robots like Nao, with the concept of an AI with the learning and cognitive abilities of a human gradually moving from fiction to reality.

When robots do reach their full potential to function as humans do, there are many possible consequences that understandably worry the likes of the Southern Evangelical Seminary. A key concern of Dr Kevin Staley of SES is that traditionally human roles will instead be performed by machines, dehumanising society through reduced human interaction and a change in our relationships.

Even Frank Meehan, who was involved in AI businesses Siri and DeepMind (before they were acquired by Apple and Google respectively), worries that “parents will feel that robots can be used as company for their children”.

The replacement of humans in everyday functions is already happening – rising numbers of self-service checkouts mean that we can do our weekly shop without any interaction with another human being. Clearly this might be a much more convenient way of shopping, but the consequences for human interaction are obvious.

AI has also been developed to act as a Personal Assistant. In Microsoft’s research arm, for example, Co-director Eric Horvitz has a machine stationed outside his office to take queries about his diary, among other things. Complete with microphone, camera and a voice, the PA has a conversation with the colleague in order to answer their query. It can then take any action (booking an appointment, for example) as a human PA would.

This is just touching on the potential that AI can achieve in administrative work alone, and yet it has already proved that it can drastically reduce the number of human conversations that take place in an office. With all the convenience that it adds to work and personal life, AI like this could also detract from the relationships, creativity and shared learning that all branch out of a five-minute human conversation that would otherwise have taken place.

The potential for human functions to be computerised, and the accelerating pace at which AI develops, means that the effects on society could go from insignificant to colossal in the space of just a few years.

One concept that could drastically accelerate AI development is the Intelligence Explosion: the idea that we can use an AI machine to devise improvements to itself, with the resulting machine able to design further improvements to itself, and so on. This would develop AI far faster than humans can, because our own ability to perform calculations and spot areas for improvement in efficiency is limited.
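The recursive structure of this idea can be sketched as a toy calculation. This is purely an illustration of how compounding self-improvement differs from steady human-driven progress; the function name, the numbers and the fixed 10% improvement factor are all invented for the example, not a model of any real system.

```python
# Toy sketch of recursive self-improvement: each generation of the
# machine uses its current capability to design a slightly better
# successor, so gains compound instead of accumulating linearly.
def intelligence_explosion(initial_capability: float, generations: int) -> list[float]:
    capabilities = [initial_capability]
    for _ in range(generations):
        current = capabilities[-1]
        # The improvement is proportional to current capability:
        # a better designer produces a bigger upgrade. The 10% factor
        # is arbitrary and chosen only for illustration.
        capabilities.append(current * 1.1)
    return capabilities

history = intelligence_explosion(1.0, 10)
# Each generation's gain exceeds the previous one, so the curve
# steepens over time rather than rising at a constant rate.
```

Even with a modest per-generation improvement, the absolute gain grows every cycle, which is the core of Dewey's point about how rapid the resulting increase could be.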

Daniel Dewey, Research Fellow at the University of Oxford’s Future of Humanity Institute, explains that “the resulting increase in machine intelligence could be very rapid, and could give rise to super-intelligent machines, much more efficient at e.g. inference, planning, and problem-solving than any human or group of humans.”

The part of this theory that seems immediately startling is that we could have a super-intelligent machine, whose programming no human can comprehend since it has so far surpassed the original model. Human programmers would initially need to set the first AI machine with detailed goals, so that it knows what to focus on in the design of the machines it produces. The difficulty would come from precisely defining the goals and values that we want AI to always abide by. The resulting AI would focus militantly on achieving these goals in whichever arbitrary way it deems logical and most efficient, so there can be no margin for error.

We would have to define everything included in these goals to a degree of accuracy that even the English (or any) language might prohibit. Presumably we’d want to create an AI that looks out for human interests. As such, the concept of a ‘human’ would need definition without any ambiguity. This could cause difficulties when there might be exceptions to the rules we give. We might define a human as a completely biological entity – but the machine would then consider anyone with a prosthetic limb, for example, as not human.
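The prosthetic-limb problem can be made concrete with a deliberately crude sketch. Suppose a programmer encoded the "completely biological entity" definition as a literal rule; the class, field and function names below are invented purely to show how such a rule misclassifies an obvious edge case.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    has_prosthetic: bool

def is_human_v1(p: Person) -> bool:
    # Naive rule: a human is a *completely* biological entity.
    # Any non-biological component disqualifies the person.
    return not p.has_prosthetic

alice = Person("Alice", has_prosthetic=False)
bob = Person("Bob", has_prosthetic=True)  # wears a prosthetic limb

is_human_v1(alice)  # True
is_human_v1(bob)    # False: the rule wrongly excludes Bob
```

Every refinement of the rule invites further edge cases (pacemakers, dental fillings, future implants), which is exactly the open-ended definitional problem the paragraph above describes.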

We might also want to define what we want AI to do for humans. Going back to Asimov’s “Zeroth Law”, a robot may not “by inaction, allow humanity to come to harm.” Even if we successfully programmed this law into AI (which is difficult in itself), the AI could then take this law as far as it deems necessary. The AI might look at all possible risks to human health and do whatever it can to eliminate them. This could end up with machines burying all humans a mile underground (to eliminate risk of meteor strikes), separating us in individual cells (to stop us attacking each other) and drip feeding us tasteless gruel (to give us nutrients with no risk of overeating fatty foods).

This example is extreme, but if the programmers who develop our first AI are incapable of setting the right definitions and parameters, it’s a possibility. The main problem is that even basic instructions and concepts involve implicitly understood features that can’t always be spelled out. A gap in the translation might be overlooked if it’s not needed for 99.9% of the machine’s functions, but as the intelligence explosion progresses, a tiny hole in the machine’s programming could be enough to lead to a spiral of disastrous AI decisions.

According to Frank Meehan, whoever writes the first successful AI program (Google, he predicts) “is likely to be making the rules for all AIs.” If further AI is developed based on the first successful version (for example, in the way that the intelligence explosion concept suggests), there is an immeasurable responsibility for that developer to do things perfectly. Not only would we have to trust the developer to program the AI fully and competently, we would also have to trust that they have the integrity to make programming decisions that reflect humanity’s best interests, and are not solely driven by commercial gain.

Ultimately the first successful AI programmer could have fundamental control and influence over the way that AI progresses and, as AI will likely come to have a huge impact on society, this control could span the human race as a whole. So a key question now stands: How can we trust the directors of one corporation with the future of the human race?

As Meehan goes on to say, fundamental programming decisions will probably be made by the corporation “in secret and no one will want to question their decisions because they are so powerful.” This would allow the developer to write whatever they want without consequence or input from other parties. Of course AI will initially start out as software within consumer electronics devices, and companies have always been able to develop these in private before. But arguably the future of AI will not be just another consumer technology, rather it will be one that will change society at its core. This gives us reason to treat it differently, and develop collaborative public forums to ensure that fundamental programming decisions are taken with care.

These formative stages of development will be hugely important. One of the key reasons the Southern Evangelical Seminary are studying Nao is the worry that super-intelligent AI could lead to humans “surrendering a great deal of trust and dependence” with “the potential to treat a super AI as god”. Conversely, Dr Stuart Armstrong, Research Fellow at the Future of Humanity Institute, believes that a super-intelligent AI “wouldn’t be seen as a god but as a servant”.

The two ideas, however, aren’t mutually exclusive: we can surrender huge dependence to a servant. If we give the amount of dependence that leads parents to trust AI with the care of their children, society will have surrendered a great deal. If AI is allowed to take over every previously human task in society, we will be at its mercy, and humanity is in danger of becoming subservient.

AI enthusiasts are right to say that this technology can give us countless advantages. If done correctly, we’ll have minimum negative disruption to our relationships and overall way of life, with maximum assistance wherever it might be useful. The problem is that the full definition of ‘correctly’ hasn’t been established, and whether it ever will be is doubtful. Developers will always be focussed on commercial success; the problem of balance in everyday society will not be their concern. Balance could also be overlooked by the rest of humanity, as it focuses on excitement for the latest technology. This makes stumbling into a computer-controlled dystopian society a real danger.

If humans do become AI-dependent, a likely consequence is apathy (in other words, sloth – another concern of SES) and a general lack of awareness or knowledge, because computers will have made our input redundant. Humanity cannot be seen to have progressed if it becomes blind, deaf and dumb to the dangers of imperfect machines dictating our lives. Luddism is never something that should be favoured, but restraint and extreme care is needed during the development of such a precarious and transformative technology as AI.

“Don’t give yourselves to these unnatural men — machine men with machine minds and machine hearts! You are not machines! You are not cattle! You are men! You have a love of humanity in your hearts!”

Charlie Chaplin, The Great Dictator (1940)

Author: Tom Hook

Bid Co-ordinator for SBL, he holds a BA in Philosophy from Lancaster University, in which he focussed on Philosophy of Mind, and wrote his dissertation around Artificial Intelligence. He went on to support NHS Management in the development of healthcare services within prisons, before moving to SBL.

About Cyber Talk

CyberTalk Magazine is the leading multidisciplinary voice in cyber security, providing an accessible yet thought-provoking resource to academics and professionals alike. Produced in conjunction with the Cyber Security Centre at De Montfort University Leicester, CyberTalk features a wealth of opinion from some of the leading names in technology, psychology, philosophy and beyond.
The magazine marks a first, practical, step towards an expression of the growing realisation that we must move beyond our suspicion of the cyber domain and our fear of our dependence upon it, and offers a platform upon which a truly interdisciplinary approach to the safety and security of the human experience of the cyber domain can be developed.