Machines Now Know How to Terrorise Humans

The Shelley Project studies how to produce horror stories as a result of collaboration between humans and artificial intelligence.

Ernest Thesiger and Colin Clive in Bride of Frankenstein directed by James Whale, 1935 | No known copyright restrictions

Shelley is the first artificial intelligence that writes horror stories. The project aims to explore how humans and machines can collaborate, the obstacles in that relationship and, above all, whether artificial intelligence is capable of provoking primary emotions in humans. We discuss this with Manuel Cebrián, a scientist at the MIT Media Lab, who introduces us to deep learning and explains the process by which they taught Shelley to write horror stories.

1816 was the Year Without a Summer. The climate went completely crazy. Frosts and droughts ruined harvests and spread hunger across North America and Europe, while in Asia monsoons caused flooding. Warm weather stayed away and the snow kept falling, even in June, which led a group of five English friends to spend their holidays in Switzerland confined in a mansion close to Lake Geneva. The five were Lord Byron, poet; John Polidori, doctor; Percy Shelley, poet; his wife, the writer Mary Shelley; and her stepsister, Claire Clairmont.

In their boredom they challenged each other to write the most frightening horror story possible. That competition led Lord Byron to write his poem “Darkness”, narrated by the last man on Earth; Polidori thought up a tale of vampires that would later inspire Bram Stoker to create his famous Count Dracula; and Mary Shelley conceived Frankenstein, although it took her another 14 months to finish the novel.

Who could have guessed that 2017 would turn out to be another year without a summer, and that Byron, Shelley and Polidori would once more find themselves confined, although this time on Twitter, where they would write new tales of terror?

To write, Shelley relies on an AI capability called deep learning, which can learn on its own from large quantities of data, imitating the functioning of the brain’s neural networks.

The algorithm analyses the data and extracts the patterns relevant to the task it is going to carry out: diagnosing diseases, in the case of biomedical applications; discovering new exoplanets, as recently announced by NASA and Google; or, like Shelley, writing new and original horror stories from scratch.

For her debut as a horror writer, this tweeting android prepared by “reading” a vast quantity of horror literature, from classics such as Edgar Allan Poe and H.P. Lovecraft to modern authors like Stephen King. She also devoured the horror forums on Reddit – a news aggregator. She then processed everything she had read, hundreds of thousands of tales of terror, extracted patterns, and started to generate terrifying stories.
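The pipeline described above – read a corpus, extract patterns, generate new text – can be sketched in miniature. Shelley itself used a deep neural network; as a hedged illustration of the same read-then-generate idea, here is a toy word-level Markov chain (the tiny corpus and the context size are invented for the example, not the project's actual data or code):

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Start from a random learned context and extend it word by word."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:  # dead end: this context was never continued
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the door creaked open and the darkness spilled in "
          "the door creaked shut and the house fell silent")
print(generate(train(corpus)))
```

A real system replaces the lookup table with a network that generalises beyond contexts it has literally seen, but the loop is the same: learn which continuations follow which contexts, then sample.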

To find out which stories best met the objective of frightening whoever read them, above all at the start, Shelley used people’s feedback: the likes and retweets that each new tale received.

Thus, every so often the algorithm sends a tweet or a short thread that starts a new story, which anyone can continue simply by replying to any of the messages ending with the hashtag #yourturn. Shelley, however, does not answer every message she receives; only those with the greatest narrative potential, capable of generating a long thread.
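In outline, that selection step reduces to ranking reader replies and continuing only the most promising ones. A minimal sketch, assuming engagement (likes plus retweets) as a crude proxy for “narrative potential” – the field names and threshold here are illustrative inventions, not the project's actual code:

```python
def pick_replies(replies, top_n=3):
    """Rank reader replies by engagement and keep the top few.

    Each reply is a dict with 'text', 'likes' and 'retweets' keys
    (an assumed shape for this sketch).
    """
    ranked = sorted(replies,
                    key=lambda r: r["likes"] + r["retweets"],
                    reverse=True)
    return ranked[:top_n]

replies = [
    {"text": "The lights flickered once, then twice.", "likes": 40, "retweets": 12},
    {"text": "lol this is dumb", "likes": 2, "retweets": 0},
    {"text": "It was behind the mirror all along.", "likes": 15, "retweets": 3},
]
best = pick_replies(replies, top_n=1)
print(best[0]["text"])
```

The real system presumably also scores the text itself; engagement alone is just the feedback signal the article describes.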

“She is capable of learning which narrative threads work best for horror. And she can generate scary scenes like nothing that already exists. She has created a completely new type of terror,” Cebrián affirms enthusiastically, adding: “Humans are no longer even needed. Together with Polidori and Lord Byron, the three robots are now capable of writing and gradually improving themselves unaided.”

For the moment, however, and much to the relief of flesh-and-blood horror writers, the tales that the three robots are capable of generating are a maximum of five paragraphs long.

Shelley also does not respond to comments that are racist or sexist, contain insults or are incoherent. And there are plenty of these: “There were people who spent their time trolling the project and trying to get Shelley to say things that she was not supposed to say. The desire to destroy is innate in human beings, just like the desire to create. Evil is always just around the corner,” points out Cebrián, who confesses that he is a fan of the horror genre.

The Shelley project made its debut at Halloween 2017 and forms part of a trilogy in which Cebrián, together with researchers Pinar Yanardag and Iyad Rahwan, also from the Scalable Cooperation group at the MIT Media Lab, aims to explore how humans and machines collaborate, what obstacles exist in that relationship and, above all, whether artificial intelligence is capable of provoking primary emotions in human beings, such as fear, using cooperation strategies.

“Creating a visceral emotion such as fear continues to be one of the pillars of human creativity. The challenge is especially important at a time when we are asking what the limits are for artificial intelligence,” say the three researchers on the project’s website.

“In recent years there has been much talk about artificial intelligence being a threat to human beings. We want to explore to what extent this is true and stay one step ahead of any possible ill-intentioned use of this technology. If somebody wanted to use artificial intelligence to instigate fear in society, to propagate ideas with the aim of terrorising, could they? The answer is yes, but with qualifications,” Cebrián considers.

The first experiment in this vein was launched in 2016, when, also at Halloween, they published Nightmare Machine, a robot capable of generating haunted faces and places. Like Shelley, this machine of horrors is based on deep learning. The researchers first trained the system, feeding it with faces of celebrities such as Brad Pitt, landscapes and monuments such as the Eiffel Tower, and a corpus of supposedly terrifying images such as zombies and cities that are haunted or filled with toxic waste.

Neuschwanstein Castle, transformed with Nightmare Machine

They mixed the two types of images to different degrees and showed the results to humans, who voted via the project website on which image they found most frightening (the Brad Pitt zombie proved very popular).

Thus, in the end, the system had thousands of digitally generated faces that, thanks to the votes of over two million people, it could classify and rank by scariness. Interestingly, the algorithm also learned what was most frightening in each country, or for men versus women. “There are cultures in which the AI is not capable of learning what is scary, such as the Asian cultures, where the nightmare machine does not work very well because they have a vision of horror that is completely different to our own,” notes Cebrián.
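The per-country or per-gender learning described above boils down to aggregating votes by group and ranking images within each group. A minimal sketch, with invented vote records and group labels standing in for the project's real data:

```python
from collections import defaultdict

def scariest_by_group(votes):
    """votes: iterable of (group, image, voted_scary) triples.

    Returns, for each group, the image with the highest share
    of 'scary' votes within that group.
    """
    # group -> image -> [scary_count, total_count]
    tally = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for group, image, scary in votes:
        tally[group][image][0] += int(scary)
        tally[group][image][1] += 1
    return {group: max(images, key=lambda im: images[im][0] / images[im][1])
            for group, images in tally.items()}

votes = [
    ("US", "zombie_pitt", True), ("US", "zombie_pitt", True),
    ("US", "haunted_castle", False),
    ("JP", "zombie_pitt", False), ("JP", "haunted_castle", True),
]
print(scariest_by_group(votes))  # → {'US': 'zombie_pitt', 'JP': 'haunted_castle'}
```

With millions of votes, the same grouped tally is what lets the system say which imagery frightens which audience, and where (as Cebrián notes) its learned notion of horror fails to transfer.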

To check the real effectiveness of this nightmare machine, they conducted an experiment using a psychometric test to measure participants’ anxiety levels, and confirmed that the 10 faces and places rated scariest also caused the most anxiety among the volunteers. “The AI was capable of detecting people’s extreme emotions and of provoking them,” points out Cebrián, adding: “So, if somebody wanted to use manipulated images to frighten people, could they? The answer is yes.”

The nightmare machine is not the first attempt at using AI to cause fear. The IBM Watson supercomputer helped to create the trailer for the science fiction film ‘Morgan’ (2016). The algorithm analysed hundreds of horror film trailers and then processed the entire feature film to identify its best horror scenes. Finally, it isolated 10 moments, some six minutes of video, which a human editor assembled into a coherent story. AI cut the process down to barely 24 hours, when producing a film trailer generally takes between 10 days and a month.

Now about to make its debut is the third and final part of the trilogy formed with Shelley and Nightmare Machine. It will arrive on 1 April this year, which in many countries is April Fool’s Day. “We will close the trilogy with Norman, in honour of Norman Bates, an AI that will be capable of frightening us in the most psychological way,” Cebrián comments.

For the time being, it seems that machines are capable of scaring us. What will those who handle them do with that, in an era when thousands of bots circulate around social media spreading false news and manipulated images, capable of all types of instantaneous reactions? And, perhaps more importantly, what will we, as a society, do with it?

“Thanks to these types of experiments, we can now detect, for example, when an online entity of this nature – the notorious bot – has been created, and understand better how they work and what their limits are,” says Cebrián. “Sometimes you have to do evil things to be able to see the limit of doing evil.”

Journalist specialising in science and digital culture. She is currently a contributor to La Vanguardia, Muy Interesante, Quo México, Historia y Vida and Mètode, and has also worked for the newspapers Público and Avui.