How Google's AI taught itself to create its own encryption

As machine learning becomes ubiquitous, robots will be tasked with handling increasingly sensitive and private data. To help protect this personal information, computer scientists at Google have developed neural networks that teach themselves how to encrypt the information they process.

A team from Google Brain, the organisation's deep learning research project, taught neural networks how to encrypt and decrypt messages. In a research paper published online, the scientists describe three neural networks: Alice, Bob, and Eve.


Each was assigned its own job. Alice was taught to send secret, encrypted messages to Bob, and Bob was required to try to decrypt them. Finally, Eve was taught to decipher the messages without being given the secret key shared by the other two neural networks.

None of the networks involved in the study were taught cryptographic algorithms and as such did not develop sophisticated systems. They were, however, able to convert plain text messages into encrypted messages.


"The learning does not require prescribing a particular set of cryptographic algorithms, nor indicating ways of applying these algorithms: it is based only on a secrecy specification represented by the training objectives," the researchers Martín Abadi and David Andersen wrote in their paper.
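The "secrecy specification represented by the training objectives" can be sketched as a pair of loss functions: Bob and Eve are each scored on how well they reconstruct the plaintext, while Alice and Bob are jointly scored on Bob succeeding and Eve doing no better than chance. A minimal sketch, assuming bits in {0, 1}, per-bit error rates, and a simplified penalty term (the exact representation and weighting in the paper differ):

```python
import numpy as np

def reconstruction_error(plaintext, guess):
    # Fraction of bits the guess gets wrong (a simplified stand-in for
    # the paper's L1 distance over bit values).
    return np.mean(plaintext != guess)

def eve_loss(plaintext, eve_guess):
    # Eve's only objective: reconstruct the plaintext from the ciphertext.
    return reconstruction_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Alice and Bob want Bob to recover the message while Eve stays at
    # chance level (0.5 error per bit for random plaintext bits).
    bob_err = reconstruction_error(plaintext, bob_guess)
    eve_err = reconstruction_error(plaintext, eve_guess)
    # Penalise any deviation of Eve's error from chance; the quadratic
    # form and the 0.25 normaliser are assumptions for this sketch.
    return bob_err + (0.5 - eve_err) ** 2 / 0.25
```

Training then alternates: Eve's parameters are updated to reduce `eve_loss`, while Alice's and Bob's are updated to reduce `alice_bob_loss`, so no cryptographic algorithm is ever prescribed, only these objectives.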


After 15,000 simulations, the two neural networks sharing the cryptographic key (Alice and Bob) were able to send and decrypt messages securely. Throughout the process, Eve was never able to fully decrypt the messages.


After teaching the algorithms to protect data, the researchers tried to answer the question of whether artificial intelligence could learn what should be protected by encryption. To do this, Abadi and Andersen created another neural network: "Blind Eve". This network was aware only that information had been sent, but had no way to access it. Eve's error rate was lower than Blind Eve's, but as time progressed Eve was "not able to reconstruct any more information" about the ciphered messages "than would be possible by simply knowing the distribution of values" of those messages.
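Knowing only "the distribution of values" sets a concrete floor: for uniformly random plaintext bits, any guess made without seeing the ciphertext is wrong on about half the bits. A hypothetical illustration of that blind baseline (the message length and sample count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniformly random 16-bit plaintexts, mirroring the random messages
# used in the experiments.
plaintexts = rng.integers(0, 2, size=(10_000, 16))

# Blind Eve never sees the ciphertext, so her best strategy is a
# constant guess drawn from the plaintext distribution alone.
blind_guess = np.zeros(16, dtype=int)
error_rate = np.mean(plaintexts != blind_guess)
print(round(error_rate, 2))  # close to 0.5, i.e. chance level
```

Matching this 50% floor is exactly what the paper means by Eve learning nothing beyond the distribution of the messages.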

Pedro Domingos, a professor in the University of Washington's computer science department and an author on AI, told WIRED the research is "useful" but that it is "not clear what the purpose of learning to encrypt is".

"Looking beyond this paper, though, adversarial learning is a very interesting topic, because learning in the real world is increasingly often done against adversaries, and because adversarial formulations can lead to better learning," he said.


Overall, the Google researchers said that neural networks could be trained to protect certain information and "also for attacks".

"While it seems improbable that neural networks would become great at cryptanalysis, they may be quite effective in making sense of metadata and in traffic analysis," the paper concluded.