Google's AIs learn how to encrypt their own messages
Neural networks could create encryption that becomes stronger as you hack it
A Google AI system learned how to build its own encryption key, and to make it stronger after being attacked.
Google's deep learning unit, Brain, built two neural networks, 'Alice' and 'Bob', to test whether they could create their own encryption algorithms and communicate without their messages being read by an eavesdropper.
Alice sent Bob an encrypted 16-bit message of zeroes and ones, which Bob decrypted while a third network, 'Eve', intercepted the traffic and also tried to decrypt it, according to New Scientist.
Alice learned from the attempts that Eve successfully decrypted, altering its encryption to prevent Eve from decoding the next message.
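The training signal behind this back-and-forth can be sketched as an adversarial objective: Bob is rewarded for reconstructing Alice's message, while Alice and Bob are jointly penalised whenever Eve does better than random guessing. The helper names and the exact loss combination below are illustrative assumptions, not Google's actual training code.

```python
# Toy sketch of the adversarial objective in the Alice/Bob/Eve setup.
# Function names and the loss formula are illustrative assumptions.

def bit_error(predicted, actual):
    """Fraction of bits the decryption attempt got wrong."""
    return sum(p != a for p, a in zip(predicted, actual)) / len(actual)

def bob_loss(bob_guess, plaintext):
    # Bob is trained simply to reconstruct Alice's plaintext.
    return bit_error(bob_guess, plaintext)

def alice_bob_loss(bob_guess, eve_guess, plaintext):
    # Alice and Bob jointly minimise Bob's error while pushing Eve's
    # accuracy toward chance, i.e. half the bits wrong on average.
    eve_err = bit_error(eve_guess, plaintext)
    return bob_loss(bob_guess, plaintext) + (0.5 - eve_err) ** 2

plaintext = [0, 1] * 8                # a 16-bit message, as in the experiment
perfect_bob = list(plaintext)         # Bob recovers every bit
guessing_eve = [0] * 8 + [1] * 8      # Eve gets exactly half the bits right

# The ideal outcome for Alice and Bob: Bob decrypts perfectly and Eve
# is no better than a coin flip, so the joint loss is zero.
print(alice_bob_loss(perfect_bob, guessing_eve, plaintext))  # → 0.0
```

In training, each network's weights would be updated against these losses in alternation, which is what lets the encryption grow stronger as Eve improves.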
By the end of the experiment, the neural networks had devised an encryption method robust enough that even Eve could not break it. The scheme the two networks settled on is so complex that even the researchers struggle to understand how it works.
These findings could prove important in future, as neural networks may help AIs create encryption that learns and grows stronger as hackers try to break it, making it well suited to cybersecurity.