Google's AIs learn how to encrypt their own messages
Neural networks could create encryption that becomes stronger as you hack it
A Google AI system learned how to devise its own encryption and to strengthen it after being attacked.
Google's deep learning unit, Brain, built two neural networks, 'Alice' and 'Bob', to test whether they could create their own encryption algorithms and communicate without their messages being intercepted.
Alice sent Bob an encrypted message consisting of 16 zeroes and ones, and Bob decrypted it while a third, adversarial network, 'Eve', intercepted the traffic and also tried to decrypt the message, according to the New Scientist.
Alice learned from the attempts that Eve successfully decrypted, altering its encryption so that Eve could not decode the next message.
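The roles described above can be illustrated with a minimal sketch. This is not the Google Brain system itself: the real networks learned an opaque transform through adversarial training, so the XOR-with-key function below is only a hypothetical stand-in showing the success criterion, namely that Bob (who shares Alice's key) recovers the message perfectly while Eve (who sees only the ciphertext) guesses no better than chance.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 16  # message length used in the experiment

def alice_encrypt(plaintext, key):
    # Stand-in for Alice's learned transform: XOR with the shared key.
    # The actual networks converged on their own, far less legible scheme.
    return plaintext ^ key

def bob_decrypt(ciphertext, key):
    # Bob holds the same key, so he can invert Alice's transform.
    return ciphertext ^ key

def eve_guess(ciphertext):
    # Eve has no key; here her naive guess is the ciphertext itself.
    return ciphertext

# Simulate many 16-bit exchanges.
n_msgs = 1000
plaintexts = rng.integers(0, 2, size=(n_msgs, N_BITS))
keys = rng.integers(0, 2, size=(n_msgs, N_BITS))

ciphertexts = alice_encrypt(plaintexts, keys)
bob_acc = (bob_decrypt(ciphertexts, keys) == plaintexts).mean()
eve_acc = (eve_guess(ciphertexts) == plaintexts).mean()

print(bob_acc)  # 1.0 — Bob reconstructs every bit
print(eve_acc)  # near 0.5 — Eve does no better than coin-flipping
```

In the real experiment, these two accuracy figures are exactly what drove training: Alice and Bob were rewarded when Bob's reconstruction error fell, and penalised when Eve's guesses beat chance.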
By the end of the experiment, the neural networks had devised an encryption method strong enough that even Eve could not break it. The scheme the two networks arrived at is so complex that even the researchers struggle to understand it.
These findings could matter in the future, when neural networks might help AIs create encryption schemes that learn and grow stronger as hackers try to break them, making them well suited to cybersecurity.