Google's AIs learn how to encrypt their own messages
Neural networks could create encryption that becomes stronger as you hack it
A Google AI system has learned to devise its own encryption scheme and to strengthen it after being attacked.
Google's deep learning unit, Brain, built two neural networks, 'Alice' and 'Bob', to test whether they could create their own encryption algorithms and communicate without a third network being able to read their messages.
Alice sent Bob an encrypted message consisting of 16 ones and zeroes, which Bob decrypted, while a third network, 'Eve', intercepted the traffic and also tried to decrypt it, according to New Scientist.
Whenever Eve succeeded in decrypting a message, Alice learned from the failure and altered her encryption to prevent Eve from decoding the next one.
By the end of the experiment, the neural networks had created an encryption method strong enough that even Eve could not break it. The scheme the two networks devised is so complex that even the researchers struggle to understand how it works.
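The training dynamic described above can be sketched as an adversarial objective: Bob is rewarded for reconstructing Alice's plaintext, while Alice and Bob are jointly penalised whenever Eve does better than random guessing. The snippet below is a minimal illustration of those loss terms on one 16-bit message, not Google's actual implementation; the network outputs (`bob_guess`, `eve_guess`) are hypothetical stand-ins for what the trained models would produce.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # message length in bits, as in the experiment

def bit_error(msg, guess):
    """Mean per-bit reconstruction error between plaintext and a guess."""
    return np.abs(msg - guess).mean()

# Hypothetical outputs for one round of communication.
plaintext = rng.integers(0, 2, size=N).astype(float)
bob_guess = plaintext.copy()   # assume Bob decrypts perfectly
eve_guess = rng.random(N)      # assume Eve guesses roughly at chance

# Eve simply minimises her own reconstruction error.
eve_loss = bit_error(plaintext, eve_guess)

# Alice and Bob minimise Bob's error, plus a penalty that grows as
# Eve beats chance (an eavesdropper at chance gets half the bits wrong).
eve_bits_wrong = N * eve_loss
eavesdrop_penalty = ((N / 2 - eve_bits_wrong) ** 2) / (N / 2) ** 2
alice_bob_loss = bit_error(plaintext, bob_guess) + eavesdrop_penalty
```

In training, gradients of `alice_bob_loss` would update Alice and Bob while gradients of `eve_loss` update Eve, so each side adapts to the other, which is why the encryption grows stronger as it is attacked.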
These findings could matter in the future: neural networks could help AIs create encryption schemes that learn and become stronger as hackers try to break them, making them well suited to cybersecurity.