Google's AIs learn how to encrypt their own messages
Neural networks could create encryption that becomes stronger as you hack it
A Google AI system learned to devise its own encryption scheme, and to strengthen it after being attacked.
Google's deep learning research team, Google Brain, built two neural networks, 'Alice' and 'Bob', to test whether they could create their own encryption algorithm and communicate without a third party reading their messages.
Alice sent Bob an encrypted message consisting of 16 zeroes and ones, which Bob decrypted using a key shared with Alice, while a third network, Eve, intercepted the ciphertext and tried to decrypt it without the key, according to New Scientist.
Whenever Eve managed to decode a message, Alice learned from the failure and altered its encryption to stop Eve decoding the next one.
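The adversarial setup described above can be sketched in a few lines. This is an illustrative toy, not the networks' actual learned scheme: "Alice" here is a plain XOR with the shared key (a classic one-time-pad step), "Eve" just guesses randomly, and only the shape of the training objective, which rewards Alice and Bob for low reconstruction error while pushing Eve's error toward chance level, follows the research paper. All names and models are stand-ins.

```python
import numpy as np

N_BITS = 16  # message length used in the experiment
rng = np.random.default_rng(0)

def bit_error(guess, plaintext):
    """Mean number of wrongly recovered bits per 16-bit message."""
    return float(np.mean(np.sum(np.abs(guess - plaintext), axis=1)))

# Random batch of plaintexts and shared keys (bits as 0/1).
plaintext = rng.integers(0, 2, size=(1000, N_BITS))
key = rng.integers(0, 2, size=(1000, N_BITS))

# Stand-in "Alice": XOR with the shared key. The real networks learned
# an opaque transformation; XOR is used here only so the toy is exact.
ciphertext = plaintext ^ key

# "Bob" holds the key, so he can invert Alice's step perfectly.
bob_guess = ciphertext ^ key

# "Eve" has no key; here she simply guesses each bit at random,
# standing in for an adversary stuck at chance level.
eve_guess = rng.integers(0, 2, size=ciphertext.shape)

bob_err = bit_error(bob_guess, plaintext)  # 0.0: perfect recovery
eve_err = bit_error(eve_guess, plaintext)  # about 8 of 16 bits wrong

# Adversarial objective (same shape as in the paper): Alice and Bob
# minimise Bob's error while driving Eve's error toward N_BITS / 2,
# i.e. no better than a coin flip on every bit.
alice_bob_loss = bob_err + ((N_BITS / 2 - eve_err) ** 2) / (N_BITS / 2) ** 2
print(bob_err, eve_err, alice_bob_loss)
```

In training, this loss is what Alice and Bob's weights are updated against, while Eve is trained separately to minimise her own reconstruction error; the tug-of-war between the two objectives is what makes the encryption harden as Eve improves.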
By the end of the experiment, the neural networks had produced an encryption scheme strong enough that Eve could no longer break it. The scheme the two networks devised is so opaque that even the researchers struggle to understand how it works.
These findings could matter in the future: neural networks could help AIs create encryption that learns and grows stronger as hackers try to break it, making it well suited to cybersecurity.