Hackers could 'weaponise AI', with devastating consequences

Universities warn that AI research ignores the threat of the technology being exploited by criminals

Advances in artificial intelligence could be exploited by criminals to launch automated cyber attacks, manipulate public opinion through fake videos, or weaponise commercial drones, researchers warned yesterday.

A joint report by policy experts from Cambridge, Oxford and Yale universities, alongside military experts, claimed that AI-enabled attacks by rogue states and criminal groups pose an imminent threat to both physical and political security.

The 98-page 'Malicious Use of Artificial Intelligence' report also calls on policymakers and researchers to examine and prepare for the possibility of the technology being abused, and to take greater care when publishing details of their findings.

Despite the potential benefits of AI to society, there are concerns that the technology is being developed with little regard for public safety, particularly in machine learning, where computers are trained to perform tasks as capably as humans, according to the report.

"We all agree there are a lot of positive applications of AI," said Miles Brundage, a research fellow at Oxford's Future of Humanity Institute, speaking to Reuters. "There was a gap in the literature around the issue of malicious use."

The report, which examined academic research into the development of AI, identified digital, physical and political security as the domains most vulnerable to criminal activity.

One future threat highlighted is the use of AI to launch and coordinate cyber attacks at a scale and level of sophistication that are currently unfeasible. Systems trained with machine learning will be capable of identifying the weakest targets, automatically evading detection, and adapting to efforts to shore up defences in order to sustain an attack, the report added.

"More sophisticated AI hacking tools may exhibit much better performance both compared to what has historically been possible and, ultimately (though perhaps not for some time), compared to humans," the paper stated. One mooted scenario could be the use of an army of computer systems imitating human-like click patterns and site navigation in order to overwhelm a service in the manner of a DDoS attack. This is already possible to a large extent through botnets led by, for instance, Mirai malware.

When it comes to physical security, there's growing concern that off-the-shelf commercial drones could be turned into makeshift missiles, or engineered to deliver explosives or hazardous material to a target.

"Distributed networks of autonomous robotic systems, cooperating at machine speed, provide ubiquitous surveillance to monitor large areas and groups and execute rapid, coordinated attacks," the paper said.

The report follows the findings of a workshop in 2017, in which researchers predicted that AI would be used to create realistic video or images of famous figures in order to spread propaganda.

Evidence of this has already been seen in the wild, most notably in the rise of so-called 'deepfakes': the use of machine learning to graft celebrities' faces onto pornographic material.

There's also the likelihood that AI will soon be used to coordinate the spread of fake news across social media, at a far greater scale than the campaigns alleged to have operated during the US presidential election.

The report also questions whether researchers have a responsibility to ensure their findings are tested for security holes before any information is presented to the wider community.

The report calls for four changes to the way AI research is conducted, starting with greater collaboration between policymakers and technical researchers to investigate, prevent and mitigate potential malicious uses of AI. Secondly, it argues that more focus should be placed on the harmful applications of AI when setting research priorities.

It also recommends that researchers create a list of best practices for handling AI security threats, and that the range of voices and experts involved in future AI discussions be expanded.

Dave Palmer, director of technology at cyber security firm Darktrace, believes that although the report highlights a real threat, the defenders have the "home turf advantage".

"The value of AI for cyber defence lies in its ability to gather lots of subtle pieces of information and draw intelligent conclusions from them," says Palmer. "It learns the normal 'pattern of life' for every user and device on the network, and uses this evolving understanding to detect the earliest indicators of emerging cyber-threats."

"Critically, self-learning technology only gets better with time - the more normal activity it sees, the more refined and nuanced its understanding becomes. This report should be seen as a wake-up call for organizations to adopt AI defence now, so that they can be confident that they will be able to detect and fight back against even the most unpredictable attacks."
