Hackers could 'weaponise AI', with devastating consequences

Advancements in artificial intelligence could be exploited by criminals to launch automated cyber attacks, manipulate public opinion through fake videos, or weaponise commercial drones, researchers warned yesterday.

A joint report by policy experts at Cambridge, Oxford and Yale universities, alongside military experts, claimed that AI-enabled attacks by rogue states and criminal groups pose an imminent threat to both physical and political security.

The 98-page 'Malicious Use of Artificial Intelligence' report also calls on policymakers and researchers to examine and prepare for the possibility of the technology being misused, and to take greater care when publishing notes on their findings.

Despite the potential benefits of AI to society, there are concerns that the technology is being developed with little regard for public safety, particularly when it comes to machine learning and training computers to be as intelligent as humans, according to the report.

"We all agree there are a lot of positive applications of AI," said Miles Brundage, a research fellow at Oxford's Future of Humanity Institute, speaking to Reuters. "There was a gap in the literature around the issue of malicious use."

The report, which examined academic research into the development of AI, identified digital, physical and political security as the areas most vulnerable to criminal activity.

One future threat highlighted is the use of AI to launch and coordinate cyber attacks at a scale and sophistication that are currently unfeasible. Systems trained with machine learning will be capable of identifying the weakest targets, automatically evading detection, and adapting to efforts to shore up defences in order to sustain an attack, the report added.

"More sophisticated AI hacking tools may exhibit much better performance both compared to what has historically been possible and, ultimately (though perhaps not for some time), compared to humans," the paper stated. One mooted scenario could be the use of an army of computer systems imitating human-like click patterns and site navigation in order to overwhelm a service in the manner of a DDoS attack. This is already possible to a large extent through botnets led by, for instance, Mirai malware.

When it comes to physical security, there's a growing concern that off-the-shelf commercial drones could be turned into makeshift missiles, or engineered to deliver explosives or hazardous material to a target.

"Distributed networks of autonomous robotic systems, cooperating at machine speed, provide ubiquitous surveillance to monitor large areas and groups and execute rapid, coordinated attacks," the paper said.

The report follows the findings of a workshop in 2017, in which researchers predicted that AI would be used to create realistic video or images of famous figures in order to spread propaganda.

Evidence of this has already been seen in the wild, most notably the rise of so-called 'deepfakes': the use of machine learning to superimpose celebrities' faces onto pornographic material.

There's also the likelihood that the spread of fake news across social media will soon be coordinated by AI, at a far greater scale than the campaigns alleged to have operated during the 2016 US presidential election.

The report also questions whether researchers have a responsibility to ensure their findings are tested for security holes before any information is presented to the wider community.

The report calls for four changes to the way AI research is conducted, starting with greater collaboration between policymakers and technical researchers to investigate, prevent and mitigate potential malicious uses of AI. Second, it argues that more focus should be placed on the harmful applications of AI when setting research priorities.

It also recommends that researchers draw up a set of best practices for handling AI security threats, and that the range of voices and experts involved in future AI discussions be expanded.

Dave Palmer, director of technology at cyber security firm Darktrace, believes that although the report highlights a real threat, the defenders have the "home turf advantage".

"The value of AI for cyber defence lies in its ability to gather lots of subtle pieces of information and draw intelligent conclusions from them," says Palmer. "It learns the normal 'pattern of life' for every user and device on the network, and uses this evolving understanding to detect the earliest indicators of emerging cyber-threats."

"Critically, self-learning technology only gets better with time - the more normal activity it sees, the more refined and nuanced its understanding becomes. This report should be seen as a wake-up call for organizations to adopt AI defence now, so that they can be confident that they will be able to detect and fight back against even the most unpredictable attacks."

Dale Walker

Dale Walker is the Managing Editor of ITPro, and its sibling sites CloudPro and ChannelPro. Dale has a keen interest in IT regulation, data protection, and cyber security. He spent a number of years reporting for ITPro from numerous domestic and international events held by companies including IBM, Red Hat and Google, and has regularly covered Microsoft's yearly showcases, including Ignite.