Systems 'attacking each other by accident' the greatest risk of military AI

New research warns against novel attacks such as 'data poisoning' and hosting AI apps on insecure machines

Militaries across the world should urgently work to avoid the "unanticipated interaction" between individual AI systems, an electronic rights organisation has warned.

To avoid the catastrophic risks of failed AI deployment, nations should foster international agreements and prioritise the development of new technology outside of the 'kill chain', according to research published by the Electronic Frontier Foundation (EFF).

Targeted at the defence community, the white paper titled 'The Cautious Path to Strategic Advantage' also outlined key danger areas including the fallibility of machine learning, the vulnerability of AI systems to hacking, and the unpredictability of reinforcement learning systems.

"We are at a critical juncture," the paper's author Peter Eckersley wrote.

"AI technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation.

"The U.S. Department of Defense and its counterparts have an opportunity to show leadership and move AI technologies in a direction that improves our odds of security, peace, and stability in the long run - or they could quickly push us in the opposite direction."

The decision to publish a white paper on the potential dangers of military AI came in the wake of the 'Project Maven' furore that struck Google earlier this year. After mounting pressure from thousands of employees, Google withdrew from a controversial Pentagon-led project in which its technology was used to enhance drone performance.

The company subsequently published an ethical code for AI, and promised its technology would never be used to develop weapons.

A central concern highlighted in the research was the vulnerability of the neural networks underpinning machine learning systems to novel attacks, such as 'data poisoning', with far more research needed to fully understand how to identify, and defend against, such activity.
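The 'data poisoning' attack mentioned above can be illustrated with a toy example: an attacker who can inject mislabelled points into a model's training data can shift what the model learns. The classifier, data, and numbers below are purely illustrative assumptions, not taken from the EFF paper.

```python
# Toy illustration of "data poisoning": an attacker injects mislabelled
# training points to corrupt what a model learns. All data and numbers
# here are illustrative assumptions, not from the EFF paper.

def centroids(points, labels):
    """Mean position of the training points in each class."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Nearest-centroid classification: pick the closest class mean."""
    return min(cents, key=lambda y: abs(cents[y] - x))

def accuracy(cents, points, labels):
    hits = sum(predict(cents, x) == y for x, y in zip(points, labels))
    return hits / len(points)

# Two well-separated classes on a number line.
train_x = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
train_y = [0, 0, 0, 1, 1, 1]
test_x, test_y = [0.5, 1.5, 8.5, 9.5], [0, 0, 1, 1]

clean = centroids(train_x, train_y)

# The attacker poisons the training set with a few far-off outliers
# deliberately mislabelled as class 0, dragging its centroid away.
poison_x = train_x + [20.0, 20.0, 20.0]
poison_y = train_y + [0, 0, 0]
poisoned = centroids(poison_x, poison_y)

print(accuracy(clean, test_x, test_y))     # 1.0 on clean training data
print(accuracy(poisoned, test_x, test_y))  # drops to 0.5 after poisoning
```

Even this crude attack halves the toy model's test accuracy; real poisoning attacks against deep neural networks can be far subtler, which is why the paper calls for more research into detecting and defending against them.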

The white paper also warned that, because the balance of power in cybersecurity favours attackers over defenders, AI applications may end up running on insecure platforms. This heightens the risk of AI systems, such as autonomous weapons, being manipulated by malicious actors.

But the author's greatest concern was the prospect of failures in already-deployed systems - autonomous weapons or smart command-and-control centres, for instance - sparking fresh conflicts, or escalating existing ones, by accident.

Eckersley warned that cascading failures in the AI technology used for target selection, fire control, or response to incoming aircraft and missiles may lead to accidental engagements between automated systems.

The paper's recommendations included placing a higher priority on defensive cybersecurity, and increasing funding for AI safety research so that any new risks arising from deploying such technology can be fully grappled with.

"AI has been the subject of incredible hype in recent years," Eckersley continued.

"Although the field is making progress, current machine learning methods lack robustness and predictability and are subject to a complex set of adversarial attacks, problems with controllability, and a tendency to cause unintended consequences.

"The present moment is pivotal: in the next few years either the defense community will figure out how to contribute to the complex problem of building safe and controllable AI systems, or buy into the hype and build AI into vulnerable systems and processes that we may come to regret in decades to come."
