Systems 'attacking each other by accident' the greatest risk of military AI

New research warns against novel attacks such as 'data poisoning' and hosting AI apps on insecure machines


Militaries across the world should urgently work to avoid the "unanticipated interaction" between individual AI systems, an electronic rights organisation has warned.

To avoid the catastrophic risks of failed AI deployment, nations should foster international agreements and prioritise the development of new technology outside of the 'kill chain', according to research published by the Electronic Frontier Foundation (EFF).

Targeted at the defence community, the white paper titled 'The Cautious Path to Strategic Advantage' also outlined key danger areas including the fallibility of machine learning, the vulnerability of AI systems to hacking, and the unpredictability of reinforcement learning systems.

"We are at a critical juncture," the paper's author Peter Eckersley wrote.


"AI technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation.

"The U.S. Department of Defense and its counterparts have an opportunity to show leadership and move AI technologies in a direction that improves our odds of security, peace, and stability in the long run - or they could quickly push us in the opposite direction."

The decision to publish a white paper on the potential dangers of military AI came in the wake of the 'Project Maven' furore that struck Google earlier this year. After mounting pressure from thousands of employees, Google withdrew from a controversial Pentagon-led project in which its technology was used to enhance drone performance.

The company subsequently published an ethical code for AI, and promised its technology would never be used to develop weapons.

The main concern highlighted in the research was that the neural networks underpinning machine learning systems are likely to face novel attacks in future, such as 'data poisoning', and that far more research is needed to understand how to identify, and defend against, such activity.
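The paper itself contains no code, but the core idea of a label-flipping 'data poisoning' attack can be sketched in a deliberately toy example. The classifier, data and labels below are illustrative assumptions, not taken from the EFF paper: an attacker who can tamper with training data relabels hostile examples so the trained model learns the wrong decision rule.

```python
def train_centroids(data):
    """Toy nearest-centroid 'model': average the readings seen per label."""
    sums, counts = {}, {}
    for reading, label in data:
        sums[label] = sums.get(label, 0.0) + reading
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, reading):
    """Classify a reading by its nearest class centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - reading))

# Clean training set: 'friend' signatures cluster near 1.0, 'threat' near 9.0.
clean = [(0.9, "friend"), (1.1, "friend"), (8.9, "threat"), (9.2, "threat")]

# Poisoned copy: the attacker relabels the genuine threat examples as
# 'friend' and injects an implausible outlier to anchor the 'threat' class.
poisoned = [(0.9, "friend"), (1.1, "friend"),
            (8.9, "friend"), (9.2, "friend"),
            (20.0, "threat")]

# A model trained on the clean data correctly flags a 9.0 reading as a
# threat; the same model trained on the poisoned data calls it a friend.
clean_model = train_centroids(clean)
poisoned_model = train_centroids(poisoned)
```

The point of the sketch is that the poisoning happens upstream, at training time, so the deployed model behaves normally on most inputs and the corruption is hard to spot until a critical misclassification occurs.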

The white paper also warned that, because the balance of power in cybersecurity favours attackers over defenders, AI applications may end up running on insecure platforms. This heightens the risk of AI systems, such as autonomous weapons, being manipulated by malicious actors.


But the author's greatest concern was the prospect of failures in systems already deployed - autonomous weapons or smart command and control centres, for instance - sparking fresh conflicts, or escalating existing ones, by accident.

Eckersley warned that cascading failures in AI technology used for target selection, fire control, or responses to incoming aircraft and missiles may lead to accidental engagements between automated systems.

Among the paper's recommendations were placing a higher priority on defensive cybersecurity, and increasing funding for AI safety research, so that any new risks arising from deploying such technology can be fully grappled with.

"AI has been the subject of incredible hype in recent years," Eckersley continued.


"Although the field is making progress, current machine learning methods lack robustness and predictability and are subject to a complex set of adversarial attacks, problems with controllability, and a tendency to cause unintended consequences.

"The present moment is pivotal: in the next few years either the defense community will figure out how to contribute to the complex problem of building safe and controllable AI systems, or buy into the hype and build AI into vulnerable systems and processes that we may come to regret in decades to come."
