Systems 'attacking each other by accident' the greatest risk of military AI
New research warns against novel attacks such as 'data poisoning' and hosting AI apps on insecure machines
Militaries across the world should urgently work to avoid "unanticipated interactions" between individual AI systems, an electronic rights organisation has warned.
To avoid the catastrophic risks of failed AI deployment, nations should foster international agreements and prioritise the development of new technology outside of the 'kill chain', according to research published by the Electronic Frontier Foundation (EFF).
Targeted at the defence community, the white paper titled 'The Cautious Path to Strategic Advantage' also outlined key danger areas including the fallibility of machine learning, the vulnerability of AI systems to hacking, and the unpredictability of reinforcement learning systems.
"We are at a critical juncture," the paper's author Peter Eckersley wrote.
"AI technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation.
"The U.S. Department of Defense and its counterparts have an opportunity to show leadership and move AI technologies in a direction that improves our odds of security, peace, and stability in the long run - or they could quickly push us in the opposite direction."
The decision to publish a white paper on the potential dangers of military AI came in the wake of the 'Project Maven' furore that struck Google earlier this year. After mounting pressure from thousands of employees, Google withdrew from a controversial Pentagon-led project in which its technology was used to enhance drone performance.
The company subsequently published an ethical code for AI, and promised its technology would never be used to develop weapons.
Among the concerns highlighted in the research was the vulnerability of the neural networks underpinning machine learning systems to novel attacks, such as 'data poisoning', with far more research needed to fully understand how to identify, and defend against, such activity.
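To see why data poisoning worries researchers, consider a toy sketch (a hypothetical illustration, not drawn from the EFF paper): an attacker who can slip a handful of mislabelled points into a training set can drag a simple nearest-centroid classifier's decision boundary far enough to flip its verdict on a chosen input.

```python
# Toy illustration of 'data poisoning' against a nearest-centroid
# classifier. All data, labels, and thresholds here are invented
# for demonstration purposes.

def centroid(points):
    """Mean of a list of 1-D readings."""
    return sum(points) / len(points)

def train(dataset):
    """Compute one centroid per label from (value, label) pairs."""
    by_label = {}
    for value, label in dataset:
        by_label.setdefault(label, []).append(value)
    return {label: centroid(vals) for label, vals in by_label.items()}

def predict(centroids, x):
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Clean training data: "safe" sensor readings cluster near 0,
# "threat" readings cluster near 10.
clean = [(0.0, "safe"), (1.0, "safe"), (2.0, "safe"),
         (9.0, "threat"), (10.0, "threat"), (11.0, "threat")]

print(predict(train(clean), 6.0))     # → threat (6.0 is nearer 10 than 1)

# Poisoning: the attacker injects a few high readings mislabelled
# as "safe", dragging the "safe" centroid toward the threat region.
poisoned = clean + [(8.0, "safe"), (8.5, "safe"), (9.0, "safe")]
model = train(poisoned)

print(predict(model, 6.0))            # → safe: the same reading now passes
```

The point of the sketch is that only three corrupted records were needed; real attacks on deep networks are subtler, but exploit the same dependence of the learned model on the integrity of its training data.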
The white paper also warned that, because the balance of power in cybersecurity currently favours attackers over defenders, AI applications may end up running on insecure platforms, heightening the risk that AI systems, such as autonomous weapons, could be manipulated by malicious actors.
But the author's greatest concern was the prospect of failures in systems already deployed - autonomous weapons or smart command and control centres, for instance - sparking fresh conflicts, or escalating existing ones, by accident.
Eckersley warned that cascading failures in AI technology used by systems for target selection, fire control, or response to incoming aircraft and missiles, may lead to accidental engagements between automated systems.
The paper's recommendations included placing a higher priority on defensive cybersecurity, and increasing funding for AI research so that any new risks arising from deploying such technology can be fully grappled with.
"AI has been the subject of incredible hype in recent years," Eckersley continued.
"Although the field is making progress, current machine learning methods lack robustness and predictability and are subject to a complex set of adversarial attacks, problems with controllability, and a tendency to cause unintended consequences.
"The present moment is pivotal: in the next few years either the defense community will figure out how to contribute to the complex problem of building safe and controllable AI systems, or buy into the hype and build AI into vulnerable systems and processes that we may come to regret in decades to come."