Should we be worried about robot bug hunters?


Security researchers: you're the latest target on the list of jobs lost to robots.

There's no need to panic just yet: artificial intelligence systems are only just getting started as bug hunters, making their debut at a competition sponsored by Darpa and held at the DEF CON hacking conference.

The Cyber Grand Challenge handed seven teams a thousand-core server to run their programmes, and gave them eight hours to find and patch as many flaws as possible.

In total, the systems found 650 flaws and built 420 patches to address them. A team from Carnegie Mellon University was named the winner, with its "Mayhem" system picking up the $2 million prize, which will go towards further work on the system.

"A spark was lit today," said Darpa program director Mike Walker, according to Reuters. "We have proven that autonomy is possible."

The programmes were not pulled together in just eight hours, though. Qualifying events had been held over the previous three years, with teams drawn from security firms, universities and other organisations with relevant expertise. The finalists were each handed $750,000 to help them prepare for the DEF CON challenge.

The aim of the competition is to build AI that can speed up the security industry's response to flaws and attacks. Before the event, Walker said the average vulnerability goes unnoticed for 312 days - and then it still needs to be patched.

Darpa wants to bring that down to minutes. "Cyber Grand Challenge is about bringing autonomy to the cyber domain," he said. "What we hope to see is proof that the entire security life cycle can be automated."

The next challenge for the security systems may be tougher still: they go up against human contestants in a competition held later today.

Are security companies worried?

Bug hunters are not actually about to lose their jobs - in fact, security researchers already use automation to track malware.

"F-Secure is using automated analysis of malware extensively," said security expert Mikael Albrect. "We receive about 500,000 suspicious files, and an equal amount of suspicious URLs, every day. Every one need to be analysed and classified as safe or malicious (or flagged for examination by a human if the machine isn't certain)."

"It became obvious some 15 years ago that this work can't be done manually by researchers, so extensive investment in machine learning and AI was needed," he added. "Our primary reason to utilise automated methods is to cope with the volume and need to respond rapidly to new threats."

He added that such automation is not a job threat. "Even if people talk about AI, the systems are not yet intelligent in the sense that they would understand the malware threat and be able to adapt to more fundamental changes in the threat landscape," Albrecht said.

Luis Corrons Granel, a security researcher with Panda, agreed there's little concern about AI taking jobs in the industry - instead, it's a good way to extend researchers' capabilities. "Many vulnerabilities are found by security researchers using a number of automated tools," he said. "With a fully automated approach, more and more vulnerabilities can be found and fixed. Security researchers will still have to find new vulnerabilities [and] focus on more complex ones."
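For a sense of what those automated tools do at their simplest, the sketch below is a toy mutation fuzzer: it randomly corrupts a seed input, feeds it to a target parser and saves any input that causes a crash for a human to triage. This is a hypothetical Python example with a deliberately buggy stand-in parser, not any vendor's real tooling - production fuzzers and symbolic-execution engines are far more sophisticated.

```python
# Toy mutation fuzzer, for illustration only. The target function is a
# deliberately buggy stand-in, not real software.
import random

def buggy_parser(data: bytes) -> None:
    # Hypothetical target: crashes on one specific malformed header.
    if len(data) > 3 and data[:2] == b"HD" and data[3] == 0xFF:
        raise ValueError("parser crash: malformed header")

def mutate(seed: bytes) -> bytes:
    # Flip a few random bytes of the seed input.
    out = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

def fuzz(seed: bytes, iterations: int = 100_000) -> list[bytes]:
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            buggy_parser(candidate)
        except Exception:
            crashes.append(candidate)  # save the crashing input for human triage
    return crashes

if __name__ == "__main__":
    found = fuzz(b"HD\x00\x00payload")
    print(f"found {len(found)} crashing inputs")
```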

That said, he admitted that without automation the industry would have needed many more security researchers over the past 20 years. "Back then [twenty years ago], we had a number of people reverse engineering malicious code and writing signatures," he said. "If we were still working in the same way, we'd need hundreds of thousands of engineers. It turns out that we have created tools to do most of the job, and these tools have been designed by our security researchers."

That means the bulk of the work is already done by automated systems directed by security experts. "If you look at it from the perspective that, had we not done this, we would have hired hundreds of thousands of people, you may think that jobs have been destroyed - but, realistically, no company could afford that, unless customers are willing to pay with their soul for a security solution," he added.

Safety protocol needed?

There are concerns about AI security automation beyond jobs for bug hunters: the Electronic Frontier Foundation (EFF) warned that the project may need a "safety protocol".

"We think that this initiative by DARPA is very cool, very innovative, and could have been a little dangerous," said a blog post by Nate Cardozo, Peter Eckersley and Jeremy Gillula, noting the challenge is "about building automated systems that can break into computers!"

While they praised the idea for extending security capabilities, they warned that the same technology could also be used by criminals. "We are going to start seeing tools that don't just identify vulnerabilities, but automatically write and launch exploits for them," they wrote.

That does not mean the project should be halted, simply that researchers should be aware of the potential risks.

"At the moment, autonomous computer security research is still the purview of a small community of extremely experienced and intelligent researchers," they added. "Until our civilization's cybersecurity systems aren't quite so fragile, we believe it is the moral and ethical responsibility of our community to think through the risks that come with the technology they develop, as well as how to mitigate those risks, before it falls into the wrong hands."