AI is too risky for hackers, says former GCHQ boss

Robert Hannigan suggests that the technology isn't worth the trouble for state-sponsored attackers

The former head of GCHQ, Robert Hannigan

Robert Hannigan, the former head of GCHQ, has said that there is very little evidence of artificial intelligence (AI) being used in cyber crime or terrorism.

Hannigan was speaking at an event hosted by the London Office for Rapid Cybersecurity Advancement (LORCA), where he delivered a keynote on the so-called 'myths' and 'buzzwords' around AI in cyber security.

In his opinion, while AI has transformed many aspects of modern life, it is yet to prove all that useful to state-sponsored hackers. He suggested there were not enough benefits to outweigh the "trouble" of investing in the technology for malicious purposes.  

"The cyber industry is great at scare stories, and I've read lots and lots of scare stories about criminal groups and even terrorists using AI, and to be honest, I've seen virtually no evidence for this at all, with a couple of exceptions," Hannigan said. "I would say that I think it's again a confusion with automation."

He added that AI would likely form part of a hacker's arsenal in the near future, but that, for now, it simply presented too much "risk". As an example, he cited the SolarWinds hack, which he said was sophisticated but also appeared to be "hand-curated".

"You can understand why the attackers might have wanted to do that, in order to hide themselves," Hannigan said. "And doing it at the scale, and going to the trouble of doing it through AI would probably be at high risk for them."

From there, the discussion turned from AI in cyber security to the security of AI itself, with Hannigan expressing concerns about the latter. He said the issue was "high on everyone's list" because technologies such as driverless cars and automated medical diagnostics were rapidly becoming the norm.

"The data is a huge vulnerability, and there have been lots of studies on so-called data poisoning, adversarial models, which basically say, we can trick the machine into misdiagnosing, for example, an MIT study on chest X rays," he said. 

"And if you have a malicious actor, or even an accidental actor, it is perfectly possible to see how data poisoning or incorrectly categorised data can lead the machine to do something completely wrong with potentially very serious consequences."
