What is offensive AI and how do you protect against it?

Offensive AI is on the rise, and organisations need to put appropriate defences in place if they are to fend off attacks

Much of what we hear about artificial intelligence focuses on its use in business processes, for example its ability to analyse vast quantities of data at speeds we humans can’t manage, and so help us make better decisions. But AI is also increasingly being used as an attack tool, where it is often referred to as offensive AI.

It was always going to happen 

It’s often said that the ‘bad actors’ are one step ahead of those wanting to protect systems, and that they use all the tools they can get their hands on to achieve their goals. So it won’t be a surprise to hear they are using AI too, or that there was a certain inevitability about the emergence of offensive AI. 

As Bryan Betts, principal analyst at Freeform Dynamics tells IT Pro: “The use of smart tools to automate the attack process was inevitable. For instance, if a human attacker has to spend a lot of time trying different routes into a target network, adapting after each attempt and deciding what to try next, why not teach that process to a piece of software?”

It isn’t just speed that offensive AI delivers; it’s also flexibility. An offensive AI can attack many different targets at the same time, spreading its tentacles wide and giving bad actors far greater reach.

Humans can’t handle it alone

Humans can’t fight this kind of fast, broad and deep attack on their own. A Forrester report, The Emergence of Offensive AI, produced for Darktrace, found that 79% of firms said security threats had become faster over the past five years, and 86% said the volume of advanced security threats had increased over the same period.

As organisations digitise more of their work processes, the ‘attack surface’ grows and it becomes increasingly difficult for human surveillance to keep an eye on everything. The Forrester research found that 44% of organisations need more than three hours to discover an infection, fewer than 40% can remove a threat within three hours, and fewer than a quarter can return to business as usual in under three hours.

Offensive AI has the potential to push those statistics in the wrong direction, and the way to fight it is with AI that’s built to work in the organisation’s favour. This is known as defensive AI.

Fighting AI with AI

Just as the appearance of offensive AI was inevitable, so the development of defensive AI was always going to happen. Daulet Baimukashev, a data scientist at the Institute for Smart Systems and Artificial Intelligence (ISSAI) at Nazarbayev University in Kazakhstan, tells IT Pro: “Defensive AI can use machine learning methods to learn about the normal and anomalous behaviour of the system by analysing large inputs of data, and can figure out new types of attacks and continuously improve its accuracy.”
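To make that idea more concrete, the sketch below shows one common way to frame it in Python using scikit-learn’s IsolationForest: train a model on what “normal” activity looks like, then flag behaviour that deviates from it. The telemetry features, values and thresholds are illustrative assumptions for this example, not a description of any particular vendor’s detection logic.

```python
# A minimal anomaly-detection sketch: learn "normal" behaviour, flag deviations.
# Feature choices and numbers below are assumptions made for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend historical telemetry: [bytes sent per minute, login attempts per hour]
normal_traffic = rng.normal(loc=[500, 2], scale=[50, 1], size=(1000, 2))

# Fit the detector on what normal activity looks like
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# New observations: one routine, one resembling exfiltration plus brute force
new_events = np.array([
    [510, 2],      # looks like business as usual
    [9000, 40],    # unusually high volume and login activity
])

# predict() returns 1 for inliers (normal) and -1 for outliers (anomalous)
for event, label in zip(new_events, detector.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(f"event {event} -> {status}")
```

In practice a real system would draw on far richer telemetry and retrain continuously, which is the point Baimukashev makes about the model improving its accuracy over time.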

So, defensive AI can work on its own initiative not just to identify attacks, but also to repel them. Baimukashev explains: “Defensive AI can evolve to a system that autonomously tackles various cyber-attacks. This reduces the workload for human operations and increases the efficiency of dealing with large numbers of cyber-attacks.”

The point about reducing workload for humans is vital, given the scope, range and capabilities of offensive AI, and the need for speed in finding and disabling any successful attacks. Forrester’s report revealed that, as well as being concerned about the scale and speed of offensive AI attacks, 66% of cyber security decision makers felt offensive AI could carry out attacks that no human could imagine. And if you can’t imagine an attack, you can’t prepare for it.

Keeping the humans in the loop

Despite the clear need to automate defence, and the ability of defensive AI systems to find and disable offensive AI attacks, Betts tells IT Pro that humans will always have a role to play: “I suspect the key for the defenders will be how well they can keep skilled humans in the loop, letting the machines deal with the data sifting and the routine fixes, and adapting to attacks via defensive upgrades, while the humans monitor the AI's decision-making and help build its learning.”

Like many other implementations of AI, then, defensive AI does much of the heavy lifting: it takes some actions autonomously, learns as it goes along, reports back to humans, and helps people – and organisations – achieve their goals.
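One hedged way to picture that division of labour is the triage sketch below: routine, well-understood alerts are remediated automatically, while anything novel or high-risk is escalated to a human analyst. The risk scores, thresholds and actions are assumptions made for the example, not a reference implementation of any product.

```python
# Illustrative human-in-the-loop triage: the machine handles routine fixes,
# humans review anything novel or high-risk. All thresholds and actions here
# are assumptions for the sake of the example.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    risk_score: float   # 0.0 (benign) to 1.0 (critical), e.g. from a detection model
    seen_before: bool   # whether this pattern matches known attack behaviour

AUTO_REMEDIATE_THRESHOLD = 0.7  # assumed cut-off for autonomous action

def triage(alert: Alert) -> str:
    """Decide whether the AI acts on its own or hands the alert to a human."""
    if alert.seen_before and alert.risk_score < AUTO_REMEDIATE_THRESHOLD:
        # Routine and well understood: block automatically, log for later review
        return f"auto-remediated: blocked {alert.source_ip}"
    # Novel or high-risk: keep the human in the loop
    return f"escalated to analyst: {alert.source_ip} (risk {alert.risk_score:.2f})"

if __name__ == "__main__":
    alerts = [
        Alert("203.0.113.5", risk_score=0.3, seen_before=True),
        Alert("198.51.100.9", risk_score=0.9, seen_before=False),
    ]
    for a in alerts:
        print(triage(a))
```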
