Researchers show how hackers can easily 'clog up' neural networks

The Maryland Cybersecurity Center warns deep neural networks can be tricked by adding more 'noise' to their inputs

[Image: A female monkey and her child classified by AI]

Researchers have discovered a new method of attack against AI systems that aims to clog up a neural network and slow down its processing, similar in effect to a denial-of-service attack.

In a paper being presented at the International Conference on Learning Representations, researchers from the Maryland Cybersecurity Center have outlined how deep neural networks can be tricked by adding more "noise" to their inputs, as reported by MIT Technology Review.

The attack specifically targets input-adaptive multi-exit neural networks, an increasingly popular architecture designed to reduce energy consumption: an image passes through the network's layers one at a time, and processing stops as soon as an early exit is confident enough about what the image contains.

In a traditional neural network, by contrast, the image passes through every layer before a conclusion is drawn, which often makes it unsuitable for smart devices and other hardware that needs quick answers at low energy cost.
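The early-exit idea can be sketched in a few lines of PyTorch. Everything in this example is an illustrative assumption rather than a detail from the paper: the three-stage architecture, the layer sizes, the 0.9 confidence threshold, and the name MultiExitNet are all invented for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    """Toy input-adaptive multi-exit classifier (illustrative only)."""

    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        # Three convolutional "stages"; each one feeds its own exit head.
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(8)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(4)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1)),
        ])
        self.exits = nn.ModuleList([
            nn.Linear(16 * 8 * 8, num_classes),
            nn.Linear(32 * 4 * 4, num_classes),
            nn.Linear(64, num_classes),
        ])

    def forward(self, x):
        # Process one stage at a time (batch size 1 assumed for simplicity)
        # and stop as soon as an exit head is confident enough.
        for i, (stage, exit_head) in enumerate(zip(self.stages, self.exits)):
            x = stage(x)
            logits = exit_head(x.flatten(1))
            confidence = F.softmax(logits, dim=1).max().item()
            if confidence >= self.threshold:
                return logits, i  # early exit: deeper layers never run
        return logits, len(self.stages) - 1  # no exit fired; used every layer
```

Because each exit head is cheap compared with the convolutional stages it lets the network skip, an input that leaves at the first exit costs only a fraction of a full forward pass.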

The researchers found that simply making images harder to read, by adding slight background noise, poor lighting, or small objects that obscure the main subject, causes the input-adaptive model to treat them as more difficult to analyse and assign them more computational resources as a result.
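The mechanism can be demonstrated, in a deliberately simplified form, with the hypothetical MultiExitNet from the sketch above. Here naive Gaussian noise stands in for the carefully optimised perturbations the researchers actually craft; with a trained model, a clean image would typically leave at an early exit, while a perturbed one falls through to deeper, more expensive stages.

```python
import torch  # continues the MultiExitNet sketch above

net = MultiExitNet(threshold=0.9).eval()
clean = torch.randn(1, 3, 32, 32)              # stand-in for a clean image
noisy = clean + 0.5 * torch.randn_like(clean)  # crude "noise" perturbation

with torch.no_grad():
    _, clean_exit = net(clean)
    _, noisy_exit = net(noisy)

# Each later exit means more stages computed, i.e. more time and energy.
print(f"clean image exited at stage {clean_exit}, "
      f"noisy image at stage {noisy_exit}")
```

An attacker who can push every input past the early exits effectively turns an energy-saving design into an energy-draining one.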


The researchers first experimented with a scenario in which hackers had full information about the neural network, and found the attack could drive its energy consumption to the maximum. Even when the simulation assumed attackers had only limited information about the network, the attack could still slow down processing and increase energy consumption by as much as 80%.

What's more, these attacks transfer well across different types of neural networks, the researchers warned: an attack crafted for one image classification system is enough to disrupt many others.

Professor Tudor Dumitraş, the project's lead researcher, said that more work was needed to understand the extent to which this kind of threat could cause damage.

"What's important to me is to bring to people's attention the fact that this is a new threat model, and these kinds of attacks can be done," Dumitraş said.
