Scientists build drones capable of detecting violence in crowds

But the algorithm gets less accurate the more people it tries to track

Scientists have trained drones to recognise violent behaviour in crowds using AI.

In a paper called Eye in the Sky, researchers from Cambridge University and India's technology and science institutes detailed how they fed an algorithm videos of human poses to help their camera-fitted drones detect people committing violent acts.

The researchers claim the system boasts a 94% accuracy rate at identifying violent poses, and that it works in three steps: first, the AI detects humans in aerial images; then a system called "ScatterNet Hybrid Deep Learning" estimates the pose of each detected human; finally, the orientations of the limbs in the estimated pose are numbered and joined up like a coloured skeleton, so that each individual's activity can be classified.
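In rough outline, that pipeline could be sketched as below. This is a minimal illustration, not the researchers' code: the detect_humans and estimate_pose stubs stand in for the detection network and the SHDL pose estimator mentioned in the paper, and the 11-point skeleton and limb list are assumptions made for the example.

```python
# Hedged sketch of the three-step pipeline: detect humans, estimate each
# pose, then turn limb orientations into a feature vector for a classifier.
import numpy as np

# Hypothetical limb list: pairs of keypoint indices joined into a skeleton.
LIMBS = [(0, 1),            # head -> neck
         (1, 2), (2, 3),    # neck -> right shoulder -> right elbow
         (1, 4), (4, 5),    # neck -> left shoulder -> left elbow
         (1, 6),            # neck -> pelvis
         (6, 7), (7, 8),    # pelvis -> right knee -> right ankle
         (6, 9), (9, 10)]   # pelvis -> left knee -> left ankle

def detect_humans(frame: np.ndarray) -> list[np.ndarray]:
    """Stand-in for step 1 (the detection network): one crop per person."""
    return [frame]  # placeholder: treat the whole frame as one detection

def estimate_pose(crop: np.ndarray) -> np.ndarray:
    """Stand-in for step 2 (SHDL): (num_keypoints, 2) pixel coordinates."""
    rng = np.random.default_rng(0)
    return rng.uniform(0, crop.shape[0], size=(11, 2))  # dummy keypoints

def orientation_vector(keypoints: np.ndarray) -> np.ndarray:
    """Step 3: unit direction of every limb, flattened into one feature
    vector that a downstream classifier can score."""
    vecs = keypoints[[b for _, b in LIMBS]] - keypoints[[a for a, _ in LIMBS]]
    lengths = np.linalg.norm(vecs, axis=1, keepdims=True)
    return (vecs / np.maximum(lengths, 1e-8)).ravel()

frame = np.zeros((256, 256, 3))
features = [orientation_vector(estimate_pose(p)) for p in detect_humans(frame)]
print(len(features), features[0].shape)  # 1 person, 20-dim feature vector
```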

The algorithm is trained to match five poses the researchers deem violent: strangling, punching, kicking, shooting and stabbing.
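The final step uses an SVM, which the researchers refer to in the quote below. A hedged sketch of how the limb-orientation features might be classified into those five labels, plus a catch-all "neutral" class, follows; the synthetic training data is purely illustrative, standing in for the volunteer footage described next.

```python
# Toy classification step: an SVM over limb-orientation features.
# The five violent labels come from the article; the random data is
# synthetic, only to make the example runnable.
import numpy as np
from sklearn.svm import SVC

LABELS = ["strangling", "punching", "kicking", "shooting", "stabbing", "neutral"]

rng = np.random.default_rng(42)
n_per_class, n_features = 50, 20          # 20 matches the sketch above
X = rng.normal(size=(n_per_class * len(LABELS), n_features))
y = np.repeat(np.arange(len(LABELS)), n_per_class)
X += y[:, None] * 0.5   # shift each class so there is something to learn

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(X[:1])
print(LABELS[pred[0]])  # -> likely "strangling" on this toy data
```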

Volunteers acted out the poses to train the AI, but they were widely spaced apart and used exaggerated movements when staging attacks. The report explains that the larger the crowd, and the more violent individuals within it, the less accurate the AI becomes.

"The accuracy of the Drone Surveillance System (DSS) decreases with the increase in the number of humans in the aerial image. This can be due to the inability of the FPN network to locate all the humans or the incapability of the SHDL network to estimate the pose of the humans accurately," the researchers wrote. "The incorrect pose can result in a wrong orientation vector which can lead the SVM to classify the activities incorrectly."

When one violent individual is in the crowd the system is 94.1% accurate; this drops to 90.6% with two, 88.3% with three, 87.8% with four and 84% with five violent individuals. By those figures, tracking violence in widespread incidents like the 2011 riots would currently be unworkable.

AI-powered facial recognition software is already in use among law enforcement bodies, despite fears that it is not accurate enough.

Both the Metropolitan Police and South Wales Police were accused of using dangerously inaccurate facial recognition technology by privacy campaign groups last month.

The groups revealed that the Met had a failure rate of 98% when using facial recognition to identify suspects at last year's Notting Hill Carnival, and that South Wales Police misidentified 2,400 innocent people and stored their information without their knowledge.
