Microsoft AI can detect security flaws with 99% accuracy

Developers can use the tool to establish whether bugs are security-related and to assign a severity rating

Microsoft has released an artificial intelligence (AI)-powered tool to help developers categorise bugs and features that need to be addressed in forthcoming releases.

The software giant’s machine learning system classifies bugs as security or non-security with 99% accuracy, and determines whether a bug is critical or non-critical with 97% accuracy.

Aiming to build a system as accurate as a security expert, Microsoft trained its machine learning model on bugs labelled as security or non-security. Once trained, the model could then label data that had not been pre-classified.
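
Microsoft has not detailed the model’s internals in this article, but the approach described here – train on expert-labelled examples, then classify bugs that were never pre-labelled – is standard supervised text classification. A minimal sketch, assuming scikit-learn with TF-IDF features and logistic regression purely as illustrative stand-ins rather than Microsoft’s actual implementation, might look like this:

```python
# Illustrative sketch only: train a text classifier on bug titles that have
# already been labelled by experts, then label previously unclassified bugs.
# TF-IDF + logistic regression are assumed stand-ins, not Microsoft's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Bug titles labelled by security experts (1 = security, 0 = non-security)
labelled_titles = [
    "Buffer overflow when parsing network packet header",
    "SQL injection possible in report export endpoint",
    "Button misaligned on settings page in dark mode",
    "Typo in onboarding email template",
]
labels = [1, 1, 0, 0]

classifier = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
classifier.fit(labelled_titles, labels)

# Once trained, the model can label bugs that were never pre-classified
unlabelled_titles = [
    "Crash caused by unvalidated user-supplied file path",
    "Dashboard chart colours hard to distinguish",
]
for title, prediction in zip(unlabelled_titles, classifier.predict(unlabelled_titles)):
    print(f"{'security' if prediction == 1 else 'non-security'}: {title}")
```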

“Every day, software developers stare down a long list of features and bugs that need to be addressed,” said Microsoft’s senior security program manager Scott Christiansen, and data and applied scientist Mayana Pereira. 

“Security professionals try to help by using automated tools to prioritize security bugs, but too often, engineers waste time on false positives or miss a critical security vulnerability that has been misclassified.

“At Microsoft, 47,000 developers generate nearly 30 thousand bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning.”

Because the system needs to be as accurate as a security expert, security professionals approved the training data before it was fed into the machine learning model. Once the model was operational, they were brought back to evaluate it in production.

The project began with data science and the collection of all data types and sources to evaluate quality. Security experts were then brought in to review the data and confirm the labels assigned were correct. 

Data scientists then chose a modelling technique, trained the model, and evaluated performance. Finally, security experts evaluated the model in production by monitoring the average number of bugs and manually reviewing a random sample.
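
The article does not spell out how that evaluation works in practice, but the general pattern – compare the model’s predictions against expert labels on held-out bugs, then pull a random sample of production decisions for manual review – can be sketched roughly as follows, where all data, labels and bug IDs are made up for illustration:

```python
# Illustrative sketch: measure accuracy against security experts' labels on a
# held-out set, then draw a random sample of classified bugs for manual review.
import random
from sklearn.metrics import accuracy_score

# Held-out bugs with expert labels and the model's predictions (made-up data)
expert_labels     = ["security", "security", "non-security", "non-security", "security"]
model_predictions = ["security", "security", "non-security", "security", "security"]

print("held-out accuracy:", accuracy_score(expert_labels, model_predictions))

# In production, a random sample of classified bugs is pulled for expert review
bug_ids = ["BUG-101", "BUG-102", "BUG-103", "BUG-104", "BUG-105"]
for bug_id in random.sample(bug_ids, k=2):
    print("flag for manual expert review:", bug_id)
```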

The system uses a two-step machine learning process: it first learns to classify bugs as security or non-security, and then applies a severity rating to those flagged as security bugs.
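
A minimal sketch of that two-step shape, again assuming two independent scikit-learn text classifiers as illustrative stand-ins rather than Microsoft’s published method:

```python
# Illustrative sketch of the two-step shape: step one decides whether a bug is
# security-related at all; step two assigns a severity only to the bugs that
# step one flagged as security. Both classifiers are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1 training data: security (1) vs non-security (0)
step1_titles = [
    "Remote code execution via crafted archive file",
    "Cross-site scripting in comment field",
    "Spelling mistake in the about dialog",
    "Slow page load on the history tab",
]
step1_labels = [1, 1, 0, 0]
security_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
security_clf.fit(step1_titles, step1_labels)

# Step 2 training data: severity for bugs already known to be security-related
step2_titles = [
    "Remote code execution via crafted archive file",
    "Cross-site scripting in comment field",
    "Verbose error message leaks internal hostname",
    "Outdated TLS cipher still offered in config",
]
step2_labels = ["critical", "critical", "non-critical", "non-critical"]
severity_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
severity_clf.fit(step2_titles, step2_labels)

def triage(title: str) -> str:
    """Apply the two steps in sequence to a new bug title."""
    if security_clf.predict([title])[0] == 0:
        return "non-security"
    return "security / " + severity_clf.predict([title])[0]

print(triage("Heap corruption when decoding malformed image"))
print(triage("Tooltip text overlaps icon on small screens"))
```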

Given this level of accuracy, Microsoft believes it is now catching more security vulnerabilities before they are exploited in the wild.

Development teams can read the details in a published academic paper, and the machine learning methodology is set to be open-sourced on GitHub in the coming months.
