Microsoft launches open source tool Counterfit to prevent AI hacking
Businesses can use preloaded attack algorithms to test their machine learning systems
Microsoft has launched an open source tool to help developers assess the security of their machine learning systems.
Microsoft’s red team has used Counterfit to test the company’s own AI models, and the wider business is also exploring using the tool in AI development.
Anyone can download the tool and deploy it via Azure Shell, to run in-browser, or locally in an Anaconda Python environment.
It can assess AI models hosted in various cloud environments, on-premises, or at the edge. Microsoft also highlights the tool’s flexibility: it is agnostic to AI models and supports a variety of data types, including text, images, and generic input.
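For the local option, setup is broadly along these lines; the exact commands come from the project’s GitHub README and may change between versions, so treat this as a sketch rather than authoritative instructions:

```shell
# Sketch of a local install, based on the Counterfit README at the time
# of writing (check https://github.com/Azure/counterfit for current steps)
git clone https://github.com/Azure/counterfit.git
cd counterfit
conda create --yes -n counterfit python=3.7
conda activate counterfit
pip install -r requirements.txt
python counterfit.py   # launches the interactive Counterfit terminal
```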
“Our tool makes published attack algorithms accessible to the security community and helps to provide an extensible interface from which to build, manage, and launch attacks on AI models,” Microsoft said.
“This tool is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems.”
Security professionals can deploy Counterfit in three key ways: pen testing and red teaming AI systems, scanning AI systems for vulnerabilities, and logging attacks against AI models.
The tool comes preloaded with attack algorithms, while security professionals can also use the built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools for testing purposes.
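A typical interactive session looks roughly like the following; the command names reflect the project’s documentation, but the target name (`creditfraud` is one of the bundled demo targets) and attack name are illustrative and may differ in your version:

```shell
# Illustrative Counterfit terminal session (names approximate the docs)
counterfit> list targets          # show built-in demo targets
counterfit> interact creditfraud  # select a target model to attack
creditfraud> list attacks         # attacks suited to this data type
creditfraud> use HopSkipJump      # pick a preloaded attack algorithm
creditfraud> run                  # launch the attack against the target
```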
Businesses can also scan AI systems with relevant attacks repeatedly to establish baselines, then re-run the scans as vulnerabilities are addressed, helping to measure ongoing progress.
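The baseline idea can be sketched in plain Python. This is a conceptual illustration, not Counterfit’s actual API: all names here (`run_scan`, `compare_to_baseline`, the toy keyword detector) are hypothetical, and attack “success” is modelled as a payload evading detection.

```python
# Conceptual sketch of a repeatable baseline scan (hypothetical names,
# NOT Counterfit's API): run a fixed set of attack payloads against a
# model, record success rates, and diff runs over time.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ScanResult:
    attack_name: str
    attempts: int
    successes: int

    @property
    def success_rate(self) -> float:
        return self.successes / self.attempts if self.attempts else 0.0

def run_scan(model: Callable[[str], str],
             attacks: Dict[str, List[str]]) -> List[ScanResult]:
    """A payload 'succeeds' if it evades detection, i.e. the model
    labels it benign."""
    return [ScanResult(name, len(payloads),
                       sum(1 for p in payloads if model(p) == "benign"))
            for name, payloads in attacks.items()]

def compare_to_baseline(baseline: List[ScanResult],
                        current: List[ScanResult]) -> Dict[str, float]:
    """Change in success rate per attack; negative values mean the
    system got harder to attack since the baseline run."""
    base = {r.attack_name: r.success_rate for r in baseline}
    return {r.attack_name: r.success_rate - base.get(r.attack_name, 0.0)
            for r in current}

# Toy detector: flags any input containing the literal word "attack".
toy_model = lambda text: "malicious" if "attack" in text else "benign"

attacks = {"keyword-obfuscation": ["attack at dawn",   # detected
                                   "att4ck at dawn",   # evades
                                   "a t t a c k"]}     # evades
baseline = run_scan(toy_model, attacks)
```

Re-running `run_scan` after hardening the model and feeding both result sets to `compare_to_baseline` gives a per-attack progress measure, which is the essence of the continuous-scanning workflow described above.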
Microsoft developed the tool out of a need to assess its own systems for vulnerabilities. Counterfit began life as a handful of attack scripts written to target individual AI models, and gradually evolved into an automation tool to attack multiple systems at scale.
The company says it has engaged a variety of partners, customers, and government entities to test the tool against machine learning models in their own environments.