How to use machine learning and AI in cyber security

Cyber criminals are constantly seeking new ways to perpetrate a breach, but thanks to artificial intelligence (AI) and its subset machine learning, it's becoming possible to fight off these attacks automatically.

The secret is in machine learning's ability to monitor network traffic and learn what's normal within a system, using this information to flag up any suspicious activity. As the technology's name suggests, it's able to use the vast amounts of security data collected by businesses every day to become more effective over time.

At the moment, when the machine spots an anomaly, it sends an alert to a human, usually a security analyst, who decides whether action needs to be taken. But some machine learning systems are already able to respond themselves, for example by restricting access for certain users.
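
To make that loop concrete, here is a minimal sketch of the idea in Python, using scikit-learn's IsolationForest on made-up traffic features; the feature choice, contamination rate and alert handling are illustrative assumptions, not any vendor's implementation.

```python
# A minimal anomaly-detection sketch: learn "normal" traffic from historical
# flow records, then flag outliers for an analyst. The features, the 1%
# contamination rate, and the alert handling are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend history: columns are [bytes_sent, bytes_received, duration_seconds]
normal_traffic = rng.normal(loc=[500, 1500, 30], scale=[100, 300, 10],
                            size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

new_flows = np.array([
    [520, 1480, 28],     # business as usual
    [50000, 90, 600],    # huge upload, long-lived: possible exfiltration
])

for flow, label in zip(new_flows, model.predict(new_flows)):
    if label == -1:  # IsolationForest marks outliers as -1
        print(f"ALERT for analyst review: {flow}")  # a human decides
    else:
        print(f"Within learned baseline: {flow}")
```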

Is artificial intelligence replacing humans in security?

Talk of automation and artificial intelligence is typically associated with job losses, but for the security industry, machine learning is being deployed to complement existing expertise, rather than replace it.

These systems are not designed to work autonomously, but instead to handle the tasks that would otherwise distract human workers from doing their jobs effectively. For example, AI is great at crunching numbers, producing output that feeds into further analysis, a task for which humans are still very much needed.

However, according to Moonpig’s head of cyber security, Tash Norris, AI-driven data analysis can also provide other benefits. Speaking as part of the IT Pro Panel, she said that “analysts will naturally look for correlations they've seen before, or that they expect to see”.

“A true implementation of AI should be able to draw 'unbiased' correlations, bringing more value from the datasets you have.”

The panellists agreed that the most sensible place to deploy AI and machine learning is in the broad category of detection and response, spanning tooling such as security information and event management (SIEM), security orchestration, automation and response (SOAR), and endpoint detection and response (EDR). By automating these more manual processes, staff are freed up to tackle more dangerous threats, with AI acting as a force multiplier that extends the capabilities of a security team.
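
To picture what automating those detection and response workflows looks like, here is a hypothetical SOAR-style playbook skeleton in Python; every function in it is an invented stub rather than any vendor's API.

```python
# A hypothetical SOAR-style playbook skeleton. Every function here is an
# invented stub: in production these would call the SIEM, threat-intel
# feeds, and the EDR or identity platform.
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    source_ip: str
    severity: str  # "low" | "medium" | "high"

def enrich(alert: Alert) -> dict:
    # Stub: would query threat intelligence and asset inventories.
    return {"ip_reputation": "unknown", "asset_owner": alert.user}

def contain(alert: Alert) -> None:
    # Stub: would restrict the account via the identity provider.
    print(f"Restricting access for {alert.user} pending review")

def playbook(alert: Alert) -> None:
    context = enrich(alert)
    if alert.severity == "high":
        contain(alert)  # the automated, repeatable part
    # The judgement call stays with a human analyst.
    print(f"Queued for analyst: {alert} with context {context}")

playbook(Alert(user="jsmith", source_ip="203.0.113.7", severity="high"))
```

The pattern matches what the panellists describe: the repeatable steps run automatically, while the decision that carries risk stays queued for a person.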

Dave Palmer, director of technology at Darktrace, says: "Having machine learning allows companies to prioritise more effectively. We don't take human risk decision making out, but we allow tactical fire-fighting so security teams can do the work on their own timescales."

The Cambridge-based AI startup has recently collaborated with Microsoft to provide AI-enhanced cyber security to organisations transitioning to the cloud. The partnership focuses on security challenges in the “critical areas” of email security, data integration, and simplified, streamlined security workflows. As part of this, Antigena Email, which uses Darktrace’s artificial intelligence to stop the most advanced email threats, is hosted on Microsoft Azure and listed on the Azure Marketplace.

Dan Fein, Darktrace’s director of email security products, warned that the company sees “attackers impersonate CEOs or compromise vendors’ accounts to send out targeted, topical emails that look legitimate” on a daily basis.

“As these attacks get more sophisticated, employee education and awareness are not enough. The answer lies in technology,” he added.

Stuart Laidlaw, CEO of UK cyber security startup Cyberlytic, also advocates using machine learning to reduce a security analyst's workload. "It's about cutting through the noise: these guys are swamped in their day jobs and they can't respond to everything. We use machine learning to do the triage."
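
As a toy illustration of that triage, the sketch below trains a simple classifier on alerts analysts have already labelled, then ranks new alerts by how likely they are to need escalation; the features, labels and model choice are fabricated for the example.

```python
# A toy triage model: learn from alerts analysts already labelled, then
# rank incoming alerts by escalation probability. Data is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [failed_logins, off_hours (0/1), privileged_account (0/1)]
X = np.array([[1, 0, 0], [2, 0, 0], [8, 1, 0],
              [3, 1, 1], [12, 1, 1], [1, 0, 1]])
y = np.array([0, 0, 1, 0, 1, 0])  # 1 = analyst escalated the alert

clf = LogisticRegression().fit(X, y)

incoming = [[10, 1, 1], [1, 0, 0]]
scores = clf.predict_proba(np.array(incoming))[:, 1]
for score, alert in sorted(zip(scores, incoming), reverse=True):
    print(f"priority={score:.2f} alert={alert}")
```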

Where machine learning shows the greatest potential is in interpreting the output of many different expert systems and pulling it all together, says Gene Stevens, co-founder of cloud security firm ProtectWise. "Humans spend a lot of time trying to rationalise it. Machine learning is good at taking these patterns and organising the data so a human can get a highly consolidated view into the traffic moving across the network."

Machine learning can also be useful for user behaviour analysis. For example, Jamal Elmellas, CTO at Auriga Consulting, says: "If someone logs in every day at 08:55 and that changes to 01:00, the system will flag this as suspicious behaviour."
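
A minimal sketch of that login-time check might look like the following, assuming a per-user baseline of typical login hours and a simple deviation threshold; a real system would treat hours as circular and draw on far richer behavioural signals.

```python
# Sketch of the login-time example: build a per-user baseline of typical
# login hours and flag logins far outside it. The threshold is an
# illustrative assumption; real systems would treat hours as circular
# (01:00 is close to 23:00) and use far richer behavioural features.
from statistics import mean, stdev

login_hours = [8.9, 9.0, 8.8, 9.1, 8.95, 9.05]  # roughly 08:55 each day
baseline, spread = mean(login_hours), stdev(login_hours)

def is_suspicious(hour: float, tolerance: float = 3.0) -> bool:
    return abs(hour - baseline) > tolerance * max(spread, 0.25)

print(is_suspicious(8.92))  # False: matches the learned pattern
print(is_suspicious(1.0))   # True: a 01:00 login breaks the baseline
```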

How to deploy machine learning in cyber security

As the technology continues to develop, so too does the number of viable use cases.

One such case is anomaly detection, which is being transformed by automation. This is largely due to the relative ease of applying the technology to the task, as you can get a system started with fairly minimal training.

"You provide it with a stream of data and flag up things that look unusual," says Steven Murdoch, a security architect at the VASCO Innovation Centre in Cambridge. "This can then be used for intrusion protection."

Machine learning can also come at a low cost: as with cloud services, products can often be used on a free trial basis. In addition, says Laidlaw, cloud providers such as Amazon Web Services (AWS) offer AI components as part of their platforms. "Some solutions just plug in and you can throw a couple of data scientists at it to discover anomalies."

Palmer advises: "Get a feel for how it fits into your business. AI as a field is very inclusive; books and training courses are available online."

Of course, as with any new technology, there are some pitfalls you will need to navigate. Not every expert is convinced that machine learning has a bright future in cyber security, as cyber criminals can also use AI to attack companies. This includes hackers potentially tricking a defensive system and turning it against its owners.

Machine learning also has its limitations. Charl van der Walt, chief security strategy officer at SecureData, says many cyber-attacks won't fit the patterns machine learning is trained to recognise. "The adversary is agile and is changing all the time. So, it's hard to find data sets where there is an adversarial pattern."

Using data to make accurate predictions is the number one challenge, says Dr Yifeng Zeng, head of the machine intelligence research group at Teesside University. In addition, he says: "Using machine learning, companies claim they can deal with previous attacks, but how will they deal with new ones? The important thing about cyber security is predicting a future attack. So, how do we use the previous data to identify unexpected patterns?"

The future of machine learning in cyber security

Despite the challenges, cyber security experts believe machine learning is here to stay. As the technology improves, it's possible programmes will emerge that understand when they are under attack and can take measures to protect themselves.

Meanwhile, according to Palmer: "The ways human beings respond to different types of attacks and how they investigate them is something machines can study. They could, for example, make suggestions such as 'people in your situation took these steps next', acting as a coach or sounding board in a contextually useful way."
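
A toy version of that coaching idea could simply count which step analysts most often took next in past incidents and suggest it for the current one; the incident histories below are fabricated for the sketch.

```python
# A toy "coach": count which step analysts most often took next in past
# incidents and suggest it for the current state. Histories are fabricated.
from collections import Counter, defaultdict

histories = [
    ["isolate_host", "collect_memory", "reset_credentials"],
    ["isolate_host", "collect_memory", "scan_network"],
    ["isolate_host", "reset_credentials"],
]

next_steps = defaultdict(Counter)
for steps in histories:
    for current, following in zip(steps, steps[1:]):
        next_steps[current][following] += 1

def suggest(step: str) -> str:
    options = next_steps.get(step)
    return options.most_common(1)[0][0] if options else "no precedent"

print(suggest("isolate_host"))  # "collect_memory": what peers did next
```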

In addition, it has been suggested that machine learning systems will soon be deployed to deceive the adversary, rather than just to predict what's bad.

"This entails artificially reshaping your environment to make it a moving target and encouraging adversaries to be chasing lots of red herrings," according to Van der Walt.

This could include creating fake targets for the adversary such as files and systems that look real but aren't. "That's a different way of thinking about machine learning: deception as a defensive strategy."
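
A minimal honeyfile gives a flavour of that deception strategy: plant a decoy that no legitimate process should touch and treat any access as a tripwire. The filename is hypothetical, and polling st_atime is a deliberate simplification, since some filesystems mount with noatime and real deployments hook file-access events instead.

```python
# A minimal honeyfile tripwire: plant a decoy no legitimate process should
# touch and treat any read as a signal. Polling st_atime is a deliberate
# simplification: some filesystems mount with noatime, and real deployments
# hook file-access events instead.
import os
import time

DECOY = "passwords_backup.txt"  # hypothetical, deliberately tempting name

with open(DECOY, "w") as f:
    f.write("admin:hunter2\n")  # fake credentials; the file is pure bait

baseline = os.stat(DECOY).st_atime

for _ in range(12):  # check every five seconds for a minute
    time.sleep(5)
    if os.stat(DECOY).st_atime != baseline:
        print(f"Deception tripwire: {DECOY} was accessed")
        break
```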

Back in the present day, how can AI and machine learning form part of a company's cyber security strategy? The technology has a lot of potential, but it can't be a company's only method of security; it's one part of an overall defence. For now, Laidlaw advises: "Know where your crown jewels are, and protect what is most valuable, using AI as part of that."

This article was originally written by Kate O'Flaherty and has been updated several times since initial publication.

Kate O'Flaherty

Kate O'Flaherty is a freelance journalist with well over a decade's experience covering cyber security and privacy for publications including Wired, Forbes, the Guardian, the Observer, Infosecurity Magazine and the Times. Within cyber security and privacy, her specialist areas include critical national infrastructure security, cyber warfare, application security and regulation in the UK and the US amid increasing data collection by big tech firms such as Facebook and Google. You can follow Kate on Twitter.