In-depth

How to use machine learning and AI in cyber security

New technologies can augment your security team's response and may even be able to actively deceive attackers

Cyber criminals are constantly seeking new ways to perpetrate a breach, but thanks to artificial intelligence (AI) and its subset, machine learning, it's becoming possible to fight off these attacks automatically.

The secret is in machine learning's ability to monitor network traffic and learn what's normal within a system, using this information to flag up any suspicious activity. As the technology's name suggests, it's able to use the vast amounts of security data collected by businesses every day to become more effective over time.

At the moment, when the machine spots an anomaly, it sends an alert to a human, usually a security analyst, to decide whether action needs to be taken. But some machine learning systems are already able to respond themselves, by restricting access for certain users, for example.
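To make the idea concrete, here is a minimal sketch in Python of what such anomaly detection can look like, assuming network connections have already been reduced to a handful of numeric features. The feature names, thresholds and alerting step are illustrative assumptions, not a description of how any particular vendor's product works.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical "normal" traffic: one row per connection, with columns such as
# [bytes_sent, bytes_received, duration_seconds, distinct_destination_ports]
rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=[5_000, 20_000, 30, 3],
                      scale=[1_000, 5_000, 10, 1],
                      size=(1_000, 4))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)  # learn what "normal" looks like from historical data

def review(connection):
    """Score a new connection; -1 means the model considers it anomalous."""
    if model.predict([connection])[0] == -1:
        # Most deployments raise an alert for a human analyst at this point;
        # some systems go further and restrict the offending user or host.
        print("ALERT: unusual connection, escalating to an analyst", connection)

review([500_000, 1_000, 2, 60])  # large upload to many ports: likely flagged
review([5_200, 19_000, 28, 3])   # close to the baseline: likely ignored

The more historical data such a model sees, the better its picture of "normal" becomes, which is what lets these systems improve over time.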

The human element

While talk of AI and automation often brings with it fears of mass redundancy, in the sphere of security, machine learning is being used in several different areas to complement, rather than replace, traditional measures such as firewalls.

Despite their increasing ability to perform without human intervention, the systems aren't meant to replace security analysts. On the contrary, they're intended to crunch vast amounts of data to free up analysts for more complex tasks.

However, according to Moonpig’s head of cyber security, Tash Norris, AI data analysis can also provide other benefits:

Speaking as part of the IT Pro Panel earlier this year, Norris said that “analysts will naturally look for correlations they've seen before, or that they expect to see”.

“A true implementation of AI should be able to draw 'unbiased' correlations, bring more value from the datasets you have.”

The panellists agreed that the most sensible place to deploy AI and machine learning is in the broad category of detection and response, in functions such as security information and event management (SIEM), security orchestration, automation and response (SOAR), and endpoint detection and response (EDR). By automating these more manual processes, staff can be freed up to tackle more dangerous threats, using AI as a force multiplier that extends the capabilities of a security team.

Dave Palmer, director of technology at Darktrace, says: "Having machine learning allows companies to prioritise more effectively. We don't take human risk decision-making out, but we take on the tactical fire-fighting so security teams can do the work on their own timescales."

The Cambridge-based AI startup has recently collaborated with Microsoft to provide AI-enhanced cyber security to organisations transitioning to the cloud. The partnership focuses on addressing security challenges in the “critical areas” of email security and data integration, as well as simplifying and streamlining security workflows. This includes Microsoft’s Azure hosting Antigena Email, which uses Darktrace’s artificial intelligence technology to stop the most advanced email threats; the product is also listed on the Azure Marketplace.

Darktrace’s director of email security products, Dan Fein, warned that the company sees “attackers impersonate CEOs or compromise vendors’ accounts to send out targeted, topical emails that look legitimate” on a daily basis.

“As these attacks get more sophisticated, employee education and awareness are not enough. The answer lies in technology,” he added.

Stuart Laidlaw, CEO of UK cyber security startup Cyberlytic, also advocates using machine learning to reduce a security analyst's workload. "It's about cutting through the noise: these guys are swamped in their day jobs and they can't respond to everything. We use machine learning to do the triage."

Where machine learning shows the greatest potential is in interpreting the output of many different expert systems and pulling it all together, says Gene Stevens, co-founder of cloud security firm ProtectWise. "Humans spend a lot of time trying to rationalise it. Machine learning is good at taking these patterns and organising the data so a human can get a highly consolidated view into the traffic moving across the network."

Machine learning can also be useful for user behaviour analysis. For example, Jamal Elmellas, CTO at Auriga Consulting, says: "If someone logs in every day at 08:55 and that changes to 01:00, the system will flag this as suspicious behaviour."
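A toy version of that login-time check might look like the following, assuming only that a history of each user's login hours is kept; real user-behaviour analytics model far richer signals than this single feature, and the tolerance used here is an arbitrary illustrative choice.

from datetime import datetime
from statistics import mean, stdev

login_hours = {"alice": [8.9, 9.0, 8.8, 9.1, 8.95]}  # past login times, in hours

def is_suspicious(user, login_time, tolerance=3.0):
    """Flag a login whose hour deviates strongly from the user's usual pattern."""
    history = login_hours.get(user, [])
    if len(history) < 2:
        return False  # not enough data to judge yet
    hour = login_time.hour + login_time.minute / 60
    spread = stdev(history) or 0.25  # guard against a zero spread for very regular users
    return abs(hour - mean(history)) > tolerance * spread

print(is_suspicious("alice", datetime(2021, 10, 18, 8, 55)))  # False: usual time
print(is_suspicious("alice", datetime(2021, 10, 19, 1, 0)))   # True: a 01:00 login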

Introducing machine learning

As the range of use cases continues to grow, how can companies start to introduce the technology? It's relatively simple: when used for anomaly detection, it's not necessary to train the machine learning system to a great extent initially.

"You provide it with a stream of data and flag up things that look unusual," says Steven Murdoch, a security architect at the VASCO Innovation Centre in Cambridge. "This can then be used for intrusion protection."

Machine learning is also available at a low cost: as with cloud services, the products can often be used on a free trial basis. In addition, says Laidlaw, companies such as Amazon Web Services (AWS) offer an AI component. "Some solutions just plug in and you can throw a couple of data scientists at it to discover anomalies."

Palmer advises: "Get a feel for how it fits into your business. AI as a field is very inclusive; books and training courses are available online."

However, as with any new technology, there are potential pitfalls to take into account. Some experts are cynical about machine learning's potential, pointing out that cyber criminals can use the technology to attack companies. In addition, it could be possible to trick the machine learning systems used for security.

At the same time, the technology itself has limitations. Charl van der Walt, chief security strategy officer at SecureData, says many cyber-attacks won't fit the patterns machine learning is trained to recognise. "The adversary is agile and is changing all the time. So, it's hard to find data sets where there is an adversarial pattern."

Using data to make accurate predictions is the number one challenge, says Dr Yifeng Zeng, head of the machine intelligence research group at Teesside University. In addition, he says: "Using machine learning, companies claim they can deal with previous attacks, but how will they deal with new ones? The important thing about cyber security is predicting a future attack. So, how do we use the previous data to identify unexpected patterns?"

The future

Despite the challenges, cyber security experts are predicting a bright future for machine learning. As the technology improves, it's possible programs will emerge that understand when they are under attack and can take measures to protect themselves.

Meanwhile, according to Palmer: "The ways human beings respond to different types of attacks and how they investigate them is something machines can study. They could, for example, make suggestions such as, 'people in your situation took these steps next', acting as a coach or sounding board in a contextually useful way."

In addition, it has been suggested that machine learning systems will soon be deployed to deceive the adversary, rather than just used to predict what's bad. "This entails artificially reshaping your environment to make it a moving target and encouraging adversaries to be chasing lots of red herrings," according to Van der Walt. This could include creating fake targets for the adversary, such as files and systems that look real but aren't. "That's a different way of thinking about machine learning: deception as a defensive strategy."
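In its simplest form, that kind of deception can be as basic as the sketch below, which plants decoy files no legitimate user should ever open and treats any later access as a strong signal of intrusion. The file names and the reliance on filesystem access times are assumptions for illustration; real deception platforms hook file-access events far more robustly.

from pathlib import Path

DECOYS = [Path("finance/payroll_2021_FINAL.xlsx"), Path("it/admin_passwords.txt")]
planted_at = {}

def plant_decoys():
    for decoy in DECOYS:
        decoy.parent.mkdir(parents=True, exist_ok=True)
        decoy.write_text("decoy")  # looks valuable, contains nothing of worth
        planted_at[decoy] = decoy.stat().st_atime  # record the baseline access time

def check_decoys():
    """Alert if any decoy appears to have been read since it was planted."""
    for decoy in DECOYS:
        if decoy.stat().st_atime > planted_at[decoy]:
            print(f"ALERT: decoy {decoy} was opened; investigate the accessing host")

plant_decoys()
check_decoys()  # run periodically, for example from a scheduled job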

Back in the present day, how can AI and machine learning form part of a company's cyber security strategy? The technology has a lot of potential, but it can't be a company's only method of security; it's one part of an overall defence. For now, Laidlaw advises: "Know where your crown jewels are, and protect what is most valuable, using AI as part of that."

This article was originally written by Kate O'Flaherty and has been updated several times since initial publication.
