Novel social engineering attacks soar 135% amid uptake of generative AI


Researchers from Darktrace have seen a 135% increase in novel social engineering attack emails in the first two months of 2023.

The cyber security firm said the email attacks targeted thousands of its customers in January and February 2023, an increase it said mirrors the adoption rate of ChatGPT.

The novel social engineering attacks make use of “sophisticated linguistic techniques”, which Darktrace said include increasing text volume, sentence length, and punctuation in emails.

Darktrace also found a decrease in the number of malicious emails sent with an attachment or link.

The firm said that this behaviour could mean that generative AI, including ChatGPT, is being used by malicious actors to construct targeted attacks rapidly.

“Email is the key vulnerability for businesses today. Defenders are up against sophisticated generative AI attacks and entirely novel scams that use techniques and reference topics that we have never seen before,” said Max Heinemeyer, chief product officer at Darktrace.

“In a world of increasing AI-powered attacks, we can no longer put the onus on humans to determine the veracity of communications they receive. This is now a job for artificial intelligence.”

Darktrace's survey results indicated that 82% of employees are worried about hackers using generative AI to create scam emails that are indistinguishable from genuine communication. It also found that 30% of employees have fallen for a scam email or text in the past.

Darktrace asked survey respondents which three characteristics most strongly suggest an email is a phishing attempt, and found:

  • 68% pointed to being invited to click a link or open an attachment
  • 61% pointed to an unknown sender or unexpected content
  • 61% pointed to poor spelling and grammar

In the last six months, 70% of employees reported an increase in the frequency of scam emails. Additionally, 79% said that their organisation’s spam filters prevent legitimate emails from entering their inbox.

87% of employees said they were worried about the amount of their personal information available online that could be used in phishing or email scams.

Defending against AI social engineering attacks

Email services have always been one of the primary vectors through which attackers can breach an organisation.


One of the most common ways to install malware on a victim's machine has been to embed malicious code inside a Microsoft Office document, such as an Excel file.

Microsoft has recently implemented a number of measures to help minimise the abuse of its software in phishing attacks. Most notably, in 2022 it blocked VBA macros by default in files downloaded from the internet; the component had been widely abused to automatically load malware via tampered Office documents.

The decision was greeted warmly, but the company didn't escape criticism. Some said the industry had been calling for such action to be taken against VBA macros for years, and that Microsoft could have prevented an untold number of attacks if it had acted faster.

More recently, it took the decision to block emails sent from potentially vulnerable Exchange servers.

Microsoft Exchange servers have been abused by hackers for years to launch highly convincing email campaigns, such as those involving email hijacking, where genuine email addresses are used to continue previous threads and heighten the sense of legitimacy.

The threat of AI to cyber security has been feared for some time and extends beyond just generative AI.

AI-driven malware, for example, was conceptualised years ago: malware that could install itself, analyse its specific environment, and change its payload to exploit its host most effectively. In reality, such attacks have been few and far between.

There are also fears around what deepfake technology could achieve in the phishing space. One possible attack could see a CEO's likeness abused to send video and/or audio instructions to employees in the finance department, for example, encouraging them to make payments to accounts under the attackers' control.

Intel's latest work on its FakeCatcher system aims to detect deepfakes by analysing the blood flow in faces.

Intel told IT Pro that the system currently has a 96% success rate in identifying deepfake footage, and that the technology could be embedded within video conferencing software in the near future to prevent deepfake phishing and social engineering attacks.

Zach Marzouk

Zach Marzouk is a former ITPro, CloudPro, and ChannelPro staff writer, covering topics like security, privacy, worker rights, and startups, primarily in the Asia Pacific and US regions. Zach joined ITPro in 2017, where he was introduced to the world of B2B technology as a junior staff writer, before returning to Argentina in 2018 to work in communications and as a copywriter. In 2021, he made his way back to ITPro as a staff writer during the pandemic, before going freelance in 2022.