Why deepfakes could threaten everything from biometrics to democracy

This article originally appeared in Issue 9 of IT Pro 20/20.

Deepfakes, a form of synthetic media, are spreading. Today, deepfake technology is most commonly used to create convincing fake images or videos, but it can also be used to fabricate biometric identifiers such as voices and fingerprints.

Most of us have likely watched a film, TV show or advert that uses this technology, and we’ve probably all come across a ‘deepfaked’ photo or video – either knowingly or unknowingly – on social media. Some of us may even have played around with creating our own deepfakes using apps that let you superimpose your face onto that of your favourite actors.

“Until recently you needed the sophisticated technology of a Hollywood studio to create convincing deepfakes. Not anymore. The technology has become so advanced and readily available that one guy in his bedroom can create a very realistic deepfake,” says Andrew Bud, CEO and founder of biometric authentication firm iProov. “A lot of people are using it for entertainment content, plus there are legitimate firms whose entire business is creating synthetic video and audio content for advertising or marketing purposes.”

The dark side of deepfakes

But deepfake technology also has a dark side. For some time now it’s been used to create photos or videos to spread misinformation and influence public opinion or political discourse, often by attempting to discredit individuals or groups.

“Recent history has shown a proliferation of attacks to manipulate democratic elections and destabilise entire regions,” says Marc Rogers, VP of cybersecurity at technology firm Okta and co-founder of international cyberthreat intelligence group The CTI League. “The implication being a deepfake from a trusted authority could artificially enhance or destroy public confidence in a candidate, leader or perception of a public issue – such as Brexit, global warming, COVID-19 or Black Lives Matter – to influence an outcome beneficial to a malicious state or actor.”

IDC senior research analyst Jack Vernon notes: “With the US presidential election drawing closer, this will be an obvious arena in which we may see them deployed.”

Deepfake pornography is another rapidly growing phenomenon, often used for blackmail, while a further risk comes from criminals using faked biometric identifiers to carry out fraud.

“One notable example took place last year, when attackers used deepfake technology to imitate the voice of a UK CEO in order to carry out financial fraud,” Rogers highlights.

It’s unsurprising, then, that last month the Dawes Centre for Future Crime at UCL published a report citing deepfakes as the most serious artificial intelligence (AI) crime threat. Of the AI-enabled crimes the report ranked in order of concern, deepfakes were judged the most worrying in terms of their potential applications for crime or terrorism.

Who’s most at risk from deepfake crime?

Bud believes the areas most at risk from deepfake crime include the banking industry, governments, healthcare and media.

“Banking’s definitely at risk – that’s where the opportunity for money laundering is greatest. The government is also at risk: benefits, pensions, visas and permits can all be defrauded. Access to someone’s medical records could be used against them, and social media is at risk of weaponisation. It’s already being used for intimidation, fake news, conspiracy theories, destabilisation and destruction of trust.”

Experts say we can expect things to get worse before they get better, as the quality of deepfakes is only likely to improve. This will make it harder to distinguish which media is real, and the technology may get better at fooling our security systems.

Fighting back

The good news is that the technology industry is fighting back, and we’re seeing deepfake detection technology emerge from a number of research fields, says Nick McQuire, senior vice president of enterprise research at analyst firm CCS Insight.

“This is an area we’ve long predicted would emerge, because firms like Microsoft, Google and Facebook are looking at ways to use neural networks and generative adversarial networks (GANs) to analyse deepfakes and detect statistical signatures in their models.”
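
To make that idea concrete, here is a minimal, illustrative sketch of the kind of frame-level “real vs fake” classifier this research builds on. It is not any vendor’s actual system: the ResNet backbone, the untrained single-logit head, the decision threshold and the file path are all assumptions chosen for illustration, and a real detector would be fine-tuned on large labelled datasets of genuine and synthetic faces.

```python
# Illustrative sketch only: a frame-level deepfake classifier.
# Weights, threshold and file path are hypothetical; production
# detectors are trained on labelled real/synthetic media at scale.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a generic ImageNet backbone and replace the head with a
# single logit meaning "probability this frame is synthetic".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()  # in practice, fine-tuned detector weights would be loaded here

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(frame_path: str) -> float:
    """Score a single video frame; values above 0.5 suggest a synthetic face."""
    frame = Image.open(frame_path).convert("RGB")
    batch = preprocess(frame).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(batch)
    return torch.sigmoid(logit).item()

if __name__ == "__main__":
    print(f"P(fake) = {fake_probability('suspect_frame.jpg'):.2f}")
```

Research systems add face detection, temporal cues across frames and model ensembling on top of this; the point is simply that detection is itself a machine learning problem, trained to spot the statistical fingerprints that generators leave behind.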

There are many initiatives to identify deepfakes, “for example the FaceForensics++ and Deepfake Detection Challenge (DFDC) datasets,” says Hoi Lam, a member of the Institution of Engineering and Technology’s (IET) Digital Panel.

Then there’s facial recognition cross-referencing, which is increasingly being used by video hosting services. “Various techniques are also being explored that implement digital watermarking,” explains Matt Lewis, research director at NCC Group. “This can help prove the origin and integrity of content creation.”
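
As a simplified illustration of the integrity half of that idea, the sketch below signs a media file’s hash so that anyone holding the publisher’s public key can verify the content is unaltered. True digital watermarking embeds the mark in the media itself so it survives re-encoding; this example, with hypothetical file names, only shows the cryptographic origin-and-integrity check in its simplest form.

```python
# Illustrative sketch only: proving origin and integrity of a media file
# by signing its hash with a publisher's key. File names are hypothetical;
# real provenance and watermarking schemes are considerably more involved.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def file_digest(path: str) -> bytes:
    """SHA-256 over the raw file bytes; any edit changes the digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# The publisher signs the digest at creation time...
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(file_digest("original_video.mp4"))

# ...and a recipient can later check that the file they received is
# byte-for-byte what the publisher released.
try:
    public_key.verify(signature, file_digest("received_video.mp4"))
    print("Integrity verified: content matches the publisher's original")
except InvalidSignature:
    print("Content altered, or not from this publisher")
```

The same basic principle sits behind the content-authenticity efforts from big tech firms described below, though production schemes must also handle key distribution and legitimate edits.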

A number of the big tech firms have begun to promote tools in this area. Microsoft, for example, recently unveiled a new tool to help spot deepfakes, and in August Adobe announced it would start tagging Photoshopped images as having been edited, in an attempt to fight back against misinformation.

GCHQ also recently acknowledged deepfakes as a cybersecurity priority, launching a research fellowship set to delve into fake news, misinformation and AI. “New technologies present fresh challenges and this fellowship provides us with a great opportunity to work with the many experts in these fields,” a spokesperson said.

Businesses are also starting to understand the risk from deepfakes and are implementing new technologies designed to detect fraudulent biometric identifiers. Banks in particular are ahead of the game, with HSBC, Chase, CaixaBank and Mastercard just some of those that have signed up to a new biometric identification system.

We’re in an arms race

As malicious actors innovate to stay a step ahead of security teams, technologists are being drawn into an arms race, and the work to identify deepfakes is ongoing.

“As security teams innovate new technology to identify deepfakes, techniques to circumvent this will proliferate and, unfortunately, serve to make deepfake creation more realistic and harder to detect,” notes Rogers. “There’s a feedback loop with all emerging technologies like these. The more they generate success, the more that success is fed back into the technology, rapidly improving it and increasing its availability.”

While the technologists fight the good fight, the other important tool in the war against devious deepfakes is education.

The more aware the public is of the technology, the more they’ll be able to think critically about their media consumption and apply caution where needed, says Nick Nigram, a principal at Samsung NEXT Europe. “After all, manipulation of media using technology is nothing new,” he concludes.

Keri Allan

Keri Allan is a freelancer with 20 years of experience writing about technology and has written for publications including the Guardian, the Sunday Times, CIO, E&T and Arabian Computer News. She specialises in areas including the cloud, IoT, AI, machine learning and digital transformation.