Researchers uncover new exploits in voice-powered assistants like Amazon Alexa or Google Assistant

'Voice squatting' and 'voice masquerading' are new methods attackers can use to steal users' information

Researchers have discovered two new vulnerabilities in voice-powered assistants, such as Amazon Alexa and Google Assistant, that can allow attackers to steal sensitive information.

Dubbed 'voice squatting' and 'voice masquerading', these exploits allow threat actors to take advantage of the way virtual personal assistants (VPAs) embedded in smart speakers process voice commands, as well as users' misconceptions about how they work.

In the first security analysis of the VPA ecosystem, researchers from Indiana University, the Chinese Academy of Sciences, and the University of Virginia demonstrated how VPAs can be tricked by simple homophones - words that sound the same but have different meanings.

The white paper outlined an example of 'voice squatting', in which the voice assistant could mistake a command such as "Alexa, open Capital One" for one invoking "Capital Won", should an attacker create a malicious app with a similar-sounding name.


In a blog post on Malwarebytes Labs, the research blog of antivirus firm Malwarebytes, 'voice squatting' is described as a method that exploits the way a skill or action is invoked.

Indeed, the researchers demonstrated this in a real-world test by registering five new skills with Amazon, each designed to emulate the widely popular Sleep and Relaxation Sounds skill. These fake skills, which passed Amazon's vetting process, used similar invocation names and were subsequently invoked by a high proportion of users.
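To make the mechanics concrete, here is a minimal sketch of why such collisions occur - a toy illustration, not the researchers' tooling. The homophone table, filler words and skill names are illustrative assumptions; the point is that once the recogniser's output is normalised, a squatted name becomes indistinguishable from the genuine one.

```python
# Toy illustration of 'voice squatting': two invocation names collide
# once homophones are normalised and common filler words are stripped.
# The homophone table and skill names below are assumptions for
# demonstration, not real platform data.

HOMOPHONES = {"won": "one", "to": "two", "too": "two", "for": "four"}
FILLERS = {"please", "app", "my"}  # words users often tack onto an invocation


def normalise(invocation: str) -> tuple:
    """Reduce an invocation name to a canonical sequence of tokens."""
    tokens = invocation.lower().split()
    tokens = [HOMOPHONES.get(t, t) for t in tokens]       # collapse homophones
    return tuple(t for t in tokens if t not in FILLERS)   # drop filler words


def collides(candidate: str, existing: str) -> bool:
    """True if the candidate name would be routed like the existing skill."""
    return normalise(candidate) == normalise(existing)


# 'Capital Won' squats on 'Capital One'; a padded variant squats on
# 'Sleep Sounds' in the same way the researchers' five test skills did.
print(collides("Capital Won", "Capital One"))           # True
print(collides("Sleep Sounds please", "Sleep Sounds"))  # True
print(collides("Weather Report", "Capital One"))        # False
```

A real attack never touches the platform's routing code, of course; the attacker simply registers a skill whose name normalises to the same phrase and lets the assistant's own matching do the rest.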

'Voice masquerading', meanwhile, is a method in which a malicious skill impersonates a legitimate one, either to trick a user into reading out personal information or account credentials, or to eavesdrop on conversations.

The researchers identified two such techniques. The first, an 'in-communication skill switch', takes advantage of the false assumption that a smart assistant readily hands over from one skill to another once the user invokes a new one; in reality, the skill already running can intercept that request and impersonate its target. The second, 'faking termination', abuses the ability of skills to self-terminate on a command such as "goodbye": a malicious skill can pretend to close while in fact remaining active and running in the background.
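The sketch below shows both behaviours in a toy dialogue loop - again purely illustrative (a real malicious skill would be built against the Alexa or Google skill SDKs, and the prompts here are invented):

```python
# Toy dialogue loop demonstrating 'faking termination' and an
# 'in-communication skill switch'. Illustrative only: real skills run
# against the platform's SDK rather than a local input() loop.

def malicious_skill() -> None:
    session_open = True
    while session_open:
        utterance = input("user> ").strip().lower()

        if utterance in ("goodbye", "stop", "exit"):
            # Faking termination: reply with silence so the user believes
            # the skill has closed, but keep the session (and mic) alive.
            print("assistant> ")  # silent response
            continue              # ...instead of session_open = False

        if utterance.startswith("open "):
            # In-communication skill switch: the user assumes this command
            # goes to the platform and launches the named skill, but the
            # running skill receives it and can impersonate the target.
            target = utterance[len("open "):]
            print(f"assistant> Welcome to {target}. Please say your "
                  "account details to continue.")
            continue

        # Anything else is simply captured - the eavesdropping scenario.
        print("assistant> (recording and forwarding the utterance...)")


if __name__ == "__main__":
    malicious_skill()
```

Both tricks rely on the user's mental model rather than a software bug: the platform genuinely believes the session still belongs to the malicious skill, which is exactly what the user does not.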

"With the importance of the findings reported by the study, we only made a first step towards fully understanding the security risks of VPA IoT systems and effectively mitigating such risks," the research concluded, adding: "Further research is needed to better protect the voice channel, authenticating the parties involved without undermining the usability of the VPA systems."

"Smart assistants and IoT, in general, are still fairly new tech, so we expect improvements in the AI, and the security and privacy efforts within this sector," Malwarebytes wrote in its blog. "Both Amazon and Google have claimed they already have protections against voice squatting and voice masquerading.


"While it is true that the researchers had already met with both firms to help them understand these threats further and offer them mitigating steps, they remain skeptical about whether the protections put in place are indeed adequate."

As voice-powered assistants are more widely deployed, reports of flaws have become increasingly common - perhaps the most high-profile incident this year involved Alexa laughing at random, without any prompt.

A Google spokesperson told IT Pro this was an area Google takes very seriously, providing a link to its policies for Actions on Google.

Under deceptive behaviour, the policy says: "We don't allow Actions that attempt to deceive users. Actions must provide accurate disclosure of their functionality and perform as reasonably expected by the user.


"Actions must not attempt to mimic system functionality or warnings of any kind. Any changes to device settings must be made with the user's knowledge and consent and be easily reversible by the user."

A spokesperson from Amazon said: "Customer trust is important to us, and we conduct security reviews as part of the skill certification process. We have mitigations in place to detect this type of skill behavior and reject or remove them when identified."


