A silent ultrasonic attack can commandeer your smartphone or smart speaker

American researchers have developed an attack called NUIT (Near-Ultrasound Inaudible Trojan) that can silently issue commands to devices with voice assistants, such as smartphones and smart speakers.

The team demonstrated attacks against modern voice assistants found on millions of devices, including Apple’s Siri, Google Assistant, Microsoft’s Cortana and Amazon’s Alexa, showing that malicious commands can be sent to these devices.

The fundamental principle that makes the attack both effective and dangerous is that the microphones in smart devices respond to near-ultrasound frequencies that the human ear cannot hear, so the attack can be carried out with minimal risk of being noticed.
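To make the frequency-range idea concrete, here is a minimal, purely illustrative Python sketch (not taken from the researchers’ paper, and with hypothetical file names and parameters) that writes a 19 kHz near-ultrasound tone into an ordinary 44.1 kHz WAV file. Most adults cannot hear a tone this high, yet a standard phone or smart-speaker microphone will still capture it; this only demonstrates the hearing-range gap, not the researchers’ actual command encoding.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical parameters: 19 kHz sits near the upper limit of adult hearing
# but is well within what a standard 44.1 kHz audio chain can play and record.
SAMPLE_RATE = 44_100   # standard consumer audio sample rate (Hz)
CARRIER_HZ = 19_000    # near-ultrasound frequency (Hz)
DURATION_S = 0.77      # matches the command length reported by the researchers

# Generate the near-ultrasound sine wave.
t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
tone = 0.8 * np.sin(2 * np.pi * CARRIER_HZ * t)

# Write it as 16-bit PCM so any media player or website could play it back.
wavfile.write("near_ultrasound_demo.wav", SAMPLE_RATE, (tone * 32767).astype(np.int16))
print(f"Wrote near_ultrasound_demo.wav: {DURATION_S} s at {CARRIER_HZ} Hz")
```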

This attack can be embedded in websites that play media or in videos uploaded to YouTube. Luring targets into visiting these sites, or into playing malicious content on trusted sites, is a relatively simple exercise in social engineering.

The researchers claim that the attacks can be carried out using two different methods:

  • The first occurs when a device is both the source and the target of the attack. For example, an attack could be launched on a smartphone by playing an audio file that causes the device to perform an action, such as opening a garage door or sending a text message.
  • The second occurs when the attack is launched from a device with a speaker against another device with a microphone, such as from a website to a smart speaker.


“If you play YouTube on your smart TV, that smart TV has a speaker, right? The sound of malicious NUIT commands will become inaudible, and it can also attack your mobile phone and communicate with your Google Assistant or Alexa devices,” explained one of the authors.


“This can even happen during Zoom meetings. If someone unmutes their microphone, an attacker can embed the attack signal to hack the phone sitting next to their computer during the meeting.”

Chen explained that the speaker from which the attack is launched must be set above a certain volume level for the attack to work, and that the malicious commands last only 0.77 seconds.

The researchers tested 17 popular devices with voice assistants and found that all of them could be attacked using any voice, even a robot-generated one, except for Apple’s Siri, which requires emulating or stealing the target’s voice before it will accept commands.

Chen also advised users to closely monitor their devices for microphone activations, which have dedicated on-screen indicators on iOS and Android smartphones.

Finally, using headphones instead of speakers for audio output effectively protects against such attacks, since the sound is not broadcast loudly enough for a nearby device’s microphone to pick up the hidden commands.
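As a complementary, purely illustrative defence-side sketch (not a mitigation described by the researchers), the snippet below uses an FFT to flag audio clips whose energy is concentrated above roughly 16 kHz. This is one rough heuristic by which software could screen incoming audio for near-ultrasound content; the file name and cutoff are assumptions for the example.

```python
import numpy as np
from scipy.io import wavfile

def near_ultrasound_ratio(path: str, cutoff_hz: float = 16_000.0) -> float:
    """Return the fraction of a clip's spectral energy above cutoff_hz."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:                 # mix multi-channel audio down to mono
        data = data.mean(axis=1)
    data = data.astype(np.float64)
    spectrum = np.abs(np.fft.rfft(data)) ** 2             # power per frequency bin
    freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)      # frequency of each bin
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return float(spectrum[freqs >= cutoff_hz].sum() / total)

# Example: the demo tone generated above would be flagged as almost entirely
# near-ultrasound, while normal speech or music would score close to zero.
ratio = near_ultrasound_ratio("near_ultrasound_demo.wav")
print(f"{ratio:.1%} of the energy is above 16 kHz", "-> suspicious" if ratio > 0.5 else "")
```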
