Amazon Alexa is Hacked, Again; The Security of Users’ Personal Info Is Questioned

Everybody seems to be smitten by new technological advances, namely voice-enabled digital devices and virtual assistants. But a recent report demonstrates how "smart" technology can be shockingly blind to security threats. As per reports, researchers discovered a flaw in Amazon's Alexa virtual assistant that enabled them to eavesdrop on consumers with smart devices and automatically transcribe every word said into a text transcript.

Researchers at Checkmarx were able to manipulate code in a built-in Alexa JavaScript library (shouldEndSession) to pull off the hack. That code governs Alexa's decision to stop listening when it doesn't hear the user's command properly. Checkmarx's tweak simply enabled Alexa to continue listening, regardless of the voice request.
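For context, an Alexa skill tells the service whether to keep the session (and microphone) open via the shouldEndSession flag in its JSON response. The minimal sketch below is illustrative, not Checkmarx's actual exploit code: the field names follow the Alexa Skills Kit response schema, while the helper function is an assumption for demonstration. It shows how forcing the flag to False keeps Alexa listening after a reply.

```python
# Illustrative sketch of an Alexa Skills Kit response payload.
# The "shouldEndSession" flag controls whether Alexa closes the session
# or keeps the microphone open for another utterance.

def build_response(speech_text, should_end_session=True):
    """Build an ASK-style response dict (hypothetical helper; field names
    mirror the Alexa Skills Kit response schema)."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            # Forcing this to False is, in effect, what the Checkmarx
            # tweak achieved: the session stays open and Alexa keeps
            # listening after the spoken reply.
            "shouldEndSession": should_end_session,
        },
    }

benign = build_response("Goodbye.")
malicious = build_response("Goodbye.", should_end_session=False)
```

A benign skill would return the first response and end the interaction; a malicious one returns the second, so the device continues capturing audio even though the user believes the conversation is over.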

The incident raises questions about privacy and security around voice-enabled digital services such as Alexa. Reports suggest that researchers from Zhejiang University developed proof-of-concept attacks illustrating how an attacker could exploit inaudible voice commands, posing potential threats to users' data security. As per reports, a sample of a user's voice can be collected in various ways, including:

making a spam call

recording a person's voice from physical proximity to the speaker

mining for audiovisual clips online

compromising cloud servers that store audio information

Voice Assistants: The Threat Continues

The vulnerability isn't confined to Amazon Alexa. Researchers at the University of Alabama at Birmingham revealed that voice recognition technology is susceptible to attack with voice samples cloned from audio found in online videos (e.g., on YouTube) and even videos held in private cloud accounts.

To cite an example, it's not hard to pick up a locked iPhone 4S, press the "home" button to launch Siri, and gain access to the phone through voice-activated commands; even an unsophisticated attacker with ill intent can do that today. The massive market for harvested emails adds to the woes: voice-activated commands can be maliciously used to intercept and monitor calls, and further to illicitly access mail and other private data, which spammers around the globe put up for sale. Given that voice recognition is easier to hack than other biometric authentication, it can covertly nullify a security system.

More of a Glitch, Maybe

Accidental triggering of wake-up commands in such devices is fairly common, but once awake, the assistant records everything it hears and sends an encrypted version to back-end servers.

A Threat to the Corporate World

A few other reports claim that Siri can be hacked by potential attackers from a distance of 16 feet. Headphone cords can be used as an antenna: attackers exploit the wire to convert surreptitiously transmitted electromagnetic waves into electrical signals that the phone's operating system interprets as audio coming from the user's microphone. A paper published by IEEE states that the possibility of inducing parasitic signals on the audio front-end of voice-command-capable devices could have critical security impacts. Researchers at ANSSI reported that voice assistants like Apple Siri or Google Assistant could be sent commands via malware to download apps, send phishing emails, or browse malicious websites.

Spoken interfaces have been described as the next paradigm shift of the IT industry. But consider the privacy risk in corporate environments: your phone may be activated in error, and what you thought was a private conversation is sent to Google in the cloud. That is a doomsday scenario of sorts.

However, as always with software, these vulnerabilities can be tackled through proper configuration and by deciding how much information should be linked to a device.