Malware could steal data from iPhones using Siri

A pair of computer scientists based in Europe have found a security vulnerability in the iPhone 5 series of smartphones that malicious software could exploit to compromise a user’s personal information.

And the gatekeeper that makes this possible is Siri, they report in the January issue of IEEE Computer. The security flaw relies on steganography—the practice of hiding a message within another message.

It’s related to cryptography (and the two are often used together), but whereas cryptography conceals a message’s contents, steganography hides the fact that a secret message is being sent at all. A classic example is embedding a message in a digital photo. But the computer scientists behind the iPhone exploit have also found ways to hide messages in the network protocols of Skype calls, Google searches, BitTorrent, and Wi-Fi.
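The digital-photo trick typically works by overwriting the least-significant bit of each pixel, a change too small for the eye to notice. Here is a minimal sketch of that idea in Python, operating on a plain list of pixel bytes rather than a real image file (the function names and carrier are illustrative, not from the researchers' work):

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bits of pixel bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "carrier too small for the message"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract(pixels, n_bytes):
    """Recover n_bytes of hidden data from the pixel bytes."""
    msg = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        msg.append(byte)
    return bytes(msg)

carrier = [128] * 64              # stand-in for 64 grayscale pixel values
stego = embed(carrier, b"hi")     # each pixel changes by at most 1
assert extract(stego, 2) == b"hi"
```

Because each carrier byte changes by at most one unit, the stego image is statistically close to the original, which is exactly what makes such channels hard to spot.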

“For steganography in computer systems,” explains one of the Siri steganography inventors, Warsaw University of Technology computer scientist Wojciech Mazurczyk, “it’s important to take and embed secret data into a carrier in such a way that it will look like it was not modified. It needs to look like normal network traffic.” Smartphones could be a prime target for a steganographic attack: they are loaded with personal and sensitive information and increasingly targeted by malware. What’s more, the steganographic threat is greater when applications offload data to a cloud server, as voice-based applications like Siri do, says Mazurczyk.

So he and his colleague Luca Caviglione, from the National Research Council of Italy in Genova, decided to see if they could create a steganographic attack on the iPhone or iPad. Mazurczyk says their goal—the latest in a string of other steganography projects he has worked on—was to “get the attention of the security community. Current security systems are not able to counter steganography very well.” The result is called iStegSiri.

“It’s especially difficult in communications networks,” he says, “because there are different types of traffic, different types of protocols. Each different service is related to one or a number of protocols, and each of them can be used for information hiding.”

On an iPhone, Siri digitally records what the user says into the microphone, forms it into packets, sends the packets to a remote cloud server operated by Apple, and receives a text response that’s relayed to the user. iStegSiri converts a secret message—perhaps data that some bit of malware on the phone has gleaned—into an audio sequence that mimics the alternation of voice and silence found in a typical spoken directive. When the message is sent to the cloud server, a third party could, unbeknownst to the user or to Apple, inspect the conversation and apply a decoding scheme to extract the data.
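The researchers withhold the actual encoding, but the general idea—mapping secret bits onto alternating voice and silence intervals—can be sketched. All timings, the short/long-burst scheme, and the function names below are hypothetical illustrations, not the authors' method:

```python
# Illustrative only: the real iStegSiri encoding is unpublished.
SHORT_MS, LONG_MS, GAP_MS = 120, 300, 80   # hypothetical burst/gap durations

def encode_bits(bits):
    """Map each secret bit to a voiced burst (short burst = 0, long = 1),
    separated by fixed silence gaps, mimicking natural speech cadence."""
    segments = []
    for bit in bits:
        segments.append(("voice", LONG_MS if bit else SHORT_MS))
        segments.append(("silence", GAP_MS))
    return segments

def decode_segments(segments):
    """Recover the bits by thresholding the voiced-burst durations."""
    threshold = (SHORT_MS + LONG_MS) / 2
    return [1 if dur > threshold else 0
            for kind, dur in segments if kind == "voice"]

assert decode_segments(encode_bits([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]
```

An eavesdropper watching the Siri traffic only needs the segment timings, not the audio content itself, to run the decoding side of such a scheme.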

The researchers acknowledge that there are plenty of limitations to this trick. For one, in its current iteration, iStegSiri requires deep access to Siri, so it only poses a risk to jailbroken iOS devices—those iPhones and iPads in which standard proprietary constraints have been removed, giving the user root access to the operating system. It also requires access to the network traffic that Siri sends to Apple’s cloud servers.

The researchers have withheld the specifics of iStegSiri to keep it out of criminal hands. Mazurczyk emphasizes that the aim is to bring attention to the vulnerabilities inherent in voice-based applications. (He and Caviglione have not approached Apple directly about iStegSiri.) The researchers suggest that the best way to counter this kind of security hole is a solution that acts on the server side, such as dropping any connection to the server whose audio patterns deviate suspiciously from typical speech behavior.
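A server-side check along those lines could look at the statistics of voiced-segment durations: natural speech is irregular, while an encoded stream tends to cluster around a few fixed lengths. The heuristic and thresholds below are a hypothetical sketch of such a detector, not anything Apple or the researchers have deployed:

```python
# Hypothetical server-side screen: real speech shows varied, irregular
# voiced-segment durations; a steganographic stream looks too regular.
from statistics import pstdev

def looks_suspicious(voice_durations_ms, min_spread_ms=40):
    """Flag a stream whose voiced-segment durations cluster too tightly,
    or around too few distinct values, to be natural speech."""
    if len(voice_durations_ms) < 5:
        return False                      # too little data to judge
    distinct = len(set(voice_durations_ms))
    spread = pstdev(voice_durations_ms)   # population std. dev. of durations
    return distinct <= 2 or spread < min_spread_ms

assert looks_suspicious([120, 300, 120, 120, 300, 300])      # two fixed lengths
assert not looks_suspicious([95, 210, 340, 150, 480, 260])   # natural variety
```

In practice a real detector would need a far richer model of speech timing to keep false positives low, but the principle—reject connections whose audio statistics are implausibly regular—is the countermeasure the researchers describe.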