In a blog post late Monday, network security firm FireEye claims to have discovered a new iOS "flaw" that allows a nefarious app to log touch events and button presses in the background, then send the data off to a remote server.

FireEye's background monitoring proof-of-concept. | Source: FireEye

First spotted by ArsTechnica, the post describes a proof-of-concept that FireEye researchers say can collect and transmit potentially sensitive information while running in the background.

From what can be gleaned from FireEye's blog, the supposed "flaw" takes advantage of iOS' built-in multitasking components, suggesting the attacking app must first be vetted and installed on an affected device to access legitimate APIs. Barring the side-loading of an app with private APIs, such as those certified for internal distribution through Apple's remote management solution, the app would have to successfully sneak by the App Store review process in order to work.

To this end, FireEye claims to have developed "approaches to bypass" Apple's app review process, but does not detail the workarounds.

Note that the demo exploits the latest 7.0.4 version of iOS system on a non-jailbroken iPhone 5s device successfully. We have verified that the same vulnerability also exists in iOS versions 7.0.5, 7.0.6 and 6.1.x. Based on the findings, potential attackers can either use phishing to mislead the victim to install a malicious/vulnerable app or exploit another remote vulnerability of some app, and then conduct background monitoring.

The monitoring app can reportedly record all input events in the background, including on-screen touches and physical button actuation like the home button, Touch ID and volume controls.

Further, FireEye notes that disabling iOS 7's background app refresh feature will not block said monitoring app from collecting and disseminating data. The firm offers the example of music apps that were granted access to background processes in earlier versions of iOS.
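For context on how those earlier music apps ran in the background: an iOS app declares long-running background execution in its Info.plist, and the audio mode in particular keeps an app running independently of the Background App Refresh toggle. A minimal, illustrative fragment:

```xml
<!-- Info.plist fragment: requests continued background execution for audio playback -->
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
```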

According to ArsTechnica, a now-removed blog post from FireEye claimed the firm had "successfully delivered a proof-of-concept monitoring app through the App Store that records user activity and sends it to a remote server. We have been collaborating with Apple on this issue."

FireEye's discovery, if it can be deemed as much, comes as Apple is being scrutinized over an SSL security flaw found recently in both iOS and OS X. The so-called "goto fail" error potentially opens the door for hackers to surreptitiously intercept data meant to be encrypted.

So do I have this right? Getting an app that leverages this "flaw" would require jailbreaking (or else self-installing it as a developer/enterprise deployment on your own device)? How is that news? You can do all kinds of questionable things by jailbreaking/side loading.

So what is seemingly newsworthy is not the background thing at all, but rather a second, unrelated flaw, in Apple's review process. If such a flaw exists (sounds plausible, but they are very vague) then--and only then--could such an app actually get onto normal users' phones?

And they've backtracked, removing their formerly-posted claim to have gotten an instance of this onto the App Store?

C'mon journalists and fact-checkers, I thought you were better than this! Until I see real evidence, this sounds like FireEye putting out misleading info to drum up publicity.


I agree with what nagromme says; there are too many unknowns here for us to be sure we are legitimately at risk. There is, of course, an enterprise program that allows businesses to create their own apps and sideload them (legally) onto their own iOS devices to issue to employees, and in that case it is possible for companies to deploy apps that have not gone through App Store review. Still, I think it would be somewhat unlikely for such an app, if compromised, to make its way into the wild.

We have to be wary of these kinds of reports; they may, as nagromme said, be an attempt to drum up publicity for FireEye. In a way, though, attacks on iOS and Mac apps are a bit flattering: they suggest Apple's products are mainstream now, with an installed base large enough that crackers find the platform worth writing malware for. I am sure Apple is increasingly aware of this, and will plug security holes in short order after they are discovered.

Quote:

Originally Posted by nagromme

So what is seemingly newsworthy is not the background thing at all, but rather a second, unrelated flaw, in Apple's review process. If such a flaw exists (sounds plausible, but they are very vague) then--and only then--could such an app actually get onto normal users' phones?

Apple's app review process isn't what you think. Apple only sees the binary, it's very easy to sneak code past. Just look how many tethering apps got through.

Unlike the SSL problem, which is so major that conspiracy theorists think it was deliberate, this is nonsense. An app needs to piggyback on another app, and then it can catch where you touched the screen, which isn't a keystroke catcher but an on-screen-position catcher.

Quote:

then it can catch where you touched the screen, which isn't a keystroke catcher but an on-screen-position catcher.

The keys are in the exact same place on the iPhone. And even if they weren't, you could easily do analysis to find where they were. For example, on the iPad the "e" key is going to be in one of two columns (split and non-split keyboards), and you could tell which by the spacing of the keys or the absence of center presses. It would be easy to deduce the rows from the vertical range of the recorded positions.
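To make that concrete, here is a minimal Python sketch of the nearest-neighbor inference described above. The key coordinates are invented for illustration; this is not FireEye's code, and a real analysis would calibrate against the actual keyboard geometry of the target device.

```python
# Illustrative sketch with made-up coordinates: recovering keys from logged
# touch positions, given a known (or deduced) keyboard layout.

# Approximate centers (x, y) of a few top-row keys on a hypothetical
# 320-point-wide portrait keyboard.
KEY_CENTERS = {
    "q": (16, 30), "w": (48, 30), "e": (80, 30), "r": (112, 30), "t": (144, 30),
}

def nearest_key(x, y, layout=KEY_CENTERS):
    """Return the key whose center is closest to the recorded touch point."""
    return min(layout, key=lambda k: (layout[k][0] - x) ** 2 + (layout[k][1] - y) ** 2)

print(nearest_key(82, 28))  # a touch near (80, 30) resolves to "e"
```

With a full layout table, every logged (x, y) pair resolves to a key the same way, which is why "position catcher" and "keystroke catcher" end up being much the same thing.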

It is definitely a flaw if an app can capture all events when it isn't in the foreground, though the severity is mitigated by the fact that a malicious app has to be installed first.

Quote:

From what can be gleaned from FireEye's blog, the supposed "flaw" takes advantage of iOS' built-in multitasking components...

This is not just a simple flaw in the iOS code.

Quote:

Originally Posted by s.metcalf

Just great...

C'mon Apple. I thought you were better than this!

Hogwash. While I'm no programmer, I can appreciate that an operating system like iOS involves millions of lines of code managing a highly complex system of components. Given that complexity in both hardware and software, there is no way any smartphone system can be 100% bug-free, regardless of the quantity or quality of the coders involved.

Personally, I am happy any time a flaw is discovered before it can actually be put to use. This gives Apple the chance to plug the hole before harm is done.

Key part. I'm more than confident that Apple is always trying to keep the filth out of the app store. If I want my data and identity compromised, I'll use an Android phone.

Repeat after me: when you submit an app to the App Store, you only send in a binary. Apple cannot determine the logic of your program unless they reverse engineer the binary, which is very difficult to do. This is like me handing you the iTunes binary and asking whether you can find any hidden code in it.
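To illustrate the reviewer's dilemma, here is a hedged Python sketch of a well-known trick; the flag name and server behavior are hypothetical, and this is not a description of FireEye's actual bypass. The shipped code looks benign, and the interesting path only activates when a server the developer controls flips a flag after review.

```python
# Illustrative sketch (hypothetical flag and server): why inspecting a shipped
# artifact can miss malicious logic. Behavior is chosen at runtime by a value
# the reviewer never sees; nothing below looks suspicious in isolation.

def fetch_feature_flags():
    # In a real app this would be a network call to the developer's server;
    # while the app is under review, the server returns benign defaults.
    return {"enable_logging": False}

def handle_touch(event, flags):
    if flags.get("enable_logging"):
        return ("log", event)    # activated post-review by flipping the server flag
    return ("ignore", event)

print(handle_touch((80, 30), fetch_feature_flags()))  # ('ignore', (80, 30))
```

The point is simply that static review of the binary sees both branches but cannot know which one the server will select once the app is in users' hands.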

You have to understand that the main way iOS ensures security is through limited app permissions, and that is what has been breached here. The main purpose of app review is to check for policy issues like porn and in-app purchases.

News articles will word it like it's a keylogger, so people assume it exploits bugs, but it's more about functionality choices Apple made. I don't know why they allow background apps to monitor screen touches, but iOS 7 has a way to turn it off as long as the app isn't playing music; maybe they were allowing for custom gestures, or simply gave third parties the same level of functionality their own background apps have. There was a malware example for iOS and Android that used this technique.

To exploit this, an attacker would have to get the victim to download a malicious app and leave it open in the background while doing something sensitive in another app. Apple could isolate touches to the current app and pass only gestures to their own OS background processes. These guys say they're working with Apple on a fix.
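The mitigation suggested here can be sketched as a simple dispatcher. This is an illustrative model, not Apple's implementation: raw touch events are delivered only to the foreground app, and background apps receive nothing.

```python
# Minimal sketch (hypothetical dispatcher) of the suggested mitigation:
# route raw touch events to the foreground app only.

def dispatch_touch(event, foreground_app, registered_apps):
    """Deliver the event to the foreground app; background apps get None."""
    delivered = {}
    for app in registered_apps:
        delivered[app] = event if app == foreground_app else None
    return delivered

apps = ["Mail", "BackgroundLogger"]
print(dispatch_touch((120, 240), "Mail", apps))
# {'Mail': (120, 240), 'BackgroundLogger': None}
```

A background app under this model could still be handed high-level system gestures, but never the raw coordinates that make key inference possible.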