now to me, this gets to one of the hearts of the matter. when people say there's no malware for the iphone, they're only talking about non-jailbroken phones. the pertinent difference between a normal iphone and a jailbroken iphone is that normal iphones can only install apps from the app store. the app store is a so-called walled garden where all the apps go through a screening process to keep out undesirable programs.

so what people really mean when they say no malware for the iphone is that there's no malware in the app store. this is an important distinction, because the iphone ecosystem (and by extension, the threat landscape) extends beyond the app store. when chris dibona attempted to downplay the threat malware posed to android devices by pointing to google's efforts to keep their android marketplace clean, a number of folks were quick to point out that the android ecosystem extended beyond google's android marketplace. it seems strange, then, that people would forget the same line of reasoning applies to the iphone as well.

one other thing (well, the only other thing, really) that mikko said was:

and you know what? why should they allow them when there's apparently "No malware for iPhones"? whether or not there is malware for the iphone, apple doesn't want people to think there is. there is this (rather old) idea that computers can be as easy to use as an appliance (like a toaster). this idea is actually very appealing. it promises computers that just work, computers that don't get malware, computers that are easy and safe and worry free. that promise is part of the secret sauce behind apple's marketing, but if they allowed AV products in then it would dispel the illusion of the appliance computer and apple's products would lose their lustre. it's very convenient, then, that AV vendors are willing to be complicit in apple's marketing by repeating the claim that there's "No malware for iPhones".

but such unqualified claims are, as mikko has revealed, not technically true. it's not that there's no malware for iphones, it's that there's no malware in the iphone app store.

but wait, is that really true? is there no malware in the app store at all? i'm not sure that's true when we've recently been made aware of apps in the app store that collect and send personal information to a remote server without the user's knowledge or consent. but it's about time i turned my attention towards the much more verbose and nuanced discussion that sean sullivan and i had on the subject. perhaps he can shed light on why these personal info stealing apps shouldn't be considered malware. while mikko didn't question the classification of flexispy as malware, sean informed me that f-secure no longer calls it malware.

that's right - in spite of the fact that it is designed and marketed as a tool for spying on other people, it is not classified as spyware or malware because it was given an installation interface - meaning that the attacker has to have physical control of the phone for at least as long as it takes to install an app. now, on the desktop this might be a meaningful mitigating factor, but on mobile devices where physical access is so much easier to achieve? come on...

why exactly that stops it from being malware in general or spyware in particular in the context of mobile device security i still can't fathom, but sean offered up two things by way of explanation. one being a concern over being sued... by malware vendors. this rationale is something i heard from dr. solomon years and years ago, but i have to admit i had hoped that the industry had become less spineless in the interim. i guess that was too much to hope for. google may stand up to the government on behalf of its users (perhaps not always, and perhaps it doesn't always succeed, but it has tried), but apparently anti-malware vendors only stand up for their users when there's zero risk they'll be challenged.

the other was a definition of spyware: "Software that self-installs on a computer, enabling information to be gathered covertly about a person's Internet use, passwords, etc."

apparently it's not enough that the software spies on you in order for it to be called spyware, it has to "self-install" as well. now i'm sure i must be missing something, because this definition seems to exclude anything where the victim is socially engineered into installing the software (it's hard to call it self-installing if the victim is the one installing it). it also seems to exclude anything that utilizes the particular trojan horse case where the software actually does perform the function it claims to, so the payload is additional functionality instead of strictly misrepresented functionality. a game that also steals passwords, a text editor that also sniffs network traffic, webcam software that just happens to send the video stream to a second undisclosed location in addition to the intended recipient - all of these are examples of software that ought to be called spyware but which the victim actually knowingly installs (because the undesirable functionality is unreported) and thus fails to meet the "self-install" criteria. this is precisely the type of situation users of the photo sharing iphone app called path faced.

now, sean also pointed me towards the anti-spyware coalition's risk model description document. i had hoped it would help me to learn more about this "self-install" concept that sean assured me was part of an industry agreed upon standard definition. things didn't turn out that way, since the term "self-install" doesn't appear in that document, but the topics of installation and distribution do figure prominently in the contexts of both risk factors and consent factors. unfortunately this document from 2007 appears once again to be geared to desktop computing rather than mobile computing. that's probably not too surprising considering it's 5 years old now, but it does highlight the age-old problem of letting context into the classification process. mobile devices are easier to gain illicit physical access to, as well as being shared more freely (and more frequently) in social circumstances by their owners. the issue of consent at the point of install has far less significance as a risk mitigation for mobile devices. furthermore, the issue of consent at the point of install pretty clearly drops the ball in the case of trojans because it's not necessarily fully informed consent.

as the risk model description document demonstrates, somewhere along the line the industry gave up on basing its classification system on functional definitions. sean insists that this is a "stricter process" but i think it's more correct to say that it utilizes more criteria than a functional definition system would. utilizing more criteria doesn't always lead to a stricter process because not all criteria are created equal and, at least in the case of the risk model description document, some of those criteria are used to create exceptions (which are generally not the hallmark of a strict process).

one of the last things sean wondered was how the AV industry could possibly use my (supposedly) broader definition(s) and not be accused of FUD. now, aside from the fact that the industry is already accused of FUD (and worse) pretty much regardless of what they do, i think it's important to spell out one of the key differences between a functional definition and the kind of definitions that sean sees in use. definitions that include contextual evaluation are judgements; they engender choice and leave room for agendas. a functional definition has no judgement, it is purely descriptive of the functional capabilities of what is being classified. you can no more be blamed for saying software that spies is spyware than you can for saying water is wet or the sky is blue. there's no silver bullet to make accusations go away, but if you take judgement out of the equation it should render those accusations baseless.

so why is all of this important? because it appears that we've somehow stumbled upon a way in which malware can be classified as "riskware" instead of malware. nobody hears about the riskware classification, nobody cares. they hear "No malware for iPhones" and they shut the rest out because that's all they needed to know (or at least, according to traditional notions of malware, that should have been all they needed to know). classifying malware as something other than malware seems to be what's enabling people to make the "No malware for iPhones" claim, like some kind of terminological shell game. "No malware for iPhones" makes people think the devices are safe and worry free, but there are risks, and not just for those who jailbreak. "No malware for iPhones" is creating a false sense of security, and with the revelations that have been made about apple's abject failure to lock down a particular type of personal information and the near ubiquitous exploitation of that failure by app developers, it seems like the stuff of snake-oil.

i tend to think that when people face risks they want to know about them rather than be told there's nothing to worry about, and i tend to think that when those risks come in the form of software that acts against the user's interests, informing the user is the AV industry's job. some people don't want that to happen; they want their own interests to take precedence. if the AV industry allows that to happen through inaction (or worse, facilitates it) then they don't deserve the reputation they have for protecting the user. the industry may not be able to put AV software on iphones yet, but they can certainly do a better job of raising awareness of the risks than going around telling people there's "No malware for iPhones". maybe when public awareness is raised apple will change their ways.

22 comments:

Vess said...

A few remarks:

1) For jailbroken iPhones, there is not just some obscure spyware. At least a couple of viruses exist for them.

2) You can't really compare the iPhone and the Android security models. A non-jailbroken iPhone can only download from the Apple Store. A non-rooted Android device can download from anywhere (not just from the Android Market). Although this ability is not turned on by default out-of-the-box, it can be turned on by just checking a checkbox in the settings. Apps on the Apple Store are, supposedly, reviewed by humans in source code before being allowed there. Anybody can upload whatever they want to the Android Market if they become a developer - it just costs $25 and can be done pretty anonymously. Google removes malicious apps from the Market only after they have been found to be malicious. Basically, the Apple model is much more restrictive - which makes it somewhat safer. This is not necessarily a good thing - I'd take malevolent freedom over benevolent dictatorship any time - but it does provide for better security.

3) It is perfectly possible to get malicious apps on the Apple Store. Somebody did it once just to prove that it was possible. Apple banned him forever as a developer. It's also possible to do it so that even a person reading the source code will be unable to understand what the app does until the right conditions are fulfilled for the app to do it. Google "clueless agents" for an excellent paper on the subject. There has been malware like that in the PC world; I wouldn't be surprised to see it in the mobile world some day soon.

4) Theoretically, it might be possible to jailbreak an iPhone without the knowledge of its owner - e.g., just by accessing a specially prepared Web page. This would require the use of an exploit. Of course, Apple will patch the hole, but still... Combine this idea with the previous one and it becomes possible to smuggle a malicious app into the official Apple Store which, at some point in time, starts jailbreaking the iPhones it is installed on, downloading additional malware, etc.

5) Apple's official line for not allowing AV apps is that "they would interfere with the telephony, if they are constantly active" (which they have to be, in order to provide on-access scanning). Most likely, this is a load of crap (remember the story when Apple instructed their tech support not to help people infected with DNSChanger - or even to confirm that they were infected - because it would spoil the "no malware for the Mac" image), but the fact is that Apple took a long time before they allowed "normal" apps (i.e., not ones developed by them and pre-installed on the iPhone) to use multi-tasking.

6) When the AV vendors have pockets as deep as Google, then we can stop worrying about some two-bit spyware vendor suing us. :D

7) I am not going to get involved in the argument of what is malware, because it is a pointless one. Viruses can be defined formally - a virus is a program that replicates. Programs either replicate or they do not, and this is an objectively observable fact. Any other kind of malware (e.g., spyware, Trojan horse, etc.) involves such fuzzy terms as "damage", "intent", "unwanted", etc. - which are impossible to define formally.

2) i'm not comparing security models, merely highlighting a property they have in common - that neither one is actually limited to its official software distribution channel. one may be more permissive of alternative channels than the other, but both have alternative channels.

3) i think i've heard about this sort of thing before, but i'll keep "clueless agents" in mind.

4) i seem to recall a website that could jailbreak iphones just by visiting it and operating a particular UI control on the page to indicate consent. i'm sure the UI control was not technically necessary.

5) i do indeed recall apple's treatment of the dnschanger issue. it seems to me that apple treats malware issues as marketing issues and will only behave properly when the issues are dragged into the cold light of day for all to see.

6) do no AV vendors have deep enough pockets? i find it hard to believe that symantec doesn't. users want protection, they expect AV vendors to help, and if ALL (or a not insignificant proportion) of the vendors classified something bad as malware then the two-bit spyware vendors can't realistically do anything about it. so long as you're all united you have a defense - it's only when you swim alone that you're at risk. co-operation between vendors to fight this kind of legal threat doesn't seem any more unreasonable than the co-operation that already goes on to take down botnets or deal with other threats.

7) the definitions that appear to be in use may currently have that fuzzy property, but that doesn't mean they have to. sending personal information to a remote server is an objectively observable fact too.

Re #4: Yes, there was such a site. It wasn't malicious - it was dedicated to jailbreaking iPhones and clearly stated what it did. Still, it was possible to jailbreak the iPhone by just accessing a Web page. I'm surprised that nobody made a malicious version of this. Meanwhile, Apple has patched the hole used by that particular exploit.

Re #6: Trust me, Symantec's pockets are nowhere near as deep as Google's. :D But, really, there are lots of sensitive (monetary) issues involved and it's not just the threat of the spyware vendor suing us. Remember the issue of CarrierIQ? Having in mind what it could do, it wasn't difficult to classify it as spyware. But it came pre-installed on many phones. Some anti-virus software (e.g., Lookout) is also pre-installed on many phones. Can you imagine what would happen if the user runs his pre-installed AV program and it tells him that his just-bought phone has malware on it? The support centers of the carrier will be swamped with calls. What do you think the carrier will do? Remove CarrierIQ, which was developed by their request and does something they need - or remove some obscure AV program which they don't understand and with whose reports they don't agree anyway? This would lead to significant financial loss to the AV vendor. It is not surprising, then, that practically no general purpose AV scanner is reporting CarrierIQ as malware. Instead, the AV companies released separate stand-alone apps, dedicated to detecting CarrierIQ. The user has to explicitly download them. This is just an example, of course - there are many sensitive issues involved. But, basically, the bottom line wins every time - not the interests of the user, except when the two coincide.

Re #7: Trust me, we don't want to go there. What is "personal information"? What if the user doesn't mind it being sent, in exchange for a particular service provided? What if the user is asked for permission first? What if the asking is done in a way that doesn't make it clear to the user what exactly will happen? What if it is sufficiently clear to some people and not to others (there are plenty of stupid people, after all)? All these things are impossible to define precisely.

Re #6: that's an interesting hypothetical situation. how about if we add the hypothetical situation where ALL mobile AV detected CarrierIQ as malware. then removing Lookout AV would only delay the inevitable tidal wave of support calls. you think the carriers would only test with the AV that came pre-installed if that AV threw up an alarm? they'd be incredibly shortsighted if they did. i tend to think that if ALL mobile AV detected CarrierIQ as malware the carriers wouldn't get rid of Lookout, they'd get rid of CarrierIQ - that would be the only way to avoid the support calls.

Re #7: those are some good questions to ask, but most of them are pertinent to the issue of classifying something as a trojan, not necessarily spyware. i don't believe spyware is a proper subset of the trojan set. i believe there are legitimate cases to be made for spying (such as monitoring the computer use of prison inmates) where the installation of the software is done with full consent of the computer's owner - but i don't think that precludes it from being called spyware.

the question of what is personal information is interesting but let me ask you the opposite - what ISN'T personal information? it seems to me that all the data a user might place on a device is their personal data. there are circumstances where that data may also be classified as corporate data but that doesn't automatically mean it isn't also the device owner's personal data. the only data a user might put on a device that might not fall under the heading of personal is data s/he may not have a legitimate claim of ownership to - but software is poor at making such ownership determinations, so i can think of no data entered by the user that shouldn't be treated as personal.

Google Latitude can be used to track somebody's phone. I guess we should call that "spyware" because it "spies".

Spy-tools don't install themselves.

With all due respect to Vess, we didn't call CarrierIQ malware at all. Because it's not.

75+ million samples in our back end, but it's all just "malware"… I'm glad doctors don't identify pathogens so loosely.

Libel laws. We obey the law, which is actually rather strict in Finland/the EU. We don't avoid lawsuits because we are "spineless". We avoid them because we are a responsible law abiding company. (And we also have a responsibility to our shareholders.) Or perhaps it would make more sense to fight a pointless battle of semantics? What a waste of resources. Instead, we simply prompt our customers that "riskware" has been detected, and allow them to remove it.

Your claim that people hear "there's no malware" for iPhones, and then form the conclusion that all is well is unsupported by evidence. Security and privacy are similar but different, and it seems that most of what you would claim is malware demonstrates unwanted behavior that may (or may not) be a privacy issue.

"1) For jailbroken iPhones, there is not just some obscure spyware. At a couple of viruses exist for them."

Actually… the correct classification is *WORMS*. There are some worms that affected iPhones. There are NO viruses for iOS, jailbroken or otherwise.

most interestingly (to me), you don't seem to think the different computer use-cases inherent to the smartphone platform change the significance of what have traditionally constituted consent factors for the desktop platform.

"75+ million samples in our back end, but it's all just "malware"… I'm glad doctors don't identify pathogens so loosely."

well if they call them all "pathogens" then i guess they do. that's kind of the point of having an umbrella term that covers all the bad stuff, is it not?

"Libel laws. We obey the law, which is actually rather strict in Finland/the EU."

when you use definitions that describe the function of a thing, then classification by those definitions represents a statement of fact and cannot be libel.

of course, i am not a lawyer so your mileage may vary. it could be you're in a jurisdiction where a claim doesn't have to be false in order to qualify as libel. in which case, i suggest lobbying for a change in the law, because being trapped in a system where speaking the truth is a punishable offense is an untenable position to be in.

"Instead, we simply prompt our customers that "riskware" has been detected, and allow them to remove it."

right, in the public sphere you say there's nothing bad that can get on the phone but in the private sphere you say 'nevermind what we said before, you've got a baddie'.

a responsible company doesn't wait until someone becomes a victim before informing them that there are risks out there.

"Your claim that people hear "there's no malware" for iPhones, and then form the conclusion that all is well is unsupported by evidence."

then please present this evidence. i would like to see studies showing that iphone users are recognizing the risks that apps represent in spite of the oft-trumpeted claim that there's No malware for iPhones. i would like to see evidence that iphone users are exercising caution in the app store that can't be explained away by simple frugality.

"Actually… the correct classification is *WORMS*. There are some worms that affected iPhones. There are NO viruses for iOS, jailbroken or otherwise."

i'm sure you know that worms meet cohen's formal definition for virus. i imagine that's why some people consider them to be a subset of the viral set.

Trojan horse programs install themselves with the help of an organic component called "the user" who has been socially engineered into believing that they are installing something else.

I shall clarify/correct myself: Spy-tools are not unknowingly installed.

It's *your* claim, you support it.

Lobby to change libel law? Umm, yeah… that would be a productive use of resources, right? (Plus, they actually further social justice.) No, thanks, I think it's better if potential unwanted software is detected as riskware, our customers are alerted, and they can remove it. They don't really care how it's technically classified.

Cohen's formal definition? Oh good grief, if you don't even want to accurately define the differences of the iOS malware that has existed (for jailbroken iPhones) …

then fine, you win, everything is a virus, and there's iPhone malware under every rock and stone. #sarcasm :-)

unknowingly with respect to whom? the device owner or the person who happens to have control of the device at this particular moment?

there's plenty of malware that gets installed with someone's knowledge, but that doesn't mean that someone is qualified to give consent.

"It's *your* claim, you support it."

i assume you're referring to the claim about how people interpret No malware for the iPhone. i believe the conclusion of safety logically follows from the acceptance of that statement as truthful/factual. i see no other logical conclusion a normal person could arrive at - however you mentioned evidence to the contrary, so i would like to know more about that.

"Lobby to change libel law? Umm, yeah… that would be a productive use of resources, right? (Plus, they actually further social justice.)"

lobby to change it if it allows the truth to be classified as libel. if you think punishing people for telling the truth furthers social justice then that's an entirely different conversation.

"No, thanks, I think it's better if potential unwanted software is detected as riskware, our customers are alerted, and they can remove it."

and how would that work on a platform that doesn't allow anti-virus software?

"They don't really care how it's technically classified."

right, they only care when they're given a false sense of security.

"Cohen's formal definition? Oh good grief, if you don't even want to accurately define the "

how is it not accurate to classify worms as viral malware (and thus, a virus)? it's my understanding that this is de rigueur in the industry.

Re #6: You've got to be kidding. AV producers can't even agree on the names of the things their programs detect, let alone on detecting a particular thing.

Oh, and carriers test only whether the phones they give to the user actually work - not whether they are infected or whether the apps on them report something some users might not like.

And don't forget - CarrierIQ was developed because the carriers wanted it. Even if all AV programs objected to it, the carriers wouldn't just remove it - they would want it changed so that the "conflict" is avoided.

Re #7: Spyware is just as impossible to define objectively, because its definition relies on fuzzy terms like "personal data", "permission", "user consent" and so on.

All the data on the user's phone is definitely not "personal data" that "shouldn't be sent out". The phone's cell location is such data. If it isn't sent out, the phone won't work. The phone number the user is calling is such data. If it isn't sent out, the phone won't work. And so on and so on. It's useless to argue this, Kurt. Trust me, I've been there. These things are impossible to define objectively.

Sean: With all due respect, CarrierIQ is not installed by the user, does not announce its presence to the user, does not leave the user the option not to use it, actively hides from the user, the user cannot remove it easily, and it sends to somebody else information which many users consider private. This is definitely spyware in my book - but, as I said, these things are impossible to define objectively, so arguing the point is, well, pointless. Besides, it doesn't really matter whether you call it "malware", "spyware", "riskware" or "potentially unwanted application". In all cases, from the point of view of the user it is "stuff I don't want on my phone" and most users would expect their AV program to detect such stuff. The fact that many AV vendors released stand-alone apps to detect it means that they know perfectly well what the user needs and wants; it's just that monetary reasons prevent them from implementing detection of this thing in their main product.

(Kurt - the ability to comment for this blog SUCKS!!! The CAPTCHA keeps annoying me, I can't log in via OpenID, there is a limit on the number of characters I can post per message...)

Re: worms. Oh, God, not this one! (Kurt, in case you didn't know, the easiest way to start a fight at a gathering of AV people is to ask "what is a worm".)

There are 3 schools of thought:

1) Worms are viruses that explicitly use the network to replicate themselves. "Explicitly" means that they are network-aware and not, say, just happen to infect across the network because they infect everything listed in the PATH and a network drive just happened to be listed there. This is the school of thought I subscribe to. According to it, there are viruses for the jailbroken iPhones. Yes, all these viruses are worms, but just because this is the case, to claim that there are no viruses for these devices is ridiculous.

2) Worms are a class of self-replicating malware different from viruses. They do not infect other programs; they are self-contained and spread by themselves. I find this definition incorrect. After all, we have a perfectly good term for viruses that attach themselves to other programs - we call them "parasitic viruses". So, self-contained viruses are simply "non-parasitic viruses"; there is no need to call them "worms". Furthermore, by this definition one would classify as "worms" things that have been historically classified as viruses - e.g., boot sector viruses. They don't infect other programs parasitically; they are self-contained. And before Sean comes up with the silly argument that they infect the existing program in the boot sector - no, they do not. They move it elsewhere. They infect the place, not the program. They don't "attach" themselves to a program. Besides, there are boot sector viruses that do not even do that; they simply contain the functionality of the original program so they do not need to preserve it in any way. And if something is not preserved (as it no longer exists), it obviously cannot be infected parasitically. The same goes for overwriting viruses that do not preserve any part of the host. We don't call these "worms", either.

3) Worms are programs that replicate without requiring any user intervention. Like, you know, you have to actually run (manually) a virus-infected program for the virus to run and replicate further; this is user intervention. Worms are "self-instantiated" self-replicating programs like Blaster, etc. I disagree with this too; according to it very few self-replicating programs would be worms and, besides, why wouldn't worms be viruses even by this definition?! Furthermore, what is user intervention? What if a parasitic virus happens to infect a program that is run automatically by a cron job? Does that automatically change it from a "virus" to a "worm"? What nonsense!

Anyway, all this confusion occurred because the first historically well-known worm, the Morris worm, has all three properties - it uses the network, is self-contained and spreads automatically. Everybody agrees that it is a worm. But different researchers have picked different properties as indicators for "worminess", thus the confusion.

Anyway. Debating this, too, is a pointless waste of time. For me, worms are viruses and therefore there are viruses for the jailbroken iPhones. You are free to disagree (and be wrong). :P
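The three schools above can be summarized as predicates over a sample's observable properties. The sketch below is illustrative only - the property names are hypothetical - but it shows how the same sample gets different verdicts depending on which property is taken as the indicator of "worminess":

```python
# Illustrative only: the three schools of thought expressed as predicates
# over a sample's observable properties. Property names are hypothetical.

def school_1_is_worm(sample: dict) -> bool:
    # Worms are viruses that explicitly use the network to replicate.
    return sample["replicates"] and sample["network_aware"]

def school_2_is_worm(sample: dict) -> bool:
    # Worms are self-contained replicators that don't infect host programs.
    return sample["replicates"] and not sample["parasitic"]

def school_3_is_worm(sample: dict) -> bool:
    # Worms replicate without requiring any user intervention.
    return sample["replicates"] and not sample["needs_user_action"]

# The Morris worm has all three properties, so every school agrees:
morris = {"replicates": True, "network_aware": True,
          "parasitic": False, "needs_user_action": False}
assert school_1_is_worm(morris)
assert school_2_is_worm(morris)
assert school_3_is_worm(morris)

# A non-parasitic boot sector virus splits the schools:
boot_virus = {"replicates": True, "network_aware": False,
              "parasitic": False, "needs_user_action": True}
assert not school_1_is_worm(boot_virus)  # no explicit network use
assert school_2_is_worm(boot_virus)      # self-contained (the objection to school 2)
assert not school_3_is_worm(boot_virus)  # a user has to boot the infected disk
```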

@Vess:Re #6: it doesn't have to be ALL the vendors, just a not insignificant portion of them. that way it can be argued that detection of the item in question is more or less an industry standard practice.

AV producers may not be able to agree what to call the things they detect, but there is a huge overlap in what they detect so they are certainly able to agree on something.

and if the carriers get CarrierIQ changed, then good for them - hopefully the modified version would be better behaved, lest detection gets added for v2.

Re #7: i didn't say all the data on the user's phone, i said all the data the user put on their phone. the user doesn't put the cell location on the phone.

furthermore, sending out data that the user explicitly asks it to send out (like when dialing a number) is a different matter. just as XCOPY making a copy of itself when instructed by the user doesn't qualify as self-replication, sending data the user instructs the software to send doesn't qualify as spying.
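the initiation test can be stated as a simple predicate: the same data leaving the device is or isn't spying depending on whether the user initiated or was informed of the send. this is a minimal sketch of that reasoning, with hypothetical parameter names:

```python
# illustrative sketch of the initiation test: sending data the user
# explicitly asked to send is not spying; covert sending is.

def is_spying(data_sent: bool, user_initiated: bool, user_informed: bool) -> bool:
    # spying requires data leaving the device without the user either
    # initiating the transfer or being informed of it
    return data_sent and not (user_initiated or user_informed)

# dialing a number: the user initiates the send, so it isn't spying
assert not is_spying(True, user_initiated=True, user_informed=True)
# covert upload of the address book: neither initiated nor disclosed
assert is_spying(True, user_initiated=False, user_informed=False)
```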

Re worms: i'm actually aware there are multiple definitions. i recall debating them on USENET (in fact that was one of the things i enjoyed most about USENET, and similar types of debates are one of the things i've enjoyed about your comments here over the years).

Re commenting: i'm sorry the experience hasn't been more positive. i'll see what i can do about those problems, but i'm not sure i'll be able to do anything about some of them without going to a 3rd party comment system (and i've yet to see one of those that doesn't annoy the heck out of me).

"lobby to change it if it allows the truth to be classified as libel." The truth? :-)

Umm, there are courses devoted to the subject, and I've read more Plato than I ever wanted to as a result. But that's neither here nor there.

Regarding libel law in Finland, there is actually an example of a blogger who wrote something such as: “the principal of such and such school is doing a bad job.”

Now… as an American, I can tell you, such a statement sounds to me like an honest opinion, and therefore, it is a true statement of that subjective opinion, i.e., the truth.

But… the blogger was found guilty of libel! Because here in Finland (and much of Europe), you can't just go around and publish your own particular version of the truth.

The ex-prime-minister also blocked the publication of his former mistress's tell-all book using the same laws/protections.

That kind of free speech limitation outraged me when I first moved here, but now, I only need to watch some American news feeds and I shake my head at how uncivilized the debate has become. So these days, I think perhaps the strict libel laws are of some value.

Regarding your idea that people hear there's no malware and therefore assume all is well: I think that I could throw a rock and hit an article about overreaching apps. So I really don't see much evidence that folks have a false sense of security.

Or maybe they do have a false sense of security, but they don't have a false sense of privacy. The conclusion that I reach from user comments is that malware = security issue, and apps = privacy issue. Just because somebody says there isn't a security problem doesn't mean that people aren't concerned about privacy problems.

Vess: I totally agree that it doesn't matter what it's called as long as we block it.

Re: CarrierIQ. It was (factory) installed on operator-branded phones. LOTS of software comes on those things that isn't installed by the user, and the devices come with a long, long, long contract with many, many limitations.

Caveat emptor.

I never contracted a phone when I lived in the USA. I always bought an unlocked phone and used pre-paid services.

Something of note regarding CarrierIQ is that they didn't have a single European operator customer. That's because European networks are standardized, and the operators can get all the same info (and more) that CarrierIQ collected from the network side. It's a testament to the pathetic state of some American telecom “patchworks” that a client-side solution such as CarrierIQ was even an option.

Should we detect the operators' ability to track/monitor their customers from the network side as "malware"? I don't think so…

If the software had been badly designed so as to allow other malicious software to piggyback, then there would be some risk.

@Sean:clearly, "the truth" was a poor choice of words. it sounded good but i meant "the facts". recall that i've mentioned several times a classification system based on functional definitions.

the blogger in your example made claims that involved judgments (X is doing a bad job). would the outcome have been the same if the blogger had published just facts without judgments? X did this, X failed to communicate that, etc. if the blogger could still have been found guilty even if s/he reported only objectively observable facts without passing judgment or editorializing, then there is a serious problem with the justice system.

likewise, if you can be sued for libel for classifying something as X purely based on its observable functions then there's also something seriously wrong. you NEED to be able to inform people about the function of things, and you often need to be able to use a kind of terminological shorthand for conveying that information.

with regards to the false sense of security, just because you can throw a rock and find info on bad apps doesn't mean that's what most people see. you can find such things because you know enough to look. your confirmation bias favours seeing such examples, but another person's confirmation bias can favour something completely different, and when a person comes from a place of not having knowledge their confirmation bias is most likely to align with what major media and marketing tells them - which is that these devices are safe and worry free. apple has spent millions fostering precisely that image.

by the way, using user comments as a measure has some very big biases in it as well. comparatively very few people actually leave comments - they are a vocal minority who frequently have more awareness of the issues than a person who doesn't leave (or read) comments.

"Should we detect the operators ability to track/monitor their customers from the network side as "malware"? I don't think so…"

of course not - an 'ability' isn't software. it's not a physical or logical thing you can point at and say "that there is X".

"No CAPTCHA: Yes! :-)"

yeah, and now i have a deluge of spam comments. for the time being it's better to accept that i'm going to receive such spam and just deal with it (they never get published, so it's purely a behind the scenes problem), but maybe during quieter periods it would make more sense to re-enable the CAPTCHA.

"Facts" are open to interpretation, and the conclusions drawn from facts may (and do frequently) differ.

FlexiSPY is a tool. It is possible to knowingly, legally, use and install it. Sure, that's not how it is marketed, but if we just analyze the software itself, it is a tool.

And calling it malware is akin to calling it "evil-ware". Or at least, that is what such a vendor will claim in court. If we classify it as riskware, it's akin to calling it "caution-ware".

Our customers will be warned about it if it's installed, and they'll be able to remove it. Why isn't that good enough?

I think of "spy-tools" as I do guns. Guns kill, and gun manufacturers often use sleazy marketing tactics, therefore, guns are evil, right? But it really isn't that simple. People use guns to kill, and the gun itself is just a tool. A gun can be used to protect as well as murder.

The core of FlexiSPY is a monitoring tool. They originally managed to get that component signed by Symbian because it was included in a Parental Control child monitoring application. I think that version of the software is still being sold for that purpose.

The functions of the software are the same. Basically the same software, different product, sold through different channels. Still malware in your opinion?

Re: Confirmation bias… I'm well versed on the topic. http://youarenotsosmart.com/2010/06/23/confirmation-bias/

I'm not throwing rocks at tech sources. And it isn't just news articles. I come across this stuff frequently when I read academic studies, EU reports, etc.

Apple may well have spent millions fostering the image that all is well, but none of the segmentation research that I've seen supports the idea that the majority of people think this is so. They are concerned about security and privacy. They may see iOS as being more secure, but they still worry about privacy.

It all ties into "cloud" stuff. We're working on cloud content services with our ISP/mobile operator partners. I've seen lots, and lots, and lots of consumer segmentation research. All of it supports the idea that people are concerned with privacy issues.

That there is or isn't "malware" is a separate debate. And there isn't any malware for iPhones.

On Android, it's possible to download (via 3rd party market), install, and grant permission to an app that will use your phone number to subscribe you to an SMS subscription service. Nothing like that exists on iPhone and I think that is worth noting. So Mikko's "no malware" statement has value.

@Sean:yes, interpretations of facts can differ - that doesn't mean that all interpretations are subjective, however.

under a functional classification system the facts of a thing's functional behaviour are interpreted differently than under a contextual classification system. the difference is due to using different criteria. the criteria in a functionally defined classification system are not subjective - a piece of code either performs a function or it doesn't.

to be sued on the basis of such a classification means that you can be sued for a statement of facts without adding your interpretation, because a functional classification is just a shorthand for a functional description.
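to make the distinction concrete, here's a minimal sketch (with entirely hypothetical function names and labels - not any vendor's actual scheme) of what a purely functional classification looks like: a behaviour either was observed or it wasn't, and the label follows mechanically from that, with no judgment about intent or context entering into the result.

```python
# illustrative sketch of a functional classification system.
# the observed-function names and the labels are hypothetical,
# chosen only to show the mechanics.

RULES = {
    "self_replication": "virus",
    "covert_data_exfiltration": "spyware",
    "keystroke_logging": "spyware",
    "premium_sms_subscription": "dialer",
}

def classify(observed_functions):
    """map a set of observed behaviours to classification labels.

    each behaviour either was observed or it wasn't - the output
    is a shorthand for the functional description, nothing more.
    """
    labels = {RULES[f] for f in observed_functions if f in RULES}
    return sorted(labels) or ["unclassified"]

print(classify({"keystroke_logging", "covert_data_exfiltration"}))  # ['spyware']
print(classify({"file_copy"}))  # ['unclassified']
```

note that nothing in the rules table passes judgment on whether the behaviour is justified - that's exactly the point: the label is a statement of observable fact, not an opinion.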

"FlexiSPY is a tool."

all software is a tool, even malware.

"If we classify it as riskware, it's akin to calling it "caution-ware"."

i'm well aware of this, and frankly i don't care what it's called when the product alerts on it - the fact that it alerts the user to its presence in such a way as to suggest that they might not want it is good enough in the context of scanner operation. but there's more to security than running anti-malware apps - a lot more. one of those things is a pervasive awareness of security during regular device operation, and the claim that there is No malware for iPhones works against that awareness.

"Our customers will be warned about it if it's installed, and they'll be able to remove it. Why isn't that good enough?"

because that sets the anti-malware software up as the only security measure. the first defense should always be the user's own behaviour. that behaviour is molded by their mental models of the device, the ecosystem, the threat landscape, etc. when someone says there is No malware for iPhones they are misrepresenting the ecosystem and the threat landscape; and when the person saying it is an authority on the subject, the negative impact on the formulation and maintenance of those mental models is even greater.

"The functions of the software are the same. Basically the same software, different product, sold through different channels. Still malware in your opinion?"

it's still spyware in my opinion. i accept that there are legitimate uses for spyware (not to mention other things that might normally be considered bad), but i don't think that should change the classification. we don't call guns something different when they're used properly by LEOs, for example.

"Apple may well have spent millions fostering the image that all is well, but none of the segmentation research that I've seen supports the idea that the majority of people think this is so."

really? it's worked remarkably well for their desktop market. the legion of apple fanboys are notorious for their blind faith in the security of the platform. the fact that iphone users aren't in an uproar over the lack of security software for the platform suggests to me that they don't consider there to be any software threats.

"They may see iOS as being more secure, but they still worry about privacy."

if they're concerned about privacy without being concerned about the agents which could violate their privacy then their mental model is inadequate.

they're concerned about the privacy of the data they knowingly give to service providers. they're concerned about what happens to the data after it gets into those big databases. most never considered the possibility that data they never intended to share was being put in big databases. the issue with path wouldn't have blown up the way it did if this was something that had already been present in people's minds.

@Sean:"On Android, it's possible to download (via 3rd party market), install, and grant permission to an app that will use your phone number to subscribe you to an SMS subscription service. Nothing like that exists on iPhone and I think that is worth noting. So Mikko's "no malware" statement has value."

i believe it's already been established that you CAN get software from alternative sources onto an iphone (it's just not as easy as it is with android), and that malware for that platform does in fact exist.

well, i finally ran afoul of the 4096 character limit. my research suggests that this is due to a defect that allowed the creation of undeletable comments, and apparently google chose this workaround rather than actually fixing the problem. hopefully an actual fix is still in the works.

it is a bit annoying, but notepad++ gives enough information to easily break one's comments into chunks of the right size.
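for anyone else bumping into the limit, the splitting itself is easy to automate. a rough sketch (the 4096 figure is blogger's limit; the helper name and whitespace-breaking strategy are my own choices):

```python
# hypothetical helper for splitting a long comment into chunks that
# each fit under a comment-length limit, breaking at whitespace so
# words aren't cut in half.

def split_comment(text, limit=4096):
    chunks = []
    while len(text) > limit:
        # break at the last space before the limit, if there is one
        cut = text.rfind(" ", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks

parts = split_comment("word " * 2000, limit=4096)
print(all(len(p) <= 4096 for p in parts))  # True
```

each chunk can then be pasted as a separate comment in sequence.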

besides questionable apps in the app store, there aren't a lot of attacks on non-jailbroken iphones (at least not a lot of documented attacks). that's why security experts can still get away with claiming the iphone is malware-free; if there were a lot of evidence to the contrary then such claims would be much harder to get away with.

in all likelihood you didn't get malware by clicking on a bogus link, but that doesn't mean it couldn't happen. apple still has their heads buried in the sand about such possibilities, so there aren't a lot of tools available for looking for problems on iphones.

Great post Kurt, I can't believe I hadn't come across this earlier. The fight appears to be one of public perception, and I have been trying to educate various people about some of the truisms of mobile malware, rather than having them just believe all the rhetoric.