Posted
by
kdawson
on Monday May 17, 2010 @11:31PM
from the so-simple-a-kiddie-could-do-it dept.

Trailrunner7 sends along a ThreatPost.com piece that begins "The pace of innovation on mobile phones and other smart wireless devices has accelerated greatly in the last few years. ... But now the attackers are beginning to outstrip the good guys on mobile platforms, developing innovative new attacks and methods for stealing data that rival anything seen on the desktop, experts say. This particular attack vector — introducing malicious or Trojaned applications into mobile app stores — has the potential to become a very serious problem, researchers say. Tyler Shields, a security researcher at Veracode who developed a proof-of-concept spyware application for the BlackBerry earlier this year, said that the way app stores are set up and their relative lack of safeguards makes them soft targets for attackers. ... 'There are extremely technical approaches like the OS attacks, but that stuff is much harder to do,' Shields said. 'From the attacker's standpoint, it's too much effort when you can just drop something into the app store. It comes down to effort versus reward. The spyware Trojan approach will be the future of crime. Why spend time popping boxes when you can get the users to own the boxes themselves? If you couple that with custom Trojans and the research I've done, it's super scary.'"

They already sign the code; some of the app stores even require business documents before you're allowed to put anything up.

Having source is a plus, but this is commercial software we're talking about; you don't have the source for the two things you mentioned, Reader and Flash. Besides that, having the source isn't guaranteed to protect you: companies have been obfuscating the hell out of source code for a while now. All they really need to do is get users to install the binary first, and then it's a waiting game to see whether anyone actually reads the source and finds the evil lines, if anyone ever does. By then, millions of users have installed the app or the updated app (the first version doesn't need to be malicious) and had their info stolen.

Well, I wouldn't want to build Flash or Reader from scratch, so what I said stands. Source is optional for yum, but of course it can be required by the repository.

The nice thing about yum is that you use it to update the system packages, and third parties can use the same system to update their software. All they have to do is drop a file in /etc/yum.repos.d and their "app store" is visible to all the package installation tools.
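For illustration, such a third-party "app store" is just a small repo definition file dropped into /etc/yum.repos.d (the repo name and URLs below are hypothetical):

```ini
[example-appstore]
name=Example Third-Party App Store
baseurl=https://repo.example.com/fedora/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://repo.example.com/RPM-GPG-KEY-example
```

With gpgcheck=1, packages from that repository must carry a valid signature from the listed key before yum will install them.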

Since Apple has an apparently arduous approval process for their app store, I'm assuming that they guarantee everything against this sort of foolishness.
I didn't bother to read the 92 page EULA that went along with it, but they're an honorable company, right?

Since Apple has an apparently arduous approval process for their app store, I'm assuming that they guarantee everything against this sort of foolishness.

And I sense that we've discovered next year's Underhanded C Contest [xcott.com] theme: "Design a piece of code that looks like a genuine mobile funny game, but in fact turns the smartphone into a zombie node of a powerful and evil botnet..." "Bonus points if your game actually passes Apple's App Store certification." I can really see it coming :-D

I think source code availability might actually make it easier for someone to write a trojan. Without it they would have to write a program from scratch that looks like a legitimate program. If they can get the source code all they have to do is make some small modifications, release it under a different name for free, and by the time people realize what's going on the damage is already done.

How exactly would this stand up to the scrutiny of Debian or Red Hat or Canonical for any appreciable amount of time? Somebody has to actually hand-maintain the packages in those repositories. Software doesn't just get thrown willy-nilly onto the servers.

The screening process is on the binary, and it is very hard to spot code that is deliberately written to cause a buffer overflow.

That would still limit you to userland exploits, but it would definitely allow some malicious code to be injected through a server request that could access the phonebook, etc., and then send it back home, all without the naughty code ever existing in the application that was submitted to Apple. This code would be all but invisible, since the timebomb and malicious payload are controlled remotely.

Well, for the iPhone app store, where's your motivation? How do you profit from it? You have to come up with fake credentials while submitting the app, you have to be sneaky enough that the screeners don't notice, your app has to bust out of a fairly tight sandbox, then it has to do something that benefits you more than the risk of getting caught and the effort of development, and you can't count on it persisting, since as soon as anyone notices, Apple pulls it not only from the store but also pulls the keys, so it stops running on users' phones.

I poked around the internets a bit and only found a mention or two for iPhone trojans. These trojans were ONLY on jailbroken iPhones, not un-jailbroken ones that are using the iPhone App Store. As far as I know there have never been any "banker" trojans in the iPhone App Store.

This article seems to be riding the coattails of the iPhone's popularity by throwing it in the mix with other platforms that have had "banker" trojans. If they have evidence of an iPhone App Store trojan I'd love for them to directly mention it rather than being vague and doing a lot of hand-waving.

Actually, if you read through the linked article(s), you'd find out that it's two banks that put out alerts. Digging deeper, the developer put out around 50 apps that Google pulled when notified by one of the banks. What the apps actually did is in question. All the banks knew was that they didn't produce the apps that purportedly accessed their services. And that caused concern.

So if they weren't malicious, why do them? From the article:

"Lots could be going on here," he said. "09Droid may simply have been trying to cash in by offering apps that do nothing but provide a shortcut to the online bank's site, which the user could reach himself in the browser."

Under that scenario, 09Droid was out for a quick buck -- literally -- by charging users 99 cents for applications that, while harmless, only added a shortcut icon to the phone's desktop.

Android's Market tells you exactly what an app can and can't access before you install it. In order to access certain classes of API, the app has to declare that access in its manifest file or the APIs aren't available. Examples include location (there are two tiers: rough network-based, and precise GPS-based), phone (again, two tiers: phone state [usually to do things like pause music when the phone rings], and the ability to place/receive calls), network access, and storage (read or modify SD card contents).
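As a sketch, the declarations behind those install-time warnings live in the app's AndroidManifest.xml; the package name below is made up, but the permission names are real Android ones:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.fartapp">
    <!-- Every permission the app wants must be declared up front;
         the Market shows this list to the user before install. -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>
```

If the app calls an API guarded by a permission it didn't declare, the call simply fails with a SecurityException at runtime.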

Yeah, something combining Android's manifest and BlackBerry's application-permissions screen would be really nice... They each have half of the puzzle. BB lets you block an application's access to certain functions (like GPS, phone, etc.), but it is not smart enough to know which of those things the app might actually try to do.

One app I use to mitigate this is Droidwall. It is an app for rooted phones which uses iptables to allow or deny apps access to the network. Even if an app demands Internet access, it won't be able to send or receive packets unless Droidwall is configured to allow it.
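Under the hood this amounts to per-app firewall rules keyed on the Linux UID that Android assigns each installed app; a rough configuration sketch (the UID is hypothetical, and the real Droidwall chain names differ):

```
# Drop all outbound traffic by default...
iptables -P OUTPUT DROP
# ...then whitelist only the app (UID 10042) you trust with the network.
iptables -A OUTPUT -m owner --uid-owner 10042 -j ACCEPT
```

The owner match module lets the kernel attribute each outgoing packet to the UID that created the socket, which is why this works per-app rather than per-port.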

Of course, if an app is installed and nobody checks permissions, it can send/receive using SMS or MMS, but that is a different story altogether.

Well, this isn't quite as serious as Bank Trojans, but Storm8 is infamous for stealing phone numbers from their customers. And this is with the all-mighty App Store in place.

Which any app on any other platform, save Android, can do. In fact, Apple is within its rights to pull phone-number-stealing apps off the store for using "private APIs," because there is (or was) no public API to get the phone number.

But if you have a BlackBerry, Windows Mobile or Symbian phone, the phone number's an API call away. The "except Android"

After all, these APIs have been around for years, yet only the iPhone has started the whole steal-private-data thing that every other phone could've done for a long time now.

For the same reason PCs get all the viruses: they have the most naive and least technically sophisticated customer base. Apple knew the iPhone would be a juicy target for malware, far more so than previous smartphones. Application signing and remote revocation is the one thing they did right, which is why Android Market does the same.

I don't agree. Sure, it's acceptable to have a walled garden, and to even make it the case that by default you can only wander the carefully groomed paths in that space. But if you want to peek over the wall, or even exit the garden, you should be permitted to. Sure, raise a few warning "Oh no's, nobody can tell you whether these apps out there have thorns or not," screens. But don't prevent me from leaving or else what you have is actually a carefully tended prison (it's even called jailbreaking when you exit the approved area).

Why enter the walled garden and complain that you can't peek over the hedge, when you have an alternative right next door (Android) that you didn't choose?

Apple is free to do whatever they want with their walled garden, and you are free to go elsewhere. So, why not just encourage people to go to the solution which isn't a walled garden, rather than trying to break down the walls you know aren't coming down?

when you have an alternative right next door (Android) that you didn't choose?

Actually I did choose it. I had an iPhone, and once Android became competitive (version 2.1), I bought a Nexus One.

The problem is that just because I make an informed choice doesn't mean the average consumer is going to. Software freedom (including and especially freedom of choice) is good for the industry at large because it fosters competition. Apple is currently betting that it has enough market share to remove software freedom.

If a consumer is not capable of making an informed choice between the iPhone and Android, then let's hope they choose the iPhone, because they won't be capable of making informed decisions about what apps to install either.

Even though I've already abandoned Apple, their bet is that enough people won't that they can retain their clout. The industry as a whole is damaged as a result. Further, it sets the precedent that a software company can dictate what other software you run on the same device for business reasons rather than technical ones (i.e., we're not talking software incompatibility, we're talking rejection because they say so). Apple is the first; if they succeed, you can guarantee that other companies will be looking to shut out their competition simply by refusing to let you run the competition's software. The entire thing is creating an atmosphere of anti-competitiveness.

You're actually 2 decades late. Nintendo did this on the NES back in the 80's, with a lock-out chip [wikipedia.org]. Only Nintendo approved (and licensed) software could be loaded and run, at least without 'jailbreaking' the cartridge to circumvent this. Note: the world of open environments has not collapsed yet.

That said, we're talking about a cell phone, which never had the ability to run user software before anyway. If they want to do the same thing on a PC, then I would begin to worry.

For especially sensitive apps (eg, banking), knowledgeable people will generally understand that you should stick to the official app store.

Fixed

In all seriousness, there are some classes of users that are better served by walled gardens. I know several iPhone users who would download 'cute' apps that are actually malware. They should stay in the garden for their own protection.

Then there's another class of users. I know enough to avoid downloading malware, but I don't want to take the time to review everything I install...

I don't agree. Sure, it's acceptable to have a walled garden, and to even make it the case that by default you can only wander the carefully groomed paths in that space. But if you want to peek over the wall, or even exit the garden, you should be permitted to.

Okay, take that argument and apply it to any other store, like GameStop. Sure, it's acceptable that GameStop only carries certain products, but you should be able to break out of that walled garden. What does that mean? That GameStop and Apple should be forced to carry other products in their stores? That OS developers like Sony, Nintendo, and Apple should be forced to modify their OSes to support other ways to install and run software?

If you don't want to be locked into Apple approved apps, don't buy an iPhone.

As much as we hate Apple's walled-garden approach to an app store, having a central authority with a kill switch for any app, plus limited multitasking ability, plus developers tied to using the app store's preferred programming language and tools, are all things that stand in the way of a would-be trojan spyware author. As Apple claims, jailbreaking your iPhone could allow "the enemy" to do what they want with it, and that could crush poor little American Telegraph and Telephone Co.'s network.

Google touts openness, and Microsoft touts the power of a free-market of commercial software, both of which provide nice benefits to the consumer, but also to the hacker who wants to compromise user privacy. Has anybody looked into the Facebook apps on these platforms?

As much as we hate Apple's walled-garden approach to an app store, having a central authority with a kill switch for any app, [etc....] are all things that stand in the way of a would-be trojan spyware author.

Perhaps, but if you cast your net a little wider, you'll realise that the main thing required is a viable process. Autocratic centralised control is just one of a number of different and equally effective means of managing security for end users. Debian, Ubuntu, Fedora and countless other community-maintained repositories have historically sustained a commendable level of security in their vast software collections. They've built up so much trust, in fact, that the trust itself has become a peculiar kind of strength [imagicity.com].

The only way the three systems you mentioned would detect a rogue package update would be from open-source coders reviewing the original codebase. Maintainers don't often examine code -- often they aren't even capable of it.

So what do you get when that update comes from (A) a closed-source application, or (B) a solo-programmed OSS project? You get hell, that's what you get.

Also, a bit of perspective. The last I heard (years ago), Debian had 17,000 packages. How many do you think the iPhone has?

There's a web of trust backed up with digital signatures. So if someone finds a trojan in some code in the repository they can track back where it came from. It's actually happened once or twice and the response was incredible.

Debian, Ubuntu, Fedora and countless other community-maintained repositories have historically sustained a commendable level of security in their vast software collections.

Actually they've had numerous problems and failed to provide a viable option to extend that functionality to commercial software offerings. Canonical, in fact, is working on cloning the Apple store by adding a similar feature to the new Ubuntu package manager, due in the next release.

As much as we hate Apple's walled-garden approach to an app store, having a central authority with a kill switch for any app, plus limited multitasking ability, plus developers tied to using the app store's preferred programming language and tools, are all things that stand in the way of a would-be trojan spyware author.

Know what would really stand in their way? Not having mobile devices. Then they'd have a hard time doing anything malicious with it since we wouldn't even own them. Oh wait, yeah, we wouldn't

Android has on-device security which lets the user know, in simple English, what the application will do ("can access your contacts," "uses services that cost you money (SMS, makes phone calls)," "will access the internet"), so when you download a fart application that wants access to your contacts and to the internet, you can figure out that something isn't right.

As much as we hate Apple's walled-garden approach to an app store, having a central authority with a kill switch for any app,

But that isn't so useful, as Apple's walled-garden approach has forgone local security in favour of gateway-only security: once you've gotten past the censors, you have free rein. Enterprises have known for some time that gateway-only security is a complete and utter failure. You need both gateway and local security; Android provides both, although the gateway security is entirely voluntary (but enabled by default).

There have already been data miners for the iPhone that have gotten past Apple's ever-watchful censors, including at least one fake banking application (BOA, IIRC). This isn't counting data miners like Arsebook.

Ultimately, gateway plus local security is preferable for end users. One should have a choice whether to use the gateway or not, but local security is an absolute must, especially on a mobile device. However good you think your gateway is, on its own it is fundamentally flawed.

There have already been data miners for the iPhone that have gotten past Apple's ever-watchful censors, including at least one fake banking application (BOA, IIRC).

Link, please. Because I remember hearing that fake banking apps were a problem on Android. I certainly never heard that one was out in the app store for the iPhone, and I think that would have been pretty big news.

Android has on-device security which lets the user know, in simple English, what the application will do ("can access your contacts," "uses services that cost you money (SMS, makes phone calls)," "will access the internet"), so when you download a fart application that wants access to your contacts and to the internet, you can figure out that something isn't right.

Malware authors are nowhere near as stupid as you make them out to be.

The Application Sandbox: "For security reasons, iPhone OS restricts an application (including its preferences and data) to a unique location in the file system. This restriction is part of the security feature known as the application's 'sandbox.' The sandbox is a set of fine-grained controls limiting an application's access to files, ..."

Why the FUCK (in line with today's other ACLU article) can't I have this feature in a modern OS? Linux? Windows? chroot is not good enough, IMHO. Jails are closer, but still not good enough. I'm not sure about SELinux... I don't want virtualization -- I want application sandboxing!

Maybe system hooks to a supervisor module to prompt me for a password whenever the app tries to break the sandbox (system or network documents, maybe)...

Seriously, this is the next wave of OS protections from malware -- where are they?

But that isn't so useful, as Apple's walled-garden approach has forgone local security in favour of gateway-only security: once you've gotten past the censors, you have free rein.

Don't you think it would be better to, you know, do any research on a topic before making such assertive and blatantly wrong statements? If anything, Apple's sandboxing is more restrictive than Google's.

There have already been data miners for the iPhone that have gotten past Apple's ever-watchful censors, including at least one fake banking application (BOA, IIRC).

Citation please. I've seen only trojans distributed to jailbroken iPhones, not through the store. Additionally, having a central store allows Apple the option of revoking the ability of such applications to function on all non-jailbroken iPhones everywhere.

Except taking that quasi-mac and just dumping the Big Brother approach works equally well.

All of the justifications for the fascist nonsense depend entirely on ignoring all of the well engineered alternatives to Windows and pretending like they either don't exist or don't have the same vulnerabilities.

In order to elevate the new messiah, the cult needs to deny the old one.

Wow. I was going to download some apps from one of those app stores. I can't believe I nearly exposed my phone to something even more dangerous than anything on my PC. In future, I am going to just limit myself to downloading whacky screensavers for my Windows system, because that is totally unlike downloading an app for my phone.

Seriously, I can't believe the gall of those attention-seeking media whores who call themselves security experts. Years after we have been able to download applications for phones, some nitwit finally realises that one of those apps could be harmful. All they have to do is blow the danger out of all proportion and wait for the stupid media to lap up the story.

"But this time it is different - instead of downloading the app from a website, you get them from an app store!" Yeah, right.

The real power behind the Apple vetting process has nothing to do with what Apple does, it's what Apple has: Your bank routing #, social, full name, address...and yes, they have all this of mine.

So if a fly-by-night app store that lets anyone submit apps without any process, and may not collect this information from all submitters, finds an app with a virus, all it can do is remove it. Apple could quite possibly notify the authorities of your location.

Are you seriously implying that someone writing malicious code for the App Store can't come up with even one fake identity good enough to fool Apple? Seriously? Because that's wishful thinking at its finest.

I don't have the slightest idea how Apple vets that information, or whether they even do. What I do know is that for $25, on any number of websites, I can buy a full identity, including all the above info and a lot more.

I don't know if it's that bad. If Bank of America creates an App that lets me access their bank, I might use it (assuming I had an iPhone). I think it is reasonable to assume that Apple would not let anyone but Bank of America create the Bank of America app. If there is another app that asks for my bank account info, I'm going to be really suspicious. So there is some security built into the app store, even if they don't verify every line of code.

That is bullshit. They not only check for malware, they even check for privacy violations and use of unfinished APIs that may break in a future OS release. The whole app platform was designed around approvals.

You can't say iPhone is doing it wrong because it's not open one day, and then say it's just as vulnerable to malware as Android the next. We know Apple is not as vulnerable because they have not had any malware through two years, a billion downloads, and over 200,000 apps, while Android Market has served malware with significantly fewer apps and downloads. And most of Apple's users do not know WTF "malware" is, which is why they do it this way.

We know Apple is not as vulnerable because they have not had any malware through 2 years of a billion downloads and over 200,000 apps, while Android Market has served malware with significantly fewer apps and downloads.

That we know of. Maybe an app has already swiped everyone's info secretly. We don't know.

That is next to impossible. Consider an app that backs up your SMSs to gmail. There's one for android, I don't know if this is 'allowed' on the iPhone. Anyway, it has a perfectly legitimate reason to

a) access your SMSs, phone number, etc., and b) access the internet.

There's no way you or Apple can tell whether it will also send those messages to the hacker's own server unless you have the source code (and even then it would be prohibitively expensive for Apple to audit it). If you're thinking "but... wireshark...

Any app on the blackberry requires user intervention before it's allowed to fetch URLs, open raw sockets, read email, dial the phone, get your location, manipulate the address book, or do any other damned thing. And 90% of the APIs require the developer to be vetted through the app signing process. It actually seems much less vulnerable to trojans and spyware than a PC.

I agree with the poster that the economics of attacks definitely favor the Trojan over the technical attack. It's scary how many people install junk on their computers, and it's not getting any better. Even I do it sometimes without knowing 100% who's behind some utility or patch that I want. This is the approach that pays off easily, too. Why bother trying to sneak into their box when the users will install your bug for you?

In nature, though, some of these parasites actually evolve into beneficial bugs. They take their little bit, but they also do some extra bit for the host. Both sides win; this is symbiosis. Imagine that SETI@home also defragmented your disks or optimized performance somehow in exchange for running on your system -- same thing.

Now consider for a second that Conficker patched some security holes after entering the host system... isn't it doing some little bit of good? I don't want it on my box; I'm just showing how Conficker's security hardening is also beneficial to the host machine. Their goals align... Consider also: how do Google's goals align with mine when I use online Docs?

I think there will be a real blending here. Trojans will get more beneficial and less intrusive, people will tolerate them because they do something useful, and a new class of free (as in beer) software will evolve.

You can't tell me how wrong Apple is for having a closed store with strict app approvals and how other mobile makers will outdo Apple with their open stores, and then write a malware-scare article about how app stores are too open and lump Apple in with everyone else. It's one or the other. Everyone else has Java apps you can install from the Web, and Apple has C apps you can't.

Apple has an actual record here. They've been malware-free 100% for 2 years, 200,000 apps, over 1 billion downloads, with consumer users who don't know what malware is, doing 1-click installs.

How you can write an article like this saying "app stores should be more closed" and not mention Apple's is closed is beyond me.

And there has been no native malware on iPhone. Also bullshit.

And although Apple may not strictly guarantee zero malware, they are actively policing every app. To pretend that's like having no cops, as on the other platforms, is ridiculous.

My guess: there's a rather popular hate-the-leader bandwagon among certain geeks. You see this on Reddit a lot, where anything critical of the iPhone or iPad gets modded up immediately whether it's insightful or not.

This author is probably part of that bandwagon, desperately trying to stitch together a premise (open app stores are an opportunity for trojans) and an incorrect conclusion (fear the iPhone!) with no logical connection. Why else use App Store like a proper noun in the title, knowing full-well that most people will immediately assume the iPhone/iPad App Store?

Anyone who's owned a Mac a long time and constantly been lectured by their PC-using friends that "Macs are just as susceptible to viruses" even though no one gets viruses on their Macs while PCs are like leper colonies for malware knows this full well.

But we know that there is data mining going on with the iPhone. There are advertising networks that developers use to handle their in-app ads, and those networks have been mining people's data since 2.0 first came out.

While I completely agree with what you posted, I do have one question: were you actually expecting journalistic integrity from some half-assed "security consultant" whose job primarily consists of yelling OMG THE SKY IS FALLING OMG OMG OMG as loud as they can, to as many people as possible?

This isn't the Washington Post or CNN.com - it's some useless d-bag who's trying to make a name for himself writing on a blog.

I was testing SSH clients for the iPhone, so I bought about a half dozen. One of them flat out didn't work (I filled out the problem form; no response). One didn't allow you to change the port to something other than 22. Only one app allowed you to import a key. Only one (a different one) allowed you to have more than one key. So one was completely broken, one was arguably missing basic functionality, and all were missing common functionality. In short, the quality was abysmal.

I also tried to contact them; one had a website listed that was several years out of date and had no contact info (no names, emails, phone numbers, nothing). Not exactly inspiring of trust.

Based on this I can simply say I will not use them: for one thing, they don't work terribly well, but mostly because who knows what they do in the background. Perhaps every 50th connection, assuming it is a Tuesday, they send your connection details (user name, password, IP, etc.) in an outgoing packet to the bad guy who wrote the app.

I actually regret going with the iPhone (not that Android is much better in this respect). I'm so used to open source software that having to use a closed-source application from a basically unknown source (as opposed to someone who is at least known and ideally has a decent reputation to protect) is foreign to me and, to be honest, a deal breaker.

I'm not a big fan of the Steve Jobsian App Store lockdown policies, but at least inside of that, if an app is discovered to be malicious, Apple can (I believe) wipe it from everyone's phones without even asking them.

... and even if you can see the source, you still can't trust it. Unless you are an expert in the source's programming language, AND you are willing to spend several dozen hours doing a line-by-line review of all of the source code, most exploits are still going to walk right by you. A "mistake" that opens up a security hole can be very subtle; indeed that's why so many honest developers end up releasing security holes by mistake.

And that's not even counting the second issue: how do you verify that the source code you are reading actually corresponds with the executable your computer is going to run? If you download both source and executable, it could be that the source is clean, but the executable contains a back door. Even if you compile the source code yourself, it could be that the code exploits a bug (or backdoor) in your compiler to implement behavior different from what the source code indicates.
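One narrow mitigation for the source-vs-binary gap, sketched below: if the publisher posts a digest of the official binary, you can at least confirm the executable you downloaded is the same one everyone else is auditing. (This proves nothing about whether that binary was honestly built from the published source; that problem is what reproducible builds aim to solve. The function names here are made up for illustration.)

```python
import hashlib

def sha256_of(path):
    """Return the hex SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large binaries don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path, published_hex):
    """Compare a downloaded file against the publisher's posted digest.

    This only establishes that you got the same bytes the publisher
    hashed -- not that those bytes match the published source code.
    """
    return sha256_of(path) == published_hex.lower()
```

In practice this is what the `sha256sum -c` workflow on a release page does; the trust still bottoms out in where you obtained the published digest.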

Sounds like what you want is Gentoo: Phone Edition. Plug in your phone, type emerge --sync && emerge phone-image on the PC, wait overnight while the image compiles, then dd it onto /dev/phone. If it crashes, do another emerge --sync and see if emerge phone-image compiles something new, then dd that. Call^W Email work and tell them you'll be late because you're compiling your phone OS again. They'll understand.

It comes down to this: if you cannot see the source, don't trust it. As long as blackhat crooks are out there making closed binaries, there will be problems with trojans. If Google is smart, they will insist that all code must be visible to operate on the Android OS. Perhaps RIM will follow suit and make sure that all third-party binaries are clean. I know this really irks some developers, but if your code is clean, unique, and copyrighted, why are you afraid that others will see it?

Congratulations, sir; you both correctly depict the average technology user and incorrectly account for it at the same time.

We both agree that on-device security is necessary. We both agree that a false sense of security is bad. The biggest problem is still between the keyboard and chair.

User downloads Dancing Bunnies app. RIM very granularly lets you set permissions. Eventually, users figure out that the "Always Allow" options let the application run without being nagged to death with prompts. Nevermind that

And when is the last time you looked at every single line of code for a major open-source application and made sure that it was totally and completely safe? Do you just use them, assuming that someone else [developer.com] did it for you [developer.com]?

The fact is that we all trust the developers at some point; it doesn't matter if it is open or closed source. At least with a major vendor they have a physical presence: buildings, investors, publicly traded stock, cash in the bank. If they do something underhanded, you have something you can go after.

And when is the last time you looked at every single line of code for a major open-source application and made sure that it was totally and completely safe? Do you just use them, assuming that someone else did it for you?

It doesn't matter if I do it; if it's an important enough piece of software, somebody has. And if it's really important, more than a few somebodies. And if it's really, really important, I can pay somebody to do it. And it's not an either/or problem like you frame it. You may not realize this, as you seem to be a bit of a noob in regard to security, but security requires a multi-layered approach. Having the source is just one layer. Surely you aren't foolishly arguing that I'm better off not having it.

At least with a major author they have a physical presence, buildings, investors, publicly traded, cash in the bank.

It doesn't matter if I do it; if it's an important enough piece of software, somebody has. And if it's really important, more than a few somebodies. And if it's really really important, I can pay somebody to do it.

I'd like to introduce you to an important, relevant psychological effect known as the bystander effect [about.com]. The more important something public is, the GREATER the chance that no one will take care of it, because everyone assumes "it's so important that someone must have taken care of it."

I'm not saying that open source is insecure, just that you can't automatically assume that it IS secure. Unless you personally look at the code or pay someone trusted to do so, you have to assume that it isn't secure.

is as meaningless as any other word that denotes an absolute yet objectively unattainable ideal. Check this out: if the source is open, you have the opportunity to get closer to the ideal than if the source is closed, i.e., by doing the inspection work yourself or paying someone else to do it.

What you said and what I took issue with was this:

And when is the last time you looked at every single line of code for a major open-source application and made sure that it was totally and completely safe? Do you just use them, assuming that someone else did it for you?

That's a shill statement trying to make the debate look like a black-and-white issue. It isn't. Agai