The Virus Bulletin 2012 (VB2012) conference opened in Dallas on Wednesday, with a keynote presentation by Christopher Soghoian taking a look at the murky world of the security exploit industry.

Soghoian has made a high-profile name for himself by calling out technology firms for privacy and security issues, but he used the platform of VB2012's opening address to raise the curtain on the market for selling security vulnerabilities to the highest bidder, in a talk entitled "The trade in security exploits: free speech or weapons in need of regulation".

According to Soghoian, famed hacker Charlie Miller was the first to admit selling security exploits to the US government, after discovering an exploitable security hole in the Samba server software.

In a phone call, an official at the government agency told Miller that they weren't able to name a price, and that the researcher should name one instead.

Miller asked for $80,000, which the official on the other end of the line instantly accepted. "Oh man, I could have gotten a lot more," Miller recalled in 2007.

Don't feel too sorry for Miller, though. He did manage to furnish a brand new kitchen from the proceeds. Soghoian briefly showed a picture of the kitchen on screen, and I can confirm it looked very fancy.

It didn't take long before legitimate bug bounties were announced by companies such as Mozilla ($500), Google ($500-$1337 for Chrome vulnerabilities) and later Facebook and PayPal, all eager for any exploitable security vulnerabilities to be reported directly to them rather than exploited by malicious hackers.

And it wasn't just lone individuals earning a living by selling details of exploitable bugs.

Companies like iDefense and ZDI (Zero Day Initiative) would pay $500-$20,000 for exclusive details of exploits, effectively acting as middlemen between the bug-discoverers and the companies whose software was at risk, and selling a subscription service to those who wanted advance information about the bugs.

Other, less well known, companies working in the field included Endgame Systems.

Private emails exposed by the 2011 HBGary Federal hack showed Endgame Systems saying they had "been very careful NOT to have a public face on our company", and the CEO was adamant that he didn't "ever want to see our name in a press release."

Endgame Systems' clients included the US Department of Defense, and their chief scientist was vulnerability researcher Dino Dai Zovi, who famously posed holding a "No more free bugs" sign.

Another company, France-based Vupen, brazenly displays its lack of interest in playing by the rules that companies like Google would prefer. Its chief executive and lead hacker Chaouki Bekrar has turned his nose up at $60,000 bug bounties offered by the search engine giant, as Forbes reported:

"We wouldn't share this with Google for even $1 million... We don't want to give them any knowledge that can help them in fixing this exploit or other similar exploits. We want to keep this for our customers."

The truth was that big money was now involved. Soghoian summed up the state of play quite well during his talk:

"Google and Microsoft can't outbid the US government - they will never win a bidding war with the army, navy or NSA"

If an exploitable security vulnerability is discovered today, researchers have a choice of full disclosure (letting the whole world know the complete details, before a fix is available), responsibly informing the software company of the issue (and perhaps receiving a bounty) or selling it to the highest bidder.

And the rewards can be considerable.

A Bangkok-based bug broker called "The Grugq" was even pictured in Forbes magazine, a large pile of cash in a bag by his feet, a martini next to his laptop.

The Grugq puts researchers who discover software bugs in touch with those who want to buy them, generating a reported $1 million in revenue per year and taking 15% commission.

According to Forbes, The Grugq is selective about the transactions he chooses to engage in, refusing to deal "with anything below mid-five-figures these days."

It's important to realise that, however much of an unpleasant taste this might or might not leave in your mouth, none of these people are acting illegally.

They've worked hard, using their skills to discover vulnerabilities in software systems. They are not exploiting these security holes themselves, and they aren't breaking the law.

But because details of the vulnerabilities do not always end up with the software company capable of fixing them, and because customers and users of software like you and me could be left exposed if details of a vulnerability are sold exclusively to a third party, the exploit industry has something of an image problem.

Christopher Soghoian told attendees at the Virus Bulletin conference that the exploit sellers view regulation as limiting their freedom.

But at some point, the US government is going to realise that just maybe other countries might also be interested in buying exploits for their own ends... and then regulation will surely follow.

Soghoian left the VB2012 audience pondering whether the exploit-trading industry could regulate itself, avoiding the need for interference by the powers that be.

Personally, I'm not sure that that's going to work.

If the industry attempted to regulate the sale and distribution of exploit information itself, that would not only prove highly unpopular among many, who would view it as an attack on their freedom, but would also drive the unregulated sale of exploits, perhaps to unfriendly nations or the criminally minded, further underground.

7 Responses to "Google and Microsoft can't outbid the US govt - they will never win a bidding war with the NSA"

OK...so political states (like the U.S.) are among the bidders for these exploits. Clearly, many people consider states to be "legitimate" buyers, the presumption being that they're not "the bad guys", and that we're better off if the exploits are in their hands. For my part, I presume no such thing, but set that aside for now.

You're not sure self-regulation is going to work. Neither am I, but then the only other alternative you mention is legally enforced regulation by political states... oops, who happen to be among the bidders for these exploits. Doesn't that kind of peg the conflict-of-interest meter?

Anti-trust laws, regulation of monopolies, and racketeering laws are presumed to protect citizens from the bad guys, but who protects us from the "protectors"? Legitimizing a conflict of interest by transferring it to a political state doesn't necessarily make it less harmful. In fact, it simply institutionalizes it, making it a permanent facet of the societal structure.

That's supposed to be better than lawless gangsterism, but I'm not sure that the lawful gangsterism of political states will turn out to be much better in the long run.

But there is a third alternative. Truth be told, I trust a profit seeking company like Sophos far more than I trust the inertia-ridden accumulations of maximum authority, minimum responsibility hacks called "governments". Sure...Sophos can make mistakes. But your customers can always fire you if your products and services suck. That's a huge motivation for you NOT to suck.

There's no such motivation for the massive, permanent mechanisms of the state and their conflicting, incomprehensible, unenforceable laws, which endure beyond the lifetimes of those who create them and maintain them. And we can't fire the bureaucracies.

If there's an effective, more-good-than-harm solution to the problem of exploits, it's clear to me that it is far more likely to involve responsive, profit seeking companies like Sophos than the legitimized thuggery served up by political bureaucracies. That view is probably tantamount to heresy in a world addicted to political "solutions", but maybe it's time we faced our addiction and kicked the habit.

You'd better just hope the People's Republic of China isn't buying these flaws, or you will be squealing for the "legitimized thugs" to come and bail you out, because the private sector is quite properly engaged in making a profit whereas the legitimized thugs aren't.

So I see the lock is broken on the side door of a bank. But I don't tell the bank, no. I tell someone I don't know, standing on the sidewalk around the corner from the bank (because I thought that person would pay me a lot of money for that information, or because I am just stupid, either way). The next day the bank is robbed, a couple of people are injured, several have stress-related issues, and the robbers decide to also rob the customers who happen to be there. After the robbers leave, the police give chase, a pedestrian is killed, and the robbers get away.
Am I an accessory to this crime? Of course I am. Should I be prosecuted for my part? Of course I should be.
Go ahead and argue that my example is not the same thing. You find a vulnerability/exploit, decide to sell it to the highest bidder, and then when it is used by that person or group to commit a crime that affects millions of people, you say you have no responsibility in that. Fine, say you didn't know what they were going to do with it. Right, sure you didn't know exactly what or how, but you knew they weren't going to try and come up with a fix for it. I say they lock you up along with the perpetrators as an accessory.
I do think you should be compensated for finding it, and I think you should be compensated proportionate to the risk (or even better). But only by the owner/manufacturer of that software/hardware. If they don't care or won't pay, then maybe by a reputable security company. And finally the government, which could act via a lawsuit against the company after it has been compromised, for knowingly and willingly leaving the vulnerability/exploit out there because they were too cheap to pay/fix it. In other words, sued for negligence.

Bounties on the order of $100K for a bug open the door to inside jobs: convince a developer to add a backdoor in exchange for a piece of the action. The risk/reward ratio starts to look attractive when the money reaches this level.