Dan Geer, speaking at Black Hat, outlined a series of policies he believes will help make the Internet more secure.

Sean Gallagher

LAS VEGAS—In a wide-ranging keynote speech at the Black Hat information security conference today, computer security icon Dan Geer gave attendees a sort of personal top 10 list of things that could be done to make the Internet more secure, more resilient, and less of a threat to personal privacy. Among his top policy picks: the US government should move to “corner the market” on security vulnerabilities by paying top dollar for them and then publish them to the world.

“We could pay 10 times the market price” for zero-day vulnerabilities, Geer said. “If we make them public, we zero the inventory of cyber weapons where it stands.”

The effectiveness of the strategy, he admitted, was “contingent on vulnerabilities being sparse—or at least less numerous.” And he expressed concern that the growth of vulnerabilities created by machine-written code could outstrip the ability of human researchers to keep up—rendering bug-hunting a form of “security theater.”

Geer expressed a personal interest in reducing his own exposure to the “dependencies” of the Internet because of his concerns about the rapidly increasing risks to privacy and security. At the same time, he advocated greater monitoring of Internet traffic for the purpose of forensic examination of security threats—“looking backward for evidence of something you don't know about before has value,” he observed.

A passel of policies

Geer’s suggestions also included a number of proposed policies aimed at forcing corporations, software vendors, and Internet service providers toward an environment that would be more friendly to both security and privacy. He suggested a mandatory reporting regimen built on the Centers for Disease Control model that would apply to all breaches above an agreed level of severity.

“The US CDC is respected the world around,” he noted.

Geer said the CDC is so effective because of its mandatory reporting scheme, its analytical data store, and its ability to send a response team to the source of an outbreak.

“We have well established rules of medical privacy, based on a need-to-know regime—most days, that is. But if you check in with bubonic plague, Ebola, or anthrax, you have zero privacy.”

For less severe breaches, Geer proposed another reporting model: voluntary, anonymized reporting similar to the Aviation Safety Reporting System (ASRS) used by the Federal Aviation Administration (FAA) for “near miss” incidents between aircraft. ASRS allows airlines to report accidental safety rule violations voluntarily without facing penalties, demonstrating what the FAA describes as a “positive safety attitude.”

The reason for this approach, he explained, is that currently over 75 percent of security breaches are reported by a third party rather than by the target of the breach.

“The victim might not even know if the breach is never reported,” Geer underscored.

"Net neutrality is not a panacea"

Geer also recommended fixes for software liability that he believes would correct much of the underlying cause of security vulnerabilities—or at least give companies financial incentive to do more thorough testing.

“Clause one would be, if you deliver code with buildable source, your liability is restricted to a refund,” he proposed.

Developers would provide a “bill of materials—what came from who,” so that the licensee could recompile without the parts that they don’t trust.

“All your copyrights are still yours, leaving everything unchanged,” he noted. The alternative, he said, was “you are liable for any damage your software causes in normal usage.”

While he added that “normal usage” was something that courts would have to define, there were some obvious things software shouldn't do. “Plugging a USB into a computer should not make your computer part of a botnet. That's not something that an operating system should do as part of normal usage.”

Additionally, Geer said, “abandoned” software—such as older versions of operating systems no longer supported by their developers—should be treated the same way as other abandoned property—“either you support it, or you give it over to the public.”

The bearded guru also weighed in on network neutrality and its relationship to the health of the Internet.

“Net neutrality is not a panacea,” he said, adding that Internet providers’ ability to claim the protections of being a common telecommunications carrier while avoiding liability for what passes over them allowed them to duck any responsibility for security.

He declared that ISPs needed to be “restricted to a constrained set of choices. You can charge whatever you want, but you are responsible for what you carry; or you can be a common carrier and not monitor traffic. You get one or the other, but you don’t get both.”

Convergence “cold civil war”

Geer also expressed a range of concerns about privacy and government efforts to monitor and control the Internet. He addressed the convergence of the “meatspace” world of government and society with cyberspace.

He said that there is a “cold civil war” underway over how that convergence will happen as governments and institutions fight to exert control over what happens online, and as the Internet engulfs more of commerce and human interaction.

The natural tendency of that convergence, Geer postulated, pushes the physical world to move toward the rules of cyberspace. He pointed to what has happened with newspapers in the age of the Web as an example, and said that the democratizing power of the Internet threatened to reduce the relevance of nation-states.

“People have more power [through the network], and those who used to have power want to act in a countervailing way,” Geer said. “The power that is growing in the network will soon challenge governments’ ability to control.”

One possible result, Geer hypothesized, is that “cyberspace may become more like meatspace,” with government moving to break the Internet into more easily controlled “chunks.”

That sort of balkanization of the Internet has been a feared result of the backlash to US dominance of the Internet following Edward Snowden’s leaking of NSA documents about broad surveillance, but it is also already being seen in countries trying to exert more control over Internet speech—be it Russia’s recent laws requiring individuals’ data be stored in Russia and the regulation of bloggers, China’s “Great Firewall," or Iran’s efforts to censor foreign Internet content.

I was expecting that there would be something terrible in there, given the CIA connection, but everything he proposed sounds good to me. That doesn't mean any of it will happen, of course, but still plenty of good suggestions.

I really hope that the internet does not become more balkanized than it already is. The whole point really is to bring together distant parties that previously could not share information between themselves. If it does get split up, it'll still be useful, but nowhere near as much as it is today.

It's a better policy than buying zero days and hoarding them in a big pile of e-peen; but the 'just buy the zero days' concept seems dogged by the problem that, while the price may be lowish in an environment where zero-day buyers are interested in hoarding them and using them only when it suits their interests (since the overall fix rate is lower and selling to multiple customers is easier), it would rise as the supply fell.

The new gold standard would be 'zero-day never sold 2 USA L@@K!!!', rather than 'zero-day!'; but a customer who wanted zero days for either commercial exploitation or military/intelligence activities would presumably bid up to the expected commercial gain or their computer offensive budget for 'exclusives' that include non-disclosure to the vendor or anyone committed to fixing them.

Something like the charitable practice of buying and freeing people in conditions amounting to slavery or indentured servitude: it's a very nice idea; but it also establishes a floor price in the market.

Agreed, but presumably there is also a ceiling on zero days based on the value of the information that can be extracted. As in, if the average zero day nets $600,000 before (a) holes get patched (b) passwords get changed (c) credit card numbers get invalidated, the price for a free-market, mercenary exploit won't go much beyond that.

Personally I think it's strange that Geer's against balkanization of, and national controls on, the internet, but the US is supposed to buy up all the exploits at 10x the market price as an international public service. That may end up being cost-benefit positive — depending on, as he noted, just how many of them there will be — but that must look like an even better idea to non-US taxpayers than to US citizens.

Following on from that, if purchasing zero-days was worth it, why aren't the companies making these products doing it in the first place? Likely because the liability of companies isn't as big as the price these exploits go for. Which leads to his point about source code and increasing the liability by statute; but then he also wants mandatory reporting of security breaches, which the CDC can do fairly easily because usually neither a patient nor his doctor is held liable for suffering from a disease.

This sounds like: (1) We're going to destroy the market for exploits by paying black hats ten times what they were making before. (2) We're going to encourage transparency by mandating people self-report breaches and holding them liable when they do. I.e., he has a lot of individually plausible solutions that seem to work against each other in sum.

This cannot possibly work. The government would be handsomely rewarding software developers for making programming mistakes that result in exploitable vulnerabilities. This will result in an ever increasing number of such vulnerabilities, because software developers are only human and like to make easy money as much as anyone else. What needs to be done is to somehow punish the creation of vulnerabilities, not reward it.

Huh?

SoftwareDeveloperA creates software with security bug.

SoftwareDeveloperB finds the security bug.

SoftwareDeveloperB gets paid for finding bug.

SoftwareDeveloperA gets humiliated/vaguely-embarrassed by bug being released to public.

That's an overly cynical take on it. There are not that many pieces of software that could be remotely exploited and that are widely deployed. If the software is not commonly used enough that attackers would pay good money, the defenders don't need to pay much either to outbid them. And if someone on, say, the Chrome dev team wants to add bugs so a collaborator can report them and split the take, I think that the guy's colleagues are going to notice who has an unusual number of critical vulnerabilities with his name on the commits.

Where is the reward for the creation of vulnerable software?

SoftwareDeveloperA on low wages creates software with security bug.
SoftwareDeveloperA creates variable called SoftwareDeveloperB that is set to SoftwareDeveloperA.
SoftwareDeveloperB reports security bug.
SoftwareDeveloperB gets paid.
SoftwareDeveloperA profits.
There is no ???.

Or if you prefer:
SoftwareDeveloperA creates software with security bug.
SoftwareDeveloperA tells SoftwareDeveloperB about bug.
SoftwareDeveloperB reports bug.
SoftwareDeveloperB gets paid.
SoftwareDeveloperB splits profit with SoftwareDeveloperA.
There is no ???.

And as someone else has pointed out, that's easily discoverable fraud. In fact it's probably a good thing: that way you can easily find any bad and incompetent apples in your organisation and get rid of them.

In an intelligence agency, there is always the issue of "equities" that rears its head in these debates: the debate between offensive capabilities and defensive ones.

Given the open nature of the relationship between the presenter and the US CIA, I can only guess that the presenter has demonstrated enough of a threat to US interests to win on the defensive side. From my understanding of the current state of CPE (Customer Premises Equipment), i.e. consumer routers, I feel that this is a calculated decision. Consumer routers are crap when it comes to timely updates of any kind. Given their ability to participate in attacks of various sorts (especially DoS), this seems like a correct response.

It would be a really good thing if there were a guaranteed way to make money from discovering software exploits that didn't involve selling them to spy agencies and other criminals. Right now we're looking at a classic market failure where exploitable software defects cost a lot of money but don't have a good business model for finding/fixing them, because the effects are so diffused/unpredictable. We don't rely on the free market for national security because market failures make it impractical. Why not do the same for national computer security?

How is the seller going to know that the person bidding on the 'zero-day never sold 2 USA L@@K!!!' exploit isn't actually a buyer for the US disclosure team? It's not like there is a Better Blackhat Bureau they can check. Or even if there was, you'd never be sure that the BBB was trustworthy: it could be like one of those carder forums that actually had the admins turned by the Secret Service. The problems don't stop there: a real genuine blackhat org might be the first buyer, but then someone in THAT organization could sell it out to the US disclosure team. Pretty much the only way to guarantee that exploits don't get reported for a bounty is to not sell them. Anyone who wants to build self-installing malware will have to bring their exploit developers in-house, at increased expense/complexity. Raising the cost of attack is a good thing.

Having the US government pay bounties for exploits, with the intent of fixing them, has the same benefits as vendor-backed bug bounty programs plus expanded coverage for vendors that don't do their own bug bounties. Also expanded coverage for bugs that don't seem to 'belong' to just one piece of software, like browser plugin bugs that interact with browser sandbox bugs. The arguments against it really seem like grasping at straws.

Britain did this back in the day in India with rat head hunting rewards. Indians just made rat farms, killed a bunch, left more to breed, repeat, and sold the dead ones for reward to the British. When the British caught on and stopped the program, the Indians just let all the rats go, and they ended up with a bigger rat problem than when they started.

Governments rewarding people this way never works. People just game the system and take YOUR money.

It is only TALK! What can you expect from an agency like the CIA that specializes in deception? The "something terrible" is out there, and they are using it and NOT telling anyone. Heartbleed comes to mind...

Purely supposition/opinion, with no evidence: his idea of controlling exploits is laudably utopian, and makes perfect sense in his perfect world. But I suspect that, given his background, he has little experience of the 'real' world, complete with those individuals (both corporate and criminal) whose motivation is profit, and who would seek to use such a market to their own advantage. The world is full of examples: defence contractors, public building contractors, the telecoms industry, financial markets.

Nice idea, but impossible to implement.

His thoughts on software however, could be enacted in law. If I buy a car, the manufacturer cannot avoid liability for inherent faults in the design or manufacture. Ditto any consumer goods. Why should software be different?

So what is to stop the requirement of a government-enforced NDA (go to jail if you talk) to receive payment? The CIA and NSA just got a bargain to add more tools to their hacking toolbox.

The exploit can't be made public. Making exploits public would be a nice thought, but all you need is hackers paying attention and using the exploit before it is patched, which will be the public reason for the NDA, if this ever happens.

You can't take anything at face value, especially when it comes to a government which consistently engages in illegal, unconstitutional, and definitely unethical practices to get what it wants. If you can think of a way something can be abused, I will bet money that someone in the US government has thought of it and most likely done it. They usually don't ever get prosecuted (DUH, US DOJ going after someone appointed by the same president?) and rarely even apologize; they resign, and another flunky is appointed to do the same thing. Cynical? Not really; try statistical or historical. Governments are not altruistic; they all have an agenda regardless of what they say, and if the agenda is not readily apparent, people should be even more concerned.

I have faith in government. Faith that governments run by politicians will always act in the best interest of the politicians, and occasionally, strictly by chance, those actions also are in the interest of the people.

I'm no economics guru, but doesn't that raise the price of 0-days? If the supply decreases while demand remains, the price will rise. And as far as I know, I don't think there's an open-ended budget line item for buying these up.

Additionally, Geer said, “abandoned” software—such as older versions of operating systems no longer supported by their developers—should be treated the same way as other abandoned property—“either you support it, or you give it over to the public."

This is such an amazing (and obvious) idea that I'm surprised more folks don't advocate it. One of the huge downsides to the idiotic protections given to intellectual property in the US - besides the most obvious, which is stifling innovation - is that so much of it is left to rot. Is that a big deal when the property is a book? No. But it is a whole different situation when it is software or hardware that society depends upon, and which the creator no longer supports (because they don't have a financial incentive to).

And as someone else has pointed out, that's easily discoverable fraud. In fact it's probably a good thing: that way you can easily find any bad and incompetent apples in your organisation and get rid of them.

How is that easily discoverable? Do the SoftwareDeveloperBs selling zero-days usually demand payment to their verified US bank account or pick online handles based on their real names and/or addresses?

The only thing anybody will know under Geer's plan is the exploit itself, and from that be able to find the guy that checked in the exploitable code, with no differentiation between "fraud" and an honest mistake. You may fire that incompetent apple, but if the zero-day goes for 10x the free market price, there will still be an incentive for many devs to do this, especially if they already know they are being laid off or want out anyway.

Normally simply buying out the stock of the baddies is a bad idea. If you buy all the guns a black market arms dealer has to keep him from selling them to baddies then he just gets more guns with the profits and sells them to baddies anyway -- unless you buy out his stock again...

But in this case? The first unit is the only hard one for black hats to get, and so the near-zero marginal cost for additional inventory for digital goods works against them.

Just for the lolz, I'd like to see a blackhat sue the US Government for piracy if this scheme gets approval.

Mr. Geer is no doubt extremely intelligent and knowledgeable in the field. So, it's not surprising he has some interesting comments.

Among those comments I do not sense any commitment to legal restraint, Constitutional rights, or any kind of high moral ground. Instead, he offers that we are engaged in a worldwide cyber war. That suggests a power struggle in the form of "might makes right".

I am all in favor of balkanization of the internet. I am tired of being bombarded by pings and pranks from all over the world not to mention my own government.

I think that to counter the lawlessness of the government-centric cyber war, we need to develop vast populist distributed personal computing systems wrapped in super-tight encryption and, as the need arises, the ability for the little guy in the trenches to fight back based on the same rules played by the big guys, i.e. none.

Sean Gallagher / Sean is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.