With Millions Paid in Hacker Bug Bounties, Is the Internet Any Safer?

The night before the end of Google’s Pwnium contest at the CanSecWest security conference this year in Vancouver, a tall teen dressed in khaki shorts, tube socks and sneakers was hunkered down on a hallway bench at the Sheraton hotel hacking away at his laptop.

With a $60,000 cash prize on the line, the teen, who goes by the hacker handle “Pinkie Pie,” was working hard to get his exploit for the Chrome browser stabilized before the close of the competition.

The only other contestant, a Russian university student named Sergey Glazunov, had already made off with one $60,000 prize for a zero-day exploit that attacked 10 different bugs.

Finally, with just hours to go before the end of the three-day competition, Pinkie Pie achieved his goal and dropped his exploit, a beauty of a hack that ripped through six zero-day vulnerabilities in Chrome and slipped out of the browser’s security sandbox.

Google called both hacks “works of art,” and within 24 hours of receiving each submission, had patched all of the bugs that they exploited. Within days, the company had also added new defensive measures to Chrome to ward off future similar attacks.

Portrait of a Full-Time Bug Hunter: Abdul-Aziz Hariri

It might seem to some that $500 or even $3,000 is a paltry sum to earn for spending days looking for a security hole in software. Even $20,000 for a bug is chump change if you have a genius zero-day on your hands that could sell on the exploit black market for four times that amount.

But, as security researcher Charlie Miller points out, it all depends on where you’re standing. A $1,000 bounty for a researcher in New York won’t go as far as the same amount paid to a researcher in India or even in Indiana. But for some, bug hunting can actually bring in a good wage.

Abdul-Aziz Hariri earned more than enough to live on doing freelance bug hunting, during a period when he couldn’t find a job.

Google’s Pwnium contest is a new addition to its year-round bug bounty programs, launched in 2010, that are aimed at encouraging independent security researchers to find and report security vulnerabilities in Google’s Chrome browser and web properties, and to get paid for doing so.

Vendor bounty programs like Google’s have been around since 2004, when the Mozilla Foundation launched the first modern pay-for-bugs plan for its Firefox browser. (Netscape tried a bounty program in 1995, but the idea didn’t spread at that time.) In addition to Google and Mozilla, Facebook and PayPal have also launched bug bounty programs, and even the crafts site Etsy got into the game recently with a program that pays not only for new bugs, but also retroactively for previously reported bugs, to thank researchers who contributed to the site’s security before the bounty program began.

The Mozilla Foundation has paid out more than $750,000 since launching its bounty program; Google has paid out more than $1.2 million.

But some of the biggest vendors, who might be expected to have bounty programs, don’t. Microsoft, Adobe and Apple are just three software makers who have been criticized for not paying independent researchers for bugs they have found, even though the companies benefit greatly from the free work done by those who uncover and disclose security vulnerabilities.

Microsoft says its new BlueHat security program, which pays between $50,000 and $250,000 to security pros who can devise defensive measures for specific kinds of attacks, is better than paying for bugs.

“I don’t think that filing and rewarding point issues is a long-term strategy to protect customers,” Microsoft security chief Mike Reavey said recently.

All of which raises the question: Eight years down the line, have bug bounty programs made browsers and web services more secure? And is there any way to really test that proposition?

Security Science

There’s no scientific method for determining whether software is more secure than it used to be. And there’s no way to know how much a bounty program has improved the security of a particular software program, as opposed to other measures undertaken by software makers. Security isn’t just about patching bugs; it’s also about adding defensive measures — such as browser sandboxes — to mitigate entire classes of bugs. The combination of the two makes software more secure.

But everyone interviewed for this story says the anecdotal evidence strongly supports the conclusion that bounty programs have indeed improved the security of software. And more than this, the programs have yielded other security benefits that go far beyond the individual bugs they’ve helped fix.

In the most obvious sense, bounty programs make software more secure simply by reducing the number of security holes hackers can attack.

“There’s a finite number of bugs in these products, so every time you can knock out a bunch of them, you’re in a better place,” says top security researcher Charlie Miller, who’s responsible for finding a number of high-profile vulnerabilities in Apple’s iPhone and other products.

But one of the biggest indications that bounty programs have improved security is the decreasing number of bug reports that come in, according to Google.

“It’s a hard measurement to take, but we’re seeing a fairly sustained drop-off in the number of incoming reports we’re receiving for the Chromium program,” says Chris Evans, an information security engineer at Google who leads the company’s Chromium vulnerability rewards program as well as its new Pwnium contest, launched this year.

Google has its own internal fuzzing program to uncover security vulnerabilities, and the rate at which that team is finding bugs has dropped, too, Evans says. Google recently asked some of its best outside bug hunters why bug reports had declined and was told it was just “harder to find” vulnerabilities these days. Harder-to-find bugs for researchers also means harder-to-find bugs for hackers.
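Google doesn’t publish the details of its internal fuzzers, but the core idea is simple: randomly mutate valid inputs, feed them to the code under test, and log every input that triggers a crash. The sketch below illustrates that loop in miniature; the `mutate` function, the stand-in `target_parser`, and the GIF-like seed input are all invented for the example.

```python
import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    """Return a copy of `data` with a few bytes randomly overwritten."""
    buf = bytearray(data)
    for _ in range(flips):
        i = random.randrange(len(buf))
        buf[i] = random.randrange(256)
    return bytes(buf)

def target_parser(data: bytes) -> None:
    """Stand-in for the code under test; raises on one malformed pattern."""
    if len(data) > 4 and data[4] == 0xFF:
        raise ValueError("parser confused by 0xFF at offset 4")

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Mutate a seed input repeatedly and collect the crashing cases."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            target_parser(candidate)
        except Exception:
            crashes.append(candidate)  # a real fuzzer would save these to disk
    return crashes

crashes = fuzz(b"GIF89a\x00\x00\x00\x00")
print(f"found {len(crashes)} crashing inputs")
```

Production fuzzers add coverage feedback, crash deduplication, and corpus management on top of this loop, but the declining crash rate Evans describes is measured against exactly this kind of harness.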

Bounty programs also improve security by encouraging researchers to disclose bugs responsibly — that is, passing the information to vendors first, so that they can release a patch to customers before the information is publicly disclosed. And they help mend the fractious relationship that has long existed between researchers and vendors.

In 2009, Miller and fellow security researchers Alex Sotirov and Dino Dai Zovi launched a “No More Free Bugs” campaign to protest freeloading vendors who weren’t willing to pay for the valuable service bug hunters provided and to call attention to the fact that researchers often got punished by vendors for trying to do a good deed.

Patrick Webster got a taste of this firsthand last year when he reported a website vulnerability to First State Super, an Australian investment firm that managed his pension fund. The flaw allowed any account holder to access the online statements of other customers, thus exposing some 770,000 pension accounts — including those of police officers and politicians.

Webster wrote a script to download about 500 account statements to prove to First State that its account holders were at risk. But First State wasn’t grateful. The company reported him to police, then demanded access to his computer to make sure he’d deleted all of the statements he had downloaded.

First State’s response to Webster’s responsible disclosure makes it clear that not all companies know how to play nice with researchers. But vendors offering bug bounty programs generally include a promise to researchers, like one from Facebook, that says as long as a researcher gives the company reasonable time to respond to a bug report and makes a good-faith effort not to violate user privacy or destroy data while researching their website or application for bugs, “we will not bring any lawsuit against you or ask law enforcement to investigate you.”

“What the bug bounty program is saying is, ‘I’m hoping that the community does the right thing with respect to vulnerabilities in my software, and I want to reward people for doing the right thing,’” says Chris Wysopal, co-founder and CTO of Veracode, a firm involved in the testing and auditing of software code. “So the existence of the bug bounty program gets beyond just ‘I’m trying to secure my applications.’ It’s also ‘I’m trying to have a good relationship with the research community.’”

That outside research community can add a whole new dimension to a company’s security efforts.

Facebook, which launched its bounty program last year, says bounties have improved its security by opening its code to a new set of eyes with different perspectives and skills.

Facebook regularly hires outside consultants to audit its code and augment the review its internal security team already does, so there were some at the company who didn’t think a bounty program would be worth the extra effort it would take to run it. Facebook’s Chief Security Officer Joe Sullivan was among those who weren’t sold on the idea. But the results, he says, have “made me a total convert.”

Bug reports from outsiders have introduced Facebook’s internal security team to new vectors of attack they didn’t know about before, and have helped programmers “improve lots of corners” of the company’s code, he says.

“The advantage of the program is if some new tactic or technique comes out that we don’t know about, we can guarantee that someone that wants to earn a bounty will know about it,” Sullivan says. Researchers who have an incentive to look for bugs are more likely to be up to date on the latest tactics than the consultants a company hires to audit its code, he says.

Ryan McGeehan, director of Security CERT response at Facebook, says that in some ways the bounty program has actually outperformed the consultants they hire. “The difference is the scope in what gets assessed. Consultants get hired to look at a specific product. But the bug bounty is wide open,” he says.

“I think a lot of people outside the company are a little nervous about it,” Sullivan says, because they’re not sure they’d be comfortable opening their own infrastructure to hackers. “This is a step that no one else has taken. But at the same time if you look at some of the biggest security issues across the internet over the last few years, the vulnerabilities are just as much in the infrastructure — they can be just as harmful in that context as in the product.”

The benefits from a bug bounty program, however, reach beyond the individual vendor who receives and pays for a bug report. Other software makers and service providers can learn from the steps that one company takes to secure its software, thereby increasing security for everyone.

Michael Coates, director of security assurance at the Mozilla Foundation, says Mozilla once got a vulnerability report for a web application’s file-upload feature that allowed the researcher to bypass Mozilla’s security defenses to upload a potentially malicious file. The vulnerability had broad implications for the upload features in other web apps.

Mozilla hardened the defensive measures across the board for all of its file-upload features and then, Coates says, “being an open source organization, we pushed that knowledge back out to the community and made it fully available so not only would we benefit, but anyone else building web applications would.”
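The article doesn’t describe Mozilla’s actual fix, but hardening an upload feature typically means refusing files whose contents don’t match what their name claims, rather than trusting the extension alone. A minimal sketch of that check, with an invented allow-list and magic-byte table:

```python
import os

# Illustrative allow-list; a real application would tailor this to its needs.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".gif", ".txt"}

# Leading "magic bytes" that genuine files of each type start with.
MAGIC_BYTES = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".gif": b"GIF8",
    ".jpg": b"\xff\xd8\xff",
}

def is_upload_allowed(filename: str, content: bytes) -> bool:
    """Reject uploads with an unexpected extension, or whose bytes
    don't match the file type the extension claims to be."""
    ext = os.path.splitext(filename.lower())[1]
    if ext not in ALLOWED_EXTENSIONS:
        return False
    magic = MAGIC_BYTES.get(ext)
    if magic is not None and not content.startswith(magic):
        return False  # e.g. an HTML payload renamed to photo.png
    return True
```

The bypass class the researcher reported usually exploits exactly the gap this closes: a server that checks only the filename while the browser or a plug-in interprets the actual bytes.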

But there are also more direct ways other software makers can benefit from the bounty program of one vendor.

Microsoft, for example, recently benefited directly from one bug report that Google paid for, after the search giant generously doled out a $5,000 bounty to two researchers for a bug they uncovered in Microsoft’s Windows operating system.

House in Order

Simply deciding to launch a bug bounty program can be a sign that a company already has a high level of security, says Wysopal. That’s because before a company launches a bounty program, it needs to have its security house in order or it could end up racking up a large bill in payouts or, worse, receive a sudden influx of reports about security bugs that it doesn’t have the resources to fix.

“The mere fact that you have a bounty program shows you have a certain amount of [security] maturity, because it would be too expensive otherwise [to launch one],” he says. “You could have your application reviewed by a third party for the price of just five bugs you might pay out [in a bounty program].”

Evans notes that when Google launched its web bounty program, it received a tenfold spike in bug submissions.

“You do want to have a decent-size security team before you undertake this, and you do want to make sure that you’re fairly confident your products meet a reasonable level of robustness,” he says. “Obviously you need a pretty large security team to be able to sort of absorb that increase in load.”

Bug bounty programs also place increased pressure on a company to fix bugs more quickly.

Evans says Google has a company-wide policy of patching serious or critical bugs within 60 days of receiving a report. “That’s a Google-wide standard that we think the industry should be held to,” Evans says. But their average turnaround time for high-severity issues is around 30 days. “We were quite pleased with that as a metric,” he says. “That shows quite a rapid response to non-emergency issues.”

Sullivan thinks companies need to be faster than this, however. “We’re fortunate in that we work at a company where engineers are pushing changes on a daily basis. And so we knew going in to the program that if someone reported a vulnerability we’d be able to turn around and fix it immediately that day or the next day.”

He notes that a member of his security team recently submitted a vulnerability to another company’s bug program, which he declined to name, and was told the company would get back to him in a few weeks.

“When you have a bug bounty program like we do, you have to be able to push fixes on a daily basis, because when someone outside the company reports something to Facebook, they’re watching us to see how quickly we respond,” he says.

Google moved more quickly during its Pwnium contest earlier this year when the company had at least 20 people on hand to address the vulnerabilities submitted by contestants. Evans says they used the competition as a fire drill to test their agility in responding to emergency situations.

In the end there were only two submissions — those of “Pinkie Pie” and Sergey Glazunov — but they involved 16 bugs, and Google was able to patch all of them within 24 hours of receiving the reports. When “Pinkie Pie” reprised his exploit feat in early October during a second installment of Pwnium, Google patched the two bugs his exploit attacked within 12 hours.

Bug Payouts

Not all vendor bounty programs are equal. Rates for paying researchers can range from $500 to $60,000, depending on the vendor, the ubiquity of the product and the critical nature of the bug.

Miller says different payouts appeal to different people. “For me, being paid $500 and $1,000 is not really much,” he says. “[But] if I lived in some country [where I would] possibly make much less than in the United States, then maybe that’s quite a bit of money and I can live for a month on that. There’s a lot of researchers in the world who can live quite well on bug bounties.”

Mozilla pays between $500 and $3,000, and Facebook pays $500 per bug, though it will pay out more depending on the bug. The company has paid $5,000 and $10,000 for a few major bugs.

Google’s Chromium program pays between $500 and $3,133.70 for vulnerabilities found in Google’s Chrome browser, its underlying open source code or in Chrome plug-ins. Google’s web properties program, which focuses on vulnerabilities found in Google online services such as Gmail, YouTube.com, and Blogger.com, pays up to $20,000 for advanced bugs, and $10,000 for a SQL injection bug — the everyday workhorse of vulnerabilities. The ceiling on payouts disappears, however, “if something awesome comes in,” Evans says. “We’ve done that once or twice.” The company maintains a Hall of Fame page to give shout-outs to its bug hunters.
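SQL injection — the “everyday workhorse” priced at $10,000 above — comes down to mixing untrusted input into query text instead of passing it as a parameter. A minimal illustration using Python’s built-in sqlite3 module (the table and data are invented for the example):

```python
import sqlite3

# A toy database with something worth stealing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: untrusted input is concatenated into the query text,
# so the attacker's quote characters rewrite the WHERE clause.
vulnerable = conn.execute(
    "SELECT name, secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(vulnerable)  # every row comes back, secrets included

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT name, secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # no user is literally named "nobody' OR '1'='1", so no rows match
```

The fix is a one-line change, which is part of why the bug class is both so common and, for a bounty hunter, such reliable work.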

By contrast, Google’s Pwnium contest requires researchers to go beyond just finding a vulnerability and submit a working exploit that attacks it. Google launched the program with a $1 million total purse — with individual awards paid at a rate of $20,000, $40,000 and $60,000 per exploit, depending on the type and severity of the bug being exploited. Last month, the company increased the total purse to $2 million.

Google said the higher rates for its Pwnium contest reflect the extra work it takes to develop an exploit.

“The level of effort required to find a bug is a lot lower than the level of effort required to take that bug and turn it into an exploit,” Evans says. “Substantially so, in our experience.”

But the higher payout also reflects the higher value Google gets from exploits.

“When we see full exploits, we can actually learn much more from that than when we just see bugs,” he says. “We can look at the techniques used to exploit the bugs, and then, based on the knowledge gained from that, we can deploy defensive measures that will mitigate entire classes of bugs and make exploitation much harder.”

In addition to vendor bounty programs, there are also third-party bounty programs sponsored by security firms, which buy vulnerability information in software applications made by Microsoft, Adobe and others.

iDefense, which provides security intelligence services, launched a bounty program in 2002, but it’s long been overshadowed by the more prominent HP TippingPoint Zero Day Initiative (ZDI) bounty program, launched in 2005. HP TippingPoint also sponsors the annual Pwn2Own exploit contest at the CanSecWest conference, which was the inspiration for Google’s Pwnium contest.

HP TippingPoint uses vulnerability information submitted by researchers to develop signatures for its intrusion prevention system. The company then passes the information, free of charge, to the affected vendor, such as Microsoft, so the software maker can create a patch. This means the software maker gets all the advantages of receiving bug reports without having to pay for them.

The ZDI organizers also share the information with other makers of intrusion prevention systems for free, so that they can protect their customers as well.

The ZDI bounty program has processed more than 1,000 vulnerabilities since it launched in 2005 and has paid more than $5.6 million to researchers. The program pays varying rates that change depending on the vulnerability.

Vendors are required to fix the vulnerability within six months, after which the organizers will publish information about the vulnerability, along with tips for mitigating it, even in the absence of a patch. The organizers began imposing the six-month deadline in February 2011 because some vendors were taking too long to fix vulnerabilities reported to them. One IBM vulnerability, for example, remained unfixed for three years.

“At that point not only is it a pain for us to track it, but in our opinion they’re making people out there less safe,” says Aaron Portnoy, former head of the ZDI program. “It’s not fair to users in our opinion to leave that susceptibility open.”

In the end, that’s what all of the bug bounty programs are about — making users safer on the web.
