Posted
by
Unknown Lamer
on Wednesday July 10, 2013 @11:50AM
from the open-sores-hates-jerbs dept.

itwbennett writes "U.C. Berkeley researchers have determined that crowdsourcing bug-finding is a far better investment than hiring employees to do the job. Here's the math: Over the last three years, Google has paid $580,000 and Mozilla has paid $570,000 for bugs found in their Chrome and Firefox browsers — and hundreds of vulnerabilities have been fixed. Comparing that to the average annual cost of a single North American developer (about $100,000, plus 50% overhead), 'we see that the cost of either of these VRPs (vulnerability reward programs) is comparable to the cost of just one member of the browser security team,' the researchers wrote (PDF). And the crowdsourcing also uncovered more bugs than a single full-time developer could find."

The major problem is that on-staff developers are usually discouraged from going on bug-hunts. Management would rather have them developing new features, so they won't allocate time for finding bugs. When company policy on finding bugs conflicts with how your manager actually assigns you tasks, guess which one wins. Worse, most of the time an employee who ignores his to-do list to go find problems ends up penalized, either explicitly (bad reviews) or implicitly (people being annoyed that he made work for them). Outsiders in these bounty programs don't have to worry about a manager assigning them 100% to new features and 0% to finding vulnerabilities, and they don't have to worry about bad reviews or negative comments from managers about the extra work they created for everybody.

And not just bug hunts. I have a laundry list of things that need to be refactored, but every time we think we might have a chance to do so, project management decides something else is more important. We have people complaining about things being slow, but when we say we need to spend time making it faster, we instead get directed to new features or, worse, tweaks for the sake of a single non-representative customer who happens to have the ear of the project owner.

Yes. At work we have a long-outstanding issue open about performance. I nailed down where the bottleneck was, and put in the library module needed to fix it. Tests using the new module were showing a minimum order-of-magnitude speedup compared to the old (minimum because the test cases were biased to be as favorable to the old method as possible). Yet we've been directed to postpone actually switching to the new module (I suspect politics).

Exactly. I think if you found the right kind of employee and told them to hunt for bugs all day long and get paid for it, they'd probably uncover quite a few bugs. Give them complete access to the code, source control, and test suites, and they could probably find bugs much more efficiently than somebody probing for vulnerabilities from the outside.

It's amazing our customers still find the product useful enough to pay for it.
It's like a big building made of dried shit. It sort of works in some conditions. You know what happens if it rains a bit, but if you remove all the shitty bits there's not much building left... And to rebuild the building from scratch would take years.

Wow. Aren't you worried that by posting this online that you might get fired from your position on the Microsoft SQL Server dev team?

Yes this happens. The trick to get around it is to sell the bug fixing or refactoring as a new feature. i.e. you refactor the code in order to fix the longstanding bug and you add a new feature to boot. If your manager is too stupid to figure out the bug really, really needs to be fixed, you can always convince the client to "persuade" them instead of you.

I've done this as well - but I don't like effectively misleading management by saying that we need to do X in order to achieve what is only a loosely related Y. Yes, X would make Y easier, or improve Z and W, but it isn't truly essential.

That's not to say things are entirely bad. I've also had management see the true value of some stuff that isn't technically being requested at that very moment, and push things upwards by tying the work to something else in just this manner.

I do not like to mislead project management either. The less information people have about the actual state of the project, the more likely it is that things will go off track and development will end in failure because you blew either the deadline or the budget. However, it is also important that projects maintain a certain degree of quality. Otherwise the clients will end up not renewing their contracts and going somewhere else next time. Balancing these needs is certainly not easy.

It is a waste of money for developers to go on bug hunts. If a customer reports a bug and a QA person confirms it, then time should be allocated to fix it, and not only fix it but hopefully also fix the process that allowed it to happen. The benefit of bug bounties, I think, is to encourage sophisticated users to give a detailed analysis of the problem, saving QA and developer time. In OSS, of course, the end user can fix the bug, but so can any volunteer, so I'm not sure where the cost savings would be.

Nonsense. The outside world can only fuzz against the product looking for a security vulnerability and they are only paid for security vulnerabilities.

Bug hunts reveal architecture missteps that will break the product during upgrades or other usage. Internal developers are aware of the architecture, so they are better able to focus on finding both security vulnerabilities and general bugs. A testing matrix cannot predict all the ways the product will actually be used.
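The parent's point about outsiders being limited to fuzzing can be made concrete. Here is a minimal black-box fuzzer sketch in Python; the `toy_parser` target and its NUL-byte crash condition are invented for illustration, not taken from any real browser:

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes of a seed input."""
    out = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        i = rng.randrange(len(out))
        out[i] = rng.randrange(256)
    return bytes(out)

def fuzz(target, seed: bytes, iterations: int) -> list:
    """Feed mutated inputs to the target; collect any input that raises."""
    rng = random.Random(42)  # fixed seed so runs are repeatable
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes

# Hypothetical target: a toy parser that chokes on a NUL byte.
def toy_parser(data: bytes) -> None:
    if b"\x00" in data:
        raise ValueError("unexpected NUL")

found = fuzz(toy_parser, b"GET /index.html HTTP/1.0", 500)
```

Note what the outsider sees: only inputs and crashes. An insider who knows where the parser's assumptions live can go straight to the weak spots instead of mutating blindly.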

As the Firefox Security Manager I completely and vehemently disagree. I employ a team that spends 100% of their time "going on bug-hunts" looking for security bugs in Firefox, and I know my counterpart at Google is doing the same for Chrome. Our Bug Bounty programs (VRP? ugh, so very corporate) are an incentive for people who stumble on neat stuff to pass it on, not a substitute for doing the work ourselves.

Close. Divide the $570,000 by 3 for the number of years the program has been running and you get $190,000 a year in bounties vs. the $150,000 a year for a developer. Still not equivalent, but at least a little closer.

You also need to bear in mind that you still need to pay a developer to actually fix the issues uncovered by the bounty program... and any other staff needed to verify that incoming bugs are valid and worth paying for.
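Spelled out, the arithmetic in the comments above looks like this, using only the figures quoted in the story ($570k in Mozilla bounties over three years; a $100k salary plus 50% overhead):

```python
# Bounty program cost per year, from the story's figures.
bounty_total = 570_000
program_years = 3
bounty_per_year = bounty_total / program_years

# Fully loaded cost of one developer per year.
dev_salary = 100_000
overhead_rate = 0.50
dev_cost_per_year = dev_salary * (1 + overhead_rate)

print(bounty_per_year)    # 190000.0
print(dev_cost_per_year)  # 150000.0
```

So the program runs about 27% more than one developer per year, before counting the staff who triage reports and ship the actual fixes.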

I wonder if anything like this is going on internally. Let's say a developer at Google knows about a problem. He could either fix it and get his regular pay, or he could tell his friend about the bug and split the bounty with the friend who "discovered" it. Either way the bug gets fixed. And it probably gets fixed faster this way, since it's now an externally known vulnerability.

For a lot of these people, it might be a hobby. If it weren't for bug hunting for a bounty, they might be working on open source software instead with no payout at all. For those people, the payout is infinitely greater even if it amounts to $2.50/hr. Most people are just happy to have a hobby that breaks even, never mind nets a profit.

Many of the people working on these things will also have full time jobs as security researchers. The extra financial incentive for a bug just means that they'll be applying their bug-finding technique to your codebase instead of to someone else's.

The catch phrase is in the last sentence: "I am going to write me a new minivan this afternoon." In other words, the guy is going to write a load of buggy software and then fix it. Each bug he found (having intentionally left it in the software) and fixed would be worth $10. If he wants to buy a new minivan, how many bugs does he need to create and find?
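For what it's worth, the back-of-the-envelope math on the joke, assuming a hypothetical $25,000 minivan (my guess, not from the strip) at the $10-per-bug rate:

```python
# Hypothetical numbers: a $25,000 minivan at $10 per "discovered" bug.
minivan_price = 25_000
bounty_per_bug = 10
bugs_needed = minivan_price // bounty_per_bug
print(bugs_needed)  # 2500
```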

This is indeed true, especially for popular companies with rather mature SecOps that pay minimum wages for vulnerabilities that are genuinely hard to find or require a pretty darn good skill level to discover. Some of them even offer only swag in exchange for finding serious threats such as persistent XSS or authentication bypass. They may feature the researcher in some blog post to publicly thank him and attract the wannabe crowds.

Having said that, I myself have participated in several of these programs (with varying success) and have come to realize that Google and Facebook are probably the only VRPs currently paying reasonable wages for bugs in terms of cost efficiency for the researcher.

On the other hand, some of us just enjoy trying to find security bugs from time to time for fun (maybe because we are huge nerds), so these programs offer a great opportunity to test things without risking ending up in jail.

As a corporation, it absolves you of responsibility for things like health insurance in countries like America, where coverage is so expensive that individuals typically cannot afford it. In more advanced countries like Sweden or Canada, you're indirectly letting a government subsidize a component of your under-the-table employment of coders and hackers. Expenses like retirement, life insurance, dental coverage, and the cost of work-related activities like ice cream socials are then realized as savings.

I'm not sure I agree with you. I don't get the sense that they're outsourcing hacking (it's really more dev than QA, but not really either). They're crowdsourcing it and attempting to incentivize the finders to report it to them rather than to organizations who will use that information to create exploits, etc. I don't believe they've given up on having their employees attempt to create secure software; I think they're supplementing those efforts this way (and in other ways as well).

Maybe they should have compared the salary of a QA person instead of a developer. As a developer, I find lots of bugs, and then fix them. I also fix the bugs that QA finds, but usually spend a lot of time trying to figure out how to reproduce the issue ("uhh, first I clicked on this and then I clicked on that and then something weird happened").

Anyway, it's hard to crowdsource a product that has not been released yet, and most companies don't have the fan-bois and gurls to even consider this strategy.

Wait, you're saying QA isn't writing bugs that have detailed steps to reproduce? If the bugs can be easily reproduced and the reports don't say how, then they're not writing good bugs. (Obviously there are lots of bugs worth writing up that DON'T yet have reproducible cases... sometimes a conglomeration of those not-reproducible cases can lead to a reproducible case too.)

BTW, while I definitely think that a 'bug bounty' isn't as good as a company finding its own bugs, I wish ALL companies had an official way to report bugs.

Browsers have a very large installed base. There are enough bug spotters even if only a very small fraction of them actually hunt and report bugs. Even then, the bounty is for finding the bugs, not fixing them; fixing includes the cost of coming up with a fix, verifying that it fixes the problem, testing to make sure it does not create new problems, and rolling out the fix.

That means that there is a strong incentive for companies to create insecure, crappy software and then let so-called "white hat hackers" fix their bugs at a discount. And because any other form of disclosure is illegal, the companies are pretty well protected from negative consequences of their bugs and deflect from their own negligence by blaming "black hat hackers".

This is effective for the low-hanging fruit, i.e. the (relatively) easy-to-find security-related bugs. For things that require advanced techniques or expensive tools (like Fortify), it fails. Unfortunately, the harder-to-find bugs are still well within reach of spy agencies of all kinds, including a number that are allowed to do industrial espionage (like the US or France).

So while this looks good on the surface, it is really just making the problem worse. The only exception is software that has very low security needs.

For reliability, it is about as ineffective, as only easy-to-identify bugs will be tracked down.

I would hope Google is smart enough to know that you don't need an experienced developer to find bugs in their code. Aspiring developers fresh out of college are more than adequate. At $50k a pop, Google could have hired 7 PFYs spending 14,000 hours scouring code; hell, give them $1k bonuses for each bug to keep them motivated.

What I'm really shocked about is that you need a university to figure this out, or rather to do research on it. Companies figured this out quite some time ago, and anyone with a functioning brain can see why. What I'm more interested in is what kind of people spend their time participating in programs like this. The chances that you find a bug are not that big, and the financial reward, given the amount of time you will spend finding a bug, is probably also relatively small. From a company's point of view, on the other hand, it's great: many people working for you. For free. A job well done. :)

It's good to reward people who find security holes, at the very least because it's a safer bet than them selling the holes on the black market, or keeping them for themselves or for a government to exploit. But it should not be a replacement for having dedicated people actively working on your security who will report to you if something weird is there. Some holes could still go to the black market (or be found by government teams and never disclosed, because they make useful cyberweapons), so you must be proactive on your side.

High-tech regions tend to be high cost of living, too. In Silicon Valley, 100K USD may well be better than an entry-level salary for a dev with a 4-year degree. The cost of living is so high that this is less impressive than it sounds, though. It's a little less bad up around Seattle ("Silicon Forest"), where starting salaries are more commonly in the 70-90k range, but people break six digits very quickly. I haven't job-hunted anywhere else, but at least on the west coast, a 100k estimated average might actually be on the low side.

It's not surprising at all that piecemeal work, with no provision for healthcare, vacation etc. - much less reliable, ongoing income - is more profitable for business.

Why should technology workers be intrigued or inspired by this? Why is this information presented to technology workers as another avenue to praise Google's or Mozilla's cleverness? And why do technology workers so consistently dig their own graves by latching onto this kind of ideology and failing to fight for labor rights?

This works when you have millions of folks looking at your free software for long periods of time. If you're a commercial software vendor, however, with a $10,000 non-web-based package and at most a few thousand users (there are still a *lot* of these), then this approach is very unlikely to succeed. Commercial software users are rarely interested enough to report a bug that doesn't actively interfere with their daily work.

The article says that no one employee could find hundreds of bugs, and that's true. But when you hire employees you are building a process. Improving the process by writing a new QC script can eliminate hundreds of bugs over a couple of years. These are not attributed to one employee, and since the offending code is never committed, they aren't even counted as bug fixes.

Offering a bug bounty, on the other hand, is an unpredictable thing, and you'll get random fixes.

Developers don't "create" bugs. We don't sit down and say "hey, let's create a bug!" and then go about making one. Most of the time, we believe our code doesn't have bugs, because if we thought it had bugs, we'd write it a different way, or we'd already know about the bug and it'd be in our list of things to fix. Those things normally get fixed quickly because we knew they were there. But things we don't know about, well, how do you expect us to find them? I didn't find it whilst I was writing the code.