Tuesday, October 29, 2013

So, I suck at blogging consistently. In my defense, it's been a tough month (but that's another story for another time). This post is a follow-up to two previous posts. In the first post, I made an argument for bug bounties. My good friend Lenny Zeltser posted a response making a couple of good points, which I addressed in a follow-up post. But I failed to address Lenny's question there, deferring it to a second follow-up. Unfortunately, that took almost a month to write. For the sake of completeness (and those too lazy to click the links), Lenny's comment/question was:

While some companies have mature security practices that can
incorporate a bug bounty program, many organizations don't know about
the existence of the vulnerability market. Such firms aren't refusing to
pay market price for vulnerabilities--they don't even know that
vulnerability information can be purchased and sold this way. Should
vulnerability researchers treat such firms differently from the firms
that knowingly choose not to participate in the vulnerability market?

I addressed everything but the last question (I think) in the last post. But Lenny raises a serious ethical concern here. Should we as security researchers treat firms differently based on their participation in (or knowledge of) the vulnerability market? There is an implied question here that may be difficult to answer: namely, how do you as a security researcher determine whether a firm has knowledge of the vulnerability market?

I would propose that one way to confirm knowledge is an explicit "we don't pay for bugs" message on the company's website. This implies that they know other companies pay for bugs, but they refuse to lower themselves to that level. IMHO, these guys get no mercy. They don't give away their research (their software), so I'm not really interested in giving mine away either. Ethically, I think I'm good here to release anything I find (and fire for effect).

Generally, I put any company with a disclosure policy (but no bug bounty) in the same category as those who simply refuse to pay. If you have a published disclosure policy, the claim that you don't also know about bug bounties doesn't pass the sniff test. Even if there's no explicit policy on paying (or not paying) bug bounties, the omission in and of itself means that you're not paying. Bad on you. Again, I argue for no mercy, using the same "your time isn't free, why should mine be" argument.

In the two categories above, it's pretty easy to slam a company by using full public disclosure or third-party sale. What about when neither of these conditions has been met? What sorts of disclosure are appropriate in these cases? Is third-party sale of the vulnerability appropriate?

In my opinion, this can be handled on a case by case basis. However, I'm going to take the (probably unpopular) position that the answer has as much to do with the security researcher as it does with the target company. For instance, I would expect a large vulnerability research firm to exercise some level of responsible disclosure when dealing with a software company that employs two full time developers. I would hope that they would work to perform a coordinated disclosure of the vulnerability.

However, I don't think an independent vulnerability researcher with no budget has much motivation to work closely with a large software vendor that has no disclosure policy. If the software firm is making money, why expect an independent researcher to work for free? The security researcher may find himself in a sticky situation if the company has no public bug bounty. Does the company have an explicit policy not to pay for bugs? Is the lack of a disclosure policy just an oversight?

The independent researcher might prefer to give the vulnerability to the vendor, but also has rent to pay. In this case, should the researcher approach the vendor and request payment in exchange for the bug? This seems to be at the heart of what Lenny originally asked about. Clearly this is an ethical dilemma.

If the researcher approaches the vendor asking for money, only three possible outcomes exist:

The vendor pays a fair price for the vulnerability

The vendor refuses to pay any price (and may attempt legal action to prevent disclosure)

The vendor refuses to pay, but learns enough from the report to discover and patch the bug on its own

Two of these outcomes are subpar for the researcher. Assuming they all have equal probabilities of occurrence (in my experience they don't), the answer is already clear. Further, in the two bad cases, the security researcher may have limited his ability to sell the vulnerability to another party: in one case because of pending legal action, in the other because enough details were released to the vendor to substantiate the bug that the vendor can discover and patch it on its own.
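The expected-value reasoning above can be sketched in a few lines of Python. Every payoff and probability below is invented purely for illustration (and, as noted, the outcomes are not equally likely in practice), but the sketch shows why the equal-odds assumption already makes approaching the vendor a losing bet against a market sale:

```python
# Hypothetical illustration of the researcher's expected payoff.
# All dollar figures and probabilities are made up for this sketch.

def expected_payoff(outcomes):
    """Sum of probability * payoff across all possible outcomes."""
    return sum(p * v for p, v in outcomes)

# Outcome 1: vendor pays a fair bounty (say $5,000).
# Outcome 2: vendor refuses and threatens legal action ($0, and a
#            third-party sale may now be blocked).
# Outcome 3: vendor refuses but patches from the details given ($0,
#            and the bug loses its market value).
equal_odds = [(1/3, 5000), (1/3, 0), (1/3, 0)]
print(expected_payoff(equal_odds))   # roughly 1666.67

# Compare with a direct open-market sale at, say, $4,000.
market_sale = [(1.0, 4000)]
print(expected_payoff(market_sale))  # 4000.0
```

Under these (invented) numbers, approaching the vendor first is worth well under half of a direct sale, and skewing the probabilities toward the two bad outcomes only widens the gap.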

So my answer to Lenny's question is a fair "it depends." I'm not at all for a big corporate entity picking on the little guy. But if the tables are turned, it sounds like a payday to me (whether or not the existence of a vulnerability market can be provably known).

Only one question remains in my mind: what if there is no bug bounty, but because the attack space for the vulnerability is very small, there is also no market for it? Well, in this case, disclosure is coming; it's just a question of whether the disclosure is coordinated with the vendor. I don't have strong opinions here, but I feel it's up to the researcher to evaluate which disclosure option works best for him. Since he's already put in lots of free labor, don't be surprised when he chooses the one most likely to bring in future business.

Thursday, October 3, 2013

I recently wrote another post on the state of security vulnerability research. I discussed my reluctance (shared by many other researchers) to work for free. To that end, I encouraged the use of "bug bounties" to motivate researchers to "sell" vulnerabilities back to vendors rather than selling them on the open vulnerability market. One key point is that simply setting up a bounty program doesn't work unless the rewards are competitive with the open market prices.

I expected some whining from a couple of software companies about my refusal to test their software for free. I got a couple of emails about that, but what surprised me more was the response I got from a trusted colleague (and friend) Lenny Zeltser. Lenny wrote:

While some companies have mature security practices that can
incorporate a bug bounty program, many organizations don't know about
the existence of the vulnerability market. Such firms aren't refusing to
pay market price for vulnerabilities--they don't even know that
vulnerability information can be purchased and sold this way. Should
vulnerability researchers treat such firms differently from the firms
that knowingly choose not to participate in the vulnerability market?

As luck would have it, I'm actually at a small security vendor conference in Atlanta, GA today. I polled some vendor representatives to find out whether or not there is a bug bounty program for their own software. I also asked whether they are aware of the vulnerability market. The results were fairly telling. First, let me say that this is not a good sample population (it was used merely for expediency). Problems I see with the sample:

These vendors self-selected to attend a security conference. Most of them sell security software. They are probably more "security aware" than other vendors and therefore may have more inherent knowledge of security programs (the vulnerability market and bug bounties).

The people manning the booths are most likely not app developers and probably not involved with the SDLC or vulnerability discovery.

The poll shows that less than half of the vendors surveyed are familiar with the vulnerability market and that the vast majority do not offer bug bounties. To be fair, many were confident that, being security companies, they don't suffer from insecure coding practices; therefore, their products don't have vulnerabilities and there's no reason to think about a bug bounty. Lenny's assertion seems to be correct: the organizations unaware of a vulnerability market probably aren't mature enough to implement a bug bounty. But some organizations are aware of the market, and yet they still don't want to implement a program.

I can only say that attitude is myopic at best. Practically speaking, if you don't have any vulnerabilities, then a bug bounty program costs you nothing. Why not implement one? You need a policy drafted, some legal review, a web page announcing the program, and some staff to respond to vulnerability reports (note: you'll need the last one anyway, so it's not an additional cost). I'd like to take the position that a bug bounty is never a bad idea. If you disagree, please tell me why. I'm serious about this. If you or your company does software development and you refuse to implement a bug bounty, please share your reasoning (post it here as a comment if you care to, so everyone can see). If your reasoning is purely philosophical, I'm sorry to tell you that I think that ship has sailed. I'd like to collect a sample set of reasons why companies either refuse to pay bug bounties at all or want to get by without paying market prices.

In my next post, I'll address the second part of Lenny's comment: should vulnerability researchers treat smaller, immature organizations differently than those who knowingly refuse to participate in the vulnerability market? Look for that post early next week.

Wednesday, October 2, 2013

I'm a big fan of responsible disclosure - when you pay me for it. What I'm not into is doing vulnerability discovery on a product for free. Let's face it: if you paid your developers better (meaning you bought better developers), you wouldn't need me to do vulnerability discovery for you. It's really a case of pay now or pay later.

Now I know what you're thinking: huge companies like Microsoft have top notch developers, but because of the complexity of their code they have issues. I'll agree with that. But here's the rub: if they are that big, they have money. They clearly can also pay to get code review and binary analysis done on their own code. I view paying independent security researchers as an alternative to (or extension of) the internal code review team.

At the end of the day, the point is that I don't work for free. I don't do anything for free. You don't give me free software, why should I give you free vuln discovery? If you're looking at this blog post, trust me, in the end it's financially motivated. What's my angle? I'm trying to get additional companies to start paying for vulnerability discovery/disclosure (and that's a good thing for me).

Bug Bounties
There's been a flurry of news this week about the Yahoo XSS bug bounties that paid a whopping $12.50. To add insult to injury, there are some reports that the $12.50 could only be spent in the Yahoo swag store. Facebook tried to skate on a bug bounty last month, albeit for a horrendously written vulnerability report. But at least these companies actually offer bug bounties. What does it say about a company that doesn't offer bug bounties? I don't know, but while bug bounties are back in the news, let's examine one rather extreme case of "we don't do that here."

What if we don't offer a bug bounty?
Box.com is a great example of a company that refuses to even offer bug bounties. But they go a step further than simply not offering bug bounties and use veiled threats of legal action to force researchers to comply with their responsible disclosure standards.

While this is probably the most egregious example I've seen, I have a real problem with companies that don't pay bug bounties but then get all butt hurt when you disclose a vulnerability. There's a vulnerability market out there, and companies need to understand that they can either enter the market (in the form of bug bounties) or lose to any other bidder. If you develop software and don't offer a bug bounty, you deserve whatever disclosure comes your way.

While we're on the topic of bug bounties, it might be worth noting that vulnerability discovery is a purely speculative market, but so far money paid to researchers in bug bounties doesn't really reflect that. I expect that to change as nation states and private firms begin developing increasingly sophisticated offensive cyber capabilities.