Adallom's Tomer Schwartz talks about disclosure, bounty programs, and vulnerability marketing with CSO, in the first of a series of topical discussions with industry leaders and experts.

Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focuses on disclosure and how pending regulation could impact it. In addition, we asked about marketed vulnerabilities such as Heartbleed, and about bounty programs: do they make sense?

CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A, or feel free to suggest topics for future consideration.

Where do you stand: Full Disclosure, Responsible Disclosure, or somewhere in the middle?

Tomer Schwartz (TS), Director of Security Research, Adallom:

At Adallom, we always practice responsible disclosure because, as a security company, we understand that part of our responsibility is toward "third parties" - users who aren't necessarily part of our customer base, but who may nonetheless be affected by our findings. One of the problems with responsible disclosure is the lack of a formal definition, which allows certain interest groups to take the word "responsible" and define it in a variety of ways.

Some vendors believe that the responsible thing to do is to wait indefinitely until a patch is published, but they don't hold themselves accountable to any timeline. In my experience, this can sometimes extend to over a year, without any user being informed and without any monitoring of whether the vulnerability is being exploited in the wild. The majority of security researchers, myself included, find that there is a lack of transparency with this route, and it can be very irresponsible of vendors not to disclose when they know that users are completely vulnerable.

As a security researcher, my opinion is that any choice about what to do with a vulnerability should be up to the researcher's discretion. Vendors should not coerce researchers either way. Due to bad experiences dealing with vendors that are unresponsive or aggressive, some researchers choose not to disclose at all. While I support responsible disclosure, I think it's a terrible state the industry has gotten into.

If a researcher chooses to follow responsible / coordinated disclosure and the vendor goes silent -- or CERT stops responding to them -- is Full Disclosure proper at this point? If not, why not?

TS: Communication is the carrot, full disclosure is the stick. Unresponsive vendors are abundant, and being soft about disclosure deadlines will not make the software industry any better. I think the Zero Day Initiative (ZDI) has done a great job standing behind their position and training the software industry to accept hard deadlines. There are complaints that full disclosures help adversaries, but it’s actually the reverse.

Let's consider the scenario of a software vendor that doesn't patch a vulnerability for a year. If the researcher finds this vulnerability and waits to disclose it, someone else might find it and exploit it in that period of time - a whole year in this case, plenty of time for any adversary willing to invest resources in zero-day research. However, if the researcher chooses full disclosure, the same vulnerability will usually be patched in a matter of days. In this case, the period of time users are exposed to the risk is much shorter, even though to them it might look like a scary couple of days.

For the same adversary, the benefit of investing resources in exploiting that unpatched, publicly disclosed vulnerability decreases dramatically, because she also knows the vulnerability is about to be patched. While researchers are sometimes crucified in the media for choosing full disclosure, in some cases it is the lesser evil.

Bug Bounty programs are becoming more common, but sometimes the reward being offered is far less than the perceived value of the bug / exploit. What do you think can be done to make it worth the researcher's time and effort to work with a vendor directly?

TS: Bug Bounty programs are a double-edged sword. A program is usually a sign to researchers that a company has a good understanding of the process and is willing to cooperate with researchers.

The monetary incentive also attracts a lot of researchers, which is good for the security of those vendors. On the other hand, that same monetary incentive can be abused by vendors to force longer disclosure timelines, and even to require that the researcher not go public with the details.

I tend to value a vulnerability or an exploit based on its potential market value in the underground market, since that is proportional to its value to an adversary, and no Bug Bounty program can compete with that today; I doubt any program ever will. Many Bug Bounty programs underestimate the time and effort required to find a specific vulnerability.

Granted, the researcher disclosing a vulnerability should take that into consideration before disclosure, or even before the research starts. Having said that, the ability to publish an advisory publicly presents a unique opportunity for the researcher to build his reputation, which is one of the reasons junior researchers usually do it. This is another reason why I'm against vendors who require that the advisory never be published - it directly harms the researcher.

Do you think vulnerability disclosures with a clear marketing campaign and PR process, such as Heartbleed, POODLE, or Shellshock, have value?

TS: Even though I don't like it at all, I have to acknowledge that it has some value. Without it, the patching cycle for those same vulnerabilities would have been much longer. However, as more and more vulnerabilities are “branded”, PR firms are starting to cry wolf and pitch any vulnerability as "the next Heartbleed"; VENOM was a recent example.

Occasionally, critical vulnerabilities are patched without anybody understanding how critical they are, usually because it can take a lot of time to understand the potential impact of a single vulnerability - sometimes longer than the time required to find it in the first place.

The vulnerability branding trend trains the market to differentiate between the "cool," branded vulnerabilities and the old-school CVE-####-##### entries; this harms the industry, since the latter can sometimes be as important as the former, if not more so. In general, the faster the patching cycles, regardless of media attention, the better.

If the proposed changes pass, how do you think Wassenaar will impact the disclosure process? Will it kill full disclosure with proof-of-concept code, or move researchers away from the public entirely preventing serious issues from seeing the light of day? Or, perhaps, could it see a boom in responsible disclosure out of fear of being on the wrong side of the law?

TS: It's hard to predict what the exact terms will be when (or if) Wassenaar becomes law, and the wording can drastically impact the outcome. Any implementation of a law takes time, and enforcement is hard, so any change will probably be gradual rather than immediate.

The Computer Fraud and Abuse Act (CFAA) is already pretty vague and has been exercised in court in a variety of ways, so even without Wassenaar passing, some areas of security research are already exposed to legal threats. In the long run, I doubt this type of change to the law will encourage disclosure; in my opinion, it is more likely to increase self-censorship.

Currently it's very hard to estimate the magnitude of this problem, as it's hard to tell how many researchers prefer to keep their names out of their own findings. It's already a complicated situation that Wassenaar is not going to make any easier.