6 Answers

You should let the developer(s) know privately so that they have a chance to fix it. After that, if and when you go public with the vulnerability, you should allow the developer enough time to fix the problem and whoever is exposed to it enough time to upgrade their systems. Personally, I would allow the developer to make the announcement in a security bulletin in most cases rather than announcing it myself. At the very least, I would wait for confirmation that the vulnerability has been fixed. If you have time and have access to the source code, you could also provide a patch.

Personally, I think responsible disclosure is the best approach from an ethical standpoint, and it worked well when Dan Kaminsky revealed the details of the DNS cache poisoning vulnerability. But a lot depends on the company or group you are dealing with, and on the user base that will be affected.

Responsible disclosure works fine. Most vendors have a public disclosure policy, which should be respected as well. Vendors often ask for a grace period, used as a buffer to ensure that as many customers as possible have applied the patches. Following these steps will usually grant the finder the right to be publicly acknowledged, as Microsoft, Oracle, SAP, and other vendors do.
– Phoenician-Eagle Nov 12 '10 at 15:37

@VirtuosiMedia does a great job of outlining "Responsible Disclosure".

I would add two points:

Work with the vendor (if you can), to ensure they understand it fully and don't issue a half-baked patch.

If the vendor disregards or ignores you, keep trying. However, if they claim it's not a vulnerability, go ahead and publish, as loudly as possible. If they promised a fix but don't deliver, press them for an answer together with a definitive timeline they commit to. If they keep postponing, you might eventually tell them you're going to publish anyway, and then give them a short, limited window to actually fix it.

Generally, it depends on the vendor's response. Good practice is for the researcher to inform the vendor about the vulnerability and then, during that conversation, agree on terms for publishing the PoC/exploit. Ultimately, the researcher chooses what to do with the vulnerability: publish later or not at all. The vendor then releases a patch or a new product version. Maybe. But as experience shows, not all vendors are so nice: some fix bugs silently, without informing end users or researchers; some prefer to ignore the researcher; others even try to sue. That's why anonymity is sometimes the preferable way to initiate communication with an unknown vendor.

I would also note that there are bug bounty programs, such as those offered by Google and Mozilla. Besides, other parties buy vulnerabilities: ZDI, iDefense, SNOsoft, the upcoming "exploit hub", etc.
So there are at least three ways to inform a vendor: directly, by publishing the vulnerability information on a mailing list, or via third-party companies.

Many of the third-party companies offering to buy vulns don't do it to notify the vendors for you. They do it to use the vulnerabilities for their own nefarious ends (even if that's just slightly defrauding their consulting clients).
– AviD♦ Nov 12 '10 at 13:06

Well, I can't speak for all companies, but as far as I know, ZDI really does notify vendors.
– anonymous Nov 12 '10 at 13:14

If they have a public issue tracker, see if you can file a bug with a "private" or "security" label.

Regardless of whether they have an issue tracker, email security@companyname and let them know.
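As an aside, many vendors now publish a machine-readable security policy file, security.txt (RFC 9116), served at /.well-known/security.txt, whose Contact fields list the preferred reporting channels in order of preference. A minimal sketch of extracting those contacts, assuming the standard field format (the example vendor and addresses are hypothetical):

```python
def parse_security_txt(text):
    """Return the values of all 'Contact:' fields, in order of preference."""
    contacts = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        field, _, value = line.partition(":")  # split on the first colon only
        if field.strip().lower() == "contact":
            contacts.append(value.strip())
    return contacts

# Hypothetical example file, as it might appear at
# https://example.com/.well-known/security.txt
example = """\
# Security policy for a hypothetical vendor
Contact: mailto:security@example.com
Contact: https://example.com/report
Expires: 2026-01-01T00:00:00Z
"""

print(parse_security_txt(example))
# → ['mailto:security@example.com', 'https://example.com/report']
```

If no such file exists, falling back to security@ the vendor's domain, as suggested above, remains a sensible default.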

If they don't respond fairly promptly (see "Window of Disclosure" in Schneier article below) then you need to think about disclosing it more widely. Look for mailing lists that security academics/professionals lurk on and ask them how they report issues to the vendor in question. They may be able to make introductions to the right place in the organization.

If all that fails, read the Schneier piece and think about whether full disclosure would make you part of the problem or part of the solution.

Bruce Schneier gives an argument for full disclosure in certain circumstances based on a "be part of the solution, not part of the problem" standard. It's definitely worth a read.

This is the classic "bug secrecy vs. full disclosure" debate. I've written about it previously in Crypto-Gram; others have written about it as well. It's a complicated issue with subtle implications all over computer security, and it's one worth discussing again.

...

This free information flow, of both description and proof-of-concept code, is also vital for security research. Research and development in computer security has blossomed in the past decade, and much of that can be attributed to the full-disclosure movement. The ability to publish research findings -- both good and bad -- leads to better security for everyone. Without publication, the security community can't learn from each other's mistakes. Everyone must operate with blinders on, making the same mistakes over and over. Full disclosure is essential if we are to continue to improve the security of our computers and networks.

...

Second, I believe in giving the vendor advance notice. CERT took this to an extreme, sometimes giving the vendor years to fix the problem.

...

I like the "be part of the solution, not part of the problem" metric. Researching security is part of the solution. Convincing vendors to fix problems is part of the solution. Sowing fear is part of the problem. Handing attack tools to clueless teenagers is part of the problem.

This is one heck of a complex topic. I was involved in the disclosure of the TLS renegotiation bug a couple of years ago, and believe me, we tried very hard to be "responsible", but in the end, we succeeded mainly in pissing off everyone around us and (perhaps) delaying the actual release of the fix. Not to say that vendor notification is necessarily bad, only that it's really easy to get whipsawed and wind up causing as much harm as good, or worse.

In our case, it took action by the IETF (RFC 5746) to solve the problem, and even though we had an Internet-Draft ready on the day the bug leaked, the actual work of debating and deciding on the solution took about three more months, and didn't really get started in earnest until after the disclosure.

Anyway, this isn't an answer to your question, but it's one of the more interesting disclosure stories I'm aware of. More on that story in the 2010 ShmooCon keynote I did with Marsh Ray, who discovered the problem.