If you keep an eye on security issues, and as a DBA you should, then you know that there's an ongoing debate on disclosure. Some people want all security issues disclosed as soon as possible and others argue vendors should be made aware and the issues not released until a patch is available. I'm kind of in the middle here. I see both arguments and think that we should know about them as soon as we can, but vendors should get some time to patch. I'm kind of a "give them a month" guy, thinking that if it cannot be patched in that time that perhaps we are better off knowing there's an issue.

Recently Microsoft held its 3rd Blue Hat Security Conference, where they bring a bunch of security researchers or hackers to Redmond, put them together with developers, engineers, executives, etc., and talk about what is wrong with Microsoft products relating to security. It's a great idea, and from the feedback I've heard from MS employees, it has really helped them better understand the problems with how the products have evolved. And it shows that Microsoft is working on security. They're not there yet, but they're getting better all the time, and it's a bold move inviting people to come tell your employees where they are making mistakes.

Time will tell how much gets out, but I think this is one of the best things Microsoft has done for security. It really shows a commitment to making things better. Now if they can just get the fixes for the issues incorporated into products by the engineers.

I have to agree with you (not that it hurts or anything) about the security point - falling in the middle between immediate disclosure and no disclosure. However, a month is too short. Before you get all excited, let me explain. First, I don't really know when the clock begins on the month. Validating a security concern can sometimes take a fair amount of time - especially if it's complex. Once the issue has been verified to be real, it can take significant time to architect a fix. We'd all like to think that companies are full of the brightest minds, and many of them are, but security-related problems can be very challenging to address. You need to be sure that you're not blowing customers out of the water by making the ramifications of applying the fix enormous. In addition, you need to be sure you're not opening up another security hole. Once the fix has been architected and coded, it needs to be tested. Again, this takes time. And we want it to take time so we're sure the fix is good under stress. If you start the clock after the issue has been verified, I'd be more likely to agree with you. However, if you start the clock as soon as it's reported to the company, we're going to have a problem.

I work on SQL Server and I'm proud of our track record. We do a lot of things right and we spend time thinking through the customer impact. We absolutely do not sit on security vulnerabilities, and we work incredibly hard to get the fix out as soon as possible. But the worst thing for us would be to be in a situation where we were forced to get a fix out - a situation that would probably lead to us having to "fix" the issue multiple times or release subsequent fixes that address new issues caused by the first fix.

Customers need to be confident enough in the fix to apply it right away. So long as we continue to release solid fixes, they'll deploy them. But if the system causes rushed behavior and bad fixes, everyone loses.

Good points Dan. Appreciate the response and kudos to you. The SQL Server team has done a great job.

Maybe a month is too short, and honestly since I have no idea of the timing, I don't know what a good time frame is, but I think there should be some limit. Some companies have sat on issues before. There needs to be some push to get things fixed rather than delayed so some other new feature can be worked on.

I agree that some companies need a kick in the pants to get security fixes out. The hammer of full public disclosure of the threat could be that kick. Or we could force companies (um, the SEC could, I suppose) to disclose statistics on handling security threats. I think this would benefit everyone - granted, it probably means more "paperwork" for someone like me.

I haven't thought through exactly how this would work. Or maybe we could do a rating system - think Moody's or Morningstar - for companies that maps to their handling of security threats. Would you want to do business with a company that receives a D? How about a C? This could reduce some of the spin we get from software companies on their responsiveness.

Actually, I think the insurance industry will have to drive this. As they start to cover and insure losses, they'll bring pressure on disclosure and more focus, as they have with workplace safety and OSHA regulations.