IE zero-day bug leads to squabble between Microsoft, researcher

A new zero-day IE flaw is now circulating after being publicized on January 1 …

[Editor's Note: The original version of this story was published before receiving proper vetting, and many of you rightly chastised us for it. We apologize and present the following coverage, which more completely examines the issue.]

Microsoft is at odds with a researcher employed by Google who published a zero-day Internet Explorer vulnerability on New Year's Day. The vulnerability was discovered using cross_fuzz, a browser fuzzing tool created by Google researcher Michal Zalewski, who says he gave Microsoft more than six months of warning before going public with the flaw. That hasn't stopped Microsoft from sharply disagreeing, however, with the company arguing that Zalewski has now put thousands of IE users at risk.

According to Zalewski's published timeline of events, he first told Microsoft about the vulnerability in July of last year and provided the company with copies of cross_fuzz for independent verification. Zalewski informed the company that he planned to release the tool in January, and Microsoft acknowledged the report at that time—confirmed on Tuesday by Microsoft spokesperson Jerry Bryant.

Microsoft said it was unable to reproduce any problems using the cross_fuzz tool upon being informed of the issue in July, despite Zalewski's insistence that he saw "multiple crashes and GDI corruption issues" in IE. The company claims it was only notified on December 21 of a new version of cross_fuzz that could cause a potentially exploitable crash.

Microsoft immediately issued Security Advisory (2488013), confirming that the vulnerability impacted all supported versions of IE. Microsoft explained that the vulnerability exists due to the creation of uninitialized memory during a CSS function within the browser, making it possible for the memory to be leveraged by an attacker with a specially crafted webpage.

"We immediately worked to reproduce the issue with the updated and original tool and are currently investigating it further to determine if it is actually exploitable," Bryant told Ars.

This is when the stories diverge, however. Zalewski says he heard virtually nothing from Microsoft until mid-December, at which point others were able to reproduce the problem, including by means of the original cross_fuzz version used last July. According to Zalewski, Microsoft was suddenly concerned about the potential PR fallout and claimed the IE problems only surfaced after he had updated his code. Zalewski said he confirmed that the problem was unchanged by running both the new and old versions of the fuzzer and told Microsoft again that he planned to release the tool in January.

"Response from [Microsoft Security Research Center] confirms that these crashes are reproducible with the July 29 fuzzer; unclear why they were unable to replicate them earlier, or follow up on the case," Zalewski wrote on December 29. As promised, he released the fuzzer on January 1.

Now, Microsoft is accusing Zalewski of increasing the risk to IE users—the company says attackers may find a way to exploit the flaw before a patch can be tested and distributed. Zalewski insists that Microsoft knew about the flaw and his plan to release in January for more than six months, however, and did nothing until it was almost too late.

92 Reader Comments


A new zero-day IE flaw is now circulating after being publicized on January 1. Microsoft hasn't yet issued a patch, and is now engaged in a he-said, she-said fight with the Google employee who discovered and reported the flaw last July.

While I agree in principle with the practice of announcing that a flaw will be published by a certain date, the back and forth and lack of clarity make it seem as if the responsible thing to do would have been to delay the release. The entire point of public release should be to pressure the company to respond and take action, not to actively worsen the security situation once there is dialog and (apparently) progress being made, even if it comes seemingly at the eleventh hour.

The presence of multiple versions of the exploit passed along at different times, along with the apparently earlier correspondence, makes everything sound even murkier and points more in the direction of a delay in release being the better choice. They can argue about which versions worked when and who did/didn't see behavior with which versions, but it sounds as if releasing "on schedule" (especially on the 1st, right during the holidays) was a poor decision by the researcher either way.

Was there really anything to be lost by delaying the release even just to the end of the month (which would have still been "in January" even)?

Was there really anything to be lost by delaying the release even just to the end of the month (which would have still been "in January" even)?

I don't think there's any indication that delaying disclosure seven months (what you propose) would have been any better than delaying six months (which is what happened). Remember that the security researcher in question found strong evidence that malware writers already knew about the bug before going public. At that point, giving Microsoft more time would just serve to give the malware writers a larger attack window. At least the public now knows enough to defend themselves (whether they will or not is a separate question--just because people don't stop using the car seat doesn't mean the manufacturer shouldn't issue the recall).

Yes, by all means, delay for another month while Lord knows what can happen. Something tells me there's very little MSFT can't fix in a week if it's public. If it's private, however, it takes them months?

Yes, by all means, delay for another month while Lord knows what can happen. Something tells me there's very little MSFT can't fix in a week if it's public. If it's private, however, it takes them months?

One month to publish a fix for something as critical and widespread as Windows is not ridiculous.

The problem is that no one depends on Google. If they mess something up, people will use Bing instead. If MS messes something up, many many things can go wrong.

It's a good thing that Microsoft carries out extensive testing (although it slows them down) before releasing stuff.

Yes, by all means, delay for another month while Lord knows what can happen. Something tells me there's very little MSFT can't fix in a week if it's public. If it's private, however, it takes them months?

One month to publish a fix for something as critical and widespread as Windows is not ridiculous.

The problem is that no one depends on Google. If they mess something up, people will use Bing instead. If MS messes something up, many many things can go wrong.

It's a good thing that Microsoft carries out extensive testing (although it slows them down) before releasing stuff.

My point is that they've had six months thus far and they say they couldn't come up with a patch. I bet they'll have a patch out within a week now.

Given Microsoft's poor, to literally non-existent, bug tracking capabilities, and Microsoft's well publicized responses to anybody who actually makes a bug known to the public, I find it very difficult to accept the concept that the Google employee is at fault for anything here. I find it very easy to accept that somehow this Google employee did manage to notify Microsoft through its vast web of complete and total incompetence, and that Microsoft, in its complete and total incompetence, just ignored the July report.

One month to publish a fix for something as critical and widespread as Windows is not ridiculous.

The problem is that no one depends on Google. If they mess something up, people will use Bing instead. If MS messes something up, many many things can go wrong.

It's a good thing that Microsoft carries out extensive testing (although it slows them down) before releasing stuff.

I think it says more about how dangerous it is for the computing world to be so dependent on a single corporation. Diversification and variety in the computing ecosystem would grant single bugs less of a vulnerable population to exploit.

No sympathy for Microsoft here. I lean towards disclosure being the best for all involved anyway, but a few of things about their part of this bothers me:

1. Why does someone else have to write software and file bugs to find what are apparently rather obviously testable bugs? (e.g. where is ms_fuzz 1.0?)

2. Why after becoming aware of the general class of tests that find these bugs did they not build their own/contribute to the original tool? (fuzzer_now_with_ms_foo 2.0?)

3. Six months of heads-up is sufficient by any measure, and it sounds like the proof that the July 29 edition worked was about the easiest test environment ever... if they can't be bothered to fix it... tough.

4. Apple/Firefox don't seem to be complaining here, why is this sooo unfair to Microsoft but fine enough for them?

The problem is that no one depends on Google. If they mess something up, people will use Bing instead. If MS messes something up, many many things can go wrong.

So, nobody depends on chrome but absolutely everybody on IE?

If someone at MS finds a security vulnerability in chrome and google doesn't fix it for six months, while there are indications that malware writers are already aware of the problem - by all means make it public.

Microsoft said it was unable to reproduce any problems using the cross_fuzz tool upon being informed of the issue in July, despite Zalewski's insistence that he saw "multiple crashes and GDI corruption issues" in IE. The company claims it was only notified on December 21 of a new version of cross_fuzz that could cause a potentially exploitable crash.

The problem is that no one depends on Google. If they mess something up, people will use Bing instead. If MS messes something up, many many things can go wrong.

So, nobody depends on chrome but absolutely everybody on IE?

If someone at MS finds a security vulnerability in chrome and google doesn't fix it for six months, while there are indications that malware writers are already aware of the problem - by all means make it public.

Yes, considering that many applications just use the IE API, MS can mess something up much worse than Chrome could.

Given Microsoft's poor, to literally non-existent, bug tracking capabilities, and Microsoft's well publicized responses to anybody who actually makes a bug known to the public, I find it very difficult to accept the concept that the Google employee is at fault for anything here. I find it very easy to accept that somehow this Google employee did manage to notify Microsoft through its vast web of complete and total incompetence, and that Microsoft, in its complete and total incompetence, just ignored the July report.

Huh? As much as I despise Connect from a usability point of view, I don't see how it's not a bug-tracking database. How does it constitute "literally non-existent bug tracking capabilities"?

I said this in the original discussion thread, but I would argue Zalewski didn't even have to give MS any time to react. It has never been, nor should it ever be, the bug-discoverer's responsibility to make sure other IE users aren't harmed by the problem. Any action he takes to do so is pure philanthropy on his part.

If he reveals the problem immediately without any disclosure to MS, he's still doing the community a service in the long run. I feel better knowing upfront that I should pause my use of IE while the problem is fixed rather than finding out later that there was a window of time I could've been hacked without my knowledge. Knowledge is power, and he is simultaneously giving that power to the consumer and the hacker, the latter of which arguably already had the power in this case.

For the record, MS claims the fuzzer tool didn't work for them in July.

For the record, the tool did work at that time:

"Response from [Microsoft Security Research Center] confirms that these crashes are reproducible with the July 29 fuzzer; unclear why they were unable to replicate them earlier, or follow up on the case"

Given Microsoft's poor, to literally non-existent, bug tracking capabilities, and Microsoft's well publicized responses to anybody who actually makes a bug known to the public, I find it very difficult to accept the concept that the Google employee is at fault for anything here. I find it very easy to accept that somehow this Google employee did manage to notify Microsoft through its vast web of complete and total incompetence, and that Microsoft, in its complete and total incompetence, just ignored the July report.

Huh? As much as I despise Connect from a usability point of view, I don't see how it's not a bug-tracking database. How does it constitute "literally non-existent bug tracking capabilities"?

There was one project I was contributing to with unfixed bugs that had its Connect profile closed because they were "no longer accepting bugs", and all open bugs were closed as "wontfix".

Connect is largely showmanship that you kinda cross your fingers will get some attention paid to an issue.

Was there really anything to be lost by delaying the release even just to the end of the month (which would have still been "in January" even)?

I agree that nothing would have been "lost", but what needs to be considered is that there are more browsers out there than just IE. The tool that was released will be extremely helpful for anyone trying to build a better and more secure browser. Unlike most security tools, though, the author went to MS first to get a large vulnerability fixed before the tool was released publicly. MS had the time to fix it (and should have jumped on it as well, since their XML files have been the root causes of some bigger issues out there, including a few of the issues that caused Stuxnet http://www.youtube.com/watch?v=dd7o2T5osgc )

I applaud the author for the tool he created, as it will allow the people who update Firefox, Chrome, Opera, Safari, Kommander, Seamonkey, Lunascape, and many others to find and patch what malware vendors have been using for a while to perform drive-by installs and downloads.

2) How do users gain from the researcher publicly revealing the flaw? Security researchers sometimes publicly announce they found a flaw, they have informed MSFT (or whoever), but decline to reveal. Why couldn't the researcher do that?

Given Microsoft's poor, to literally non-existent, bug tracking capabilities, and Microsoft's well publicized responses to anybody who actually makes a bug known to the public, I find it very difficult to accept the concept that the Google employee is at fault for anything here. I find it very easy to accept that somehow this Google employee did manage to notify Microsoft through its vast web of complete and total incompetence, and that Microsoft, in its complete and total incompetence, just ignored the July report.

You're using inaccurate comparisons. Microsoft is a closed source software shop, so the issue tracking systems for their products are also closed systems. A better comparison would be to something like, say, another closed source competitor such as Apple. Apple has a form to let you submit bugs, but you can only see those you've submitted - not those of others.

2) How do users gain from the researcher publicly revealing the flaw? Security researchers sometimes publicly announce they found a flaw, they have informed MSFT (or whoever), but decline to reveal. Why couldn't the researcher do that?

He revealed the flaw to them 6 months ago, and waited then. Microsoft decided to do nothing about it at the time. Six months later, he informs them again, but they give him the same response. Given the circumstances, you are then presented with a choice: keep quiet about a vulnerability that you think might be exploited out in the field (and have already kept quiet about for half a year), or go public to try to encourage them to fix it? It's an ethical dilemma to which there's no good answer. That's what makes these issues so challenging. Yes, it's potentially harmful to users if the exploit is not already known, but it's even more harmful to keep quiet about it when a) you don't expect it to be fixed, and b) you believe people are already using the exploit. Drawing public attention to it significantly ups the priority of getting it fixed.

Btw, it isn't entirely clear from this article but it sounds like the researcher in question has had to hold off on releasing his security tool to the public (as opposed to a few testers etc) as it would have revealed the IE flaw to anyone using it. Is that actually the case?

If so then sitting on a completed tool with broader applications for 6 months seems to be an admirable display of restraint. Especially if the other party seems to be mostly twiddling their thumbs.

I don't understand those defending Microsoft or lambasting Google. Microsoft had the tool to discover this bug in July 2010, and correspondence that should have triggered testing with the tool. The fact that Microsoft (claims that it) didn't find the bug back then, and released PR to this effect only after the release of the tool is interesting.

Does anyone think that given an extra month or year, Microsoft would have patched this? I think a more likely reality is that we would see this same outcry. Releasing the tool forces Microsoft to escalate the issue, rather than let it sit on the backburner like it had, and likely would have continued to do.

For the record, MS claims the fuzzer tool didn't work for them in July.

For the record, the tool did work at that time:

"Response from [Microsoft Security Research Center] confirms that these crashes are reproducible with the July 29 fuzzer; unclear why they were unable to replicate them earlier, or follow up on the case"

(from the article)

For the record, MSRC reran the test in December using the July version of the tool, which then worked.

2) How do users gain from the researcher publicly revealing the flaw? Security researchers sometimes publicly announce they found a flaw, they have informed MSFT (or whoever), but decline to reveal. Why couldn't the researcher do that?

He revealed the flaw to them 6 months ago, and waited then. Microsoft decided to do nothing about it at the time. Six months later, he informs them again, but they give him the same response. Given the circumstances, you are then presented with a choice: keep quiet about a vulnerability that you think might be exploited out in the field (and have already kept quiet about for half a year), or go public to try to encourage them to fix it? It's an ethical dilemma to which there's no good answer. That's what makes these issues so challenging. Yes, it's potentially harmful to users if the exploit is not already known, but it's even more harmful to keep quiet about it when a) you don't expect it to be fixed, and b) you believe people are already using the exploit. Drawing public attention to it significantly ups the priority of getting it fixed.

Again, according to MS, when they ran the tool in July, it didn't work.

What isn't entirely clear at this point (for security?) is whether Zalewski told Microsoft enough for them to do something. From the timeline, it sounds like Microsoft tried but wasn't able to reproduce the "potentially exploitable crashes." (How many times has a user's problem gone away as soon as you looked at it yourself?)

As this particular fuzzer is using random input to the browser, the suggestion that they should just keep running it longer is no doubt technically accurate but not particularly helpful. The Microsoft-specific "triggers msie_crash.txt reliably" seed is dated Dec 22.

(Edit: remove unnecessary distinction between random and repeatable; random is repeatable with the same random seed.)
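The edit note above makes the key point: random input is repeatable given the same seed, which is why a fuzzer crash found in July can be replayed months later. A minimal Python sketch of the idea (the seed value and function names here are invented for illustration, not taken from cross_fuzz):

```python
import random

def fuzz_inputs(seed, n=5):
    """Generate n pseudo-random test inputs from a recorded seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 2**32 - 1) for _ in range(n)]

# Running again with the recorded seed regenerates the identical inputs,
# so a crash-triggering sequence can be replayed exactly.
july_run = fuzz_inputs(seed=729)        # 729 is a made-up seed for illustration
december_replay = fuzz_inputs(seed=729)
assert july_run == december_replay      # identical inputs, months apart
print("replay matches:", july_run == december_replay)
```

This is why the "triggers msie_crash.txt reliably" seed file matters: shipping the seed alongside the fuzzer turns an otherwise random process into a deterministic reproduction recipe.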