Krebs on Security

In-depth security news and investigation

Bugzilla Zero-Day Exposes Zero-Day Bugs

A previously unknown security flaw in Bugzilla — a popular online bug-tracking tool used by Mozilla and many of the open source Linux distributions — allows anyone to view detailed reports about unfixed vulnerabilities in a broad swath of software. Bugzilla is expected today to issue a fix for this very serious weakness, which potentially exposes a veritable gold mine of vulnerabilities that would be highly prized by cyber criminals and nation-state actors.

The Bugzilla mascot.

Multiple software projects use Bugzilla to keep track of bugs and flaws that are reported by users. The Bugzilla platform allows anyone to create an account that can be used to report glitches or security issues in those projects. But as it turns out, that same reporting mechanism can be abused to reveal sensitive information about as-yet unfixed security holes in software packages that rely on Bugzilla.

A developer or security researcher who wants to report a flaw in Mozilla Firefox, for example, can sign up for an account at Mozilla’s Bugzilla platform. Bugzilla responds automatically by sending a validation email to the address specified in the signup request. But recently, researchers at security firm Check Point Software Technologies discovered that it was possible to create Bugzilla user accounts that bypass that validation process.

“Our exploit allows us to bypass that and register using any email we want, even if we don’t have access to it, because there is no validation that you actually control that domain,” said Shahar Tal, vulnerability research team leader for Check Point. “Because of the way permissions work on Bugzilla, we can get administrative privileges by simply registering using an address from one of the domains of the Bugzilla installation owner. For example, we registered as admin@mozilla.org, and suddenly we could see every private bug under Firefox and everything else under Mozilla.”

Bugzilla is expected today to release updates to remove the vulnerability and help further secure its core product. Update, 1:59 p.m. ET: An update that addresses this vulnerability and several others in Bugzilla is available here.

“An independent researcher has reported a vulnerability in Bugzilla which allows the manipulation of some database fields at the user creation procedure on Bugzilla, including the ‘login_name’ field,” said Sid Stamm, principal security and privacy engineer at Mozilla, which developed the tool and has licensed it for use under the Mozilla public license.

“This flaw allows an attacker to bypass email verification when they create an account, which may allow that account holder to assume some privileges, depending on how a particular Bugzilla instance is managed,” Stamm said. “There have been no reports from users that sensitive data has been compromised and we have no other reason to believe the vulnerability has been exploited. We expect the fixes to be released on Monday.”
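From Stamm’s description, the flaw behaves like a classic mass-assignment bug: the account-creation code accepted extra submitted parameters, including the “login_name” database field, so the stored login name could differ from the address that was actually verified. The sketch below is purely illustrative — Bugzilla itself is written in Perl, and the function and field names here are hypothetical, not Bugzilla’s actual code — but it models the vulnerable pattern and the usual fix (an allowlist of fields):

```python
# Illustrative sketch only -- not Bugzilla's actual code. It models the
# class of bug described above: a signup handler that copies every
# submitted parameter into the new user record ("mass assignment"),
# letting an attacker override fields such as login_name even though
# only the "email" address was verified.

ALLOWED_SIGNUP_FIELDS = {"email", "realname"}  # hypothetical allowlist

def create_user_unsafe(params):
    # Vulnerable pattern: trust every submitted field.
    return dict(params)

def create_user_safe(params):
    # Fixed pattern: copy only explicitly allowed fields, and derive
    # login_name from the address that was actually verified.
    user = {k: v for k, v in params.items() if k in ALLOWED_SIGNUP_FIELDS}
    user["login_name"] = user["email"]
    return user

attack = {
    "email": "attacker@example.com",      # address the attacker controls
    "realname": "Eve",
    "login_name": "admin@mozilla.org",    # injected, never verified
}

print(create_user_unsafe(attack)["login_name"])  # admin@mozilla.org
print(create_user_safe(attack)["login_name"])    # attacker@example.com
```

In the unsafe version the forged login_name survives into the database; in the safe version only the verified address does.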

The flaw is the latest in a string of critical and long-lived vulnerabilities to surface in the past year — including Heartbleed and Shellshock — that would be ripe for exploitation by nation state adversaries searching for secret ways to access huge volumes of sensitive data.

“The fact is that this was there for 10 years and no one saw it until now,” said Tal. “If nation state adversaries [had] access to private bug data, they would have a ball with this. There is no way to find out if anyone did exploit this other than going through user list and seeing if you have a suspicious user there.”

Like Heartbleed, this flaw was present in open source software to which countless developers and security experts had direct access for years on end.

“The perception that many eyes have looked at open source code and it’s secure because so many people have looked at it, I think this is false,” Tal said. “Because no one really audits code unless they’re committed to it or they’re paid to do it. This is why we can see such foolish bugs in very popular code.”

Update, Oct. 7, 12:44 p.m. ET: Mozilla issued the following statement in response to this story:

Regarding the comment in the first paragraph: While it’s a theoretical possibility that other Bugzilla installations expose security bugs to “all employees,” Mozilla does not do this and as a result our security bugs were not available to potential exploiters of this flaw.

At no time did Check Point get “administrative privileges” on bugzilla.mozilla.org. They did create an account called admin@mozilla.org that would inherit “netscapeconfidential” privileges, but we stopped using this privilege level long before the reported vulnerability was introduced. They also created “admin@mozilla.com” which inherited “mozilla-employee” access. We do actively use that classification, but not for security bugs.

This entry was posted on Monday, October 6th, 2014 at 9:06 am and is filed under Latest Warnings, Time to Patch.

And how many times did Microsoft release critical fixes for all supported Windows versions, from Windows XP through Windows 8? Or am I just imagining that this spans more than 10 years of OS development?

“…The fact is that this was there for 10 years and no one saw it until now…”

At least, no one reported it till now. That doesn’t mean that no one has used it till now. If Joe Hacker has an undetectable shorthand method of probing everyone’s sites to see which hosts are vulnerable to what – I don’t think he’s going to make a lot of noise about it.

This article is full of inaccuracies! The vulnerability doesn’t let you access sensitive bugs; it only lets you set the email address on your account to one that doesn’t belong to you. But such accounts do not get special privileges by default, which means you cannot access confidential bugs.
And mentioning that Bugzilla is going to release a fix later today is just irresponsible of you. You could have waited until the releases were available from the Mozilla FTP website before disclosing the vulnerability. You just wanted to make a buzz, right?

@sven: because Mr. Krebs doesn’t know all the details, and takes the risk of disclosing a security vulnerability without giving Bugzilla admins a chance to upgrade their installations. This is just the wrong way to talk about security.

@Mike: I know what the article says, but this is plain wrong! I’m a Bugzilla core developer, and I got confirmation from Mozilla admins themselves that neither @mozilla.org nor @mozilla.com accounts can see security bugs by default. Check Point clearly wants to make a buzz too.

I stand corrected: I rechecked, and we were not able to see all Firefox bugs. We were definitely members of multiple “confidential” groups I will not name here, and had permission to edit bugs and perform various actions. We did not dive deeper than that, as it was not a direct research goal.

Now we agree. You indeed couldn’t see any of the security bugs, because Mozilla’s Bugzilla is not configured that way (i.e., to automatically grant such privileges based on your domain name alone). What is true is that you could see *some* of the bugs restricted to certain groups.
I’m going to reword what I said before: *by default*, Bugzilla doesn’t give you any special privileges based on the domain name of your email address. But an admin is free to configure an installation to automatically grant certain permissions to all users belonging to a given domain. In Mozilla’s case, this involves e.g. the @mozilla.com and @mozilla.org domains. But the article wasn’t clear about that, and led everybody to think you could magically get admin privileges and see all security bugs, which is not true.
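The domain-based configuration described here — where an installation automatically grants group membership from the email domain — can be sketched as follows. This is a hypothetical illustration in Python, not Bugzilla’s actual code or Mozilla’s real configuration; the group names are taken only from the statements quoted earlier in this thread:

```python
# Hypothetical sketch of per-domain group assignment. An installation
# configured this way is exactly what makes the forged-email flaw
# dangerous: registering with a spoofed address inherits the groups
# mapped to that domain, even though the address was never verified.

DOMAIN_GROUPS = {
    "mozilla.com": ["mozilla-employee"],
    "mozilla.org": ["netscapeconfidential"],
}

def groups_for(email):
    # Extract the domain and look up any automatically granted groups.
    domain = email.rsplit("@", 1)[-1].lower()
    return DOMAIN_GROUPS.get(domain, [])  # default: no special groups

print(groups_for("admin@mozilla.org"))   # ['netscapeconfidential']
print(groups_for("user@example.com"))    # []
```

On a stock installation the mapping is empty, so a forged address yields no extra privileges — which matches the commenter’s point that the exposure depends entirely on how a given instance is configured.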

This isn’t about Linux vs. Mac (you can run Bugzilla on OS X too) vs. Windows.

And it isn’t really about open-source vs. closed-source (you can and will find bugs in closed-source software too).

It is true that the idea “given enough eyeballs all bugs are shallow” is unfortunately false. At best “given enough eyeballs with excellent domain specific knowledge/understanding, bugs can be fixed given additional time and resources”.

A bug like this one would be much harder to find without the source, since you would have to have a hunch that a given area might be vulnerable, instead of some other area. There are lots of entry-points to modern web applications, which means that there are lots of places where you could choose to focus your efforts. With the source, you can more easily identify areas worth probing.

Probably either crashed from an induced DDoS-level of demand by developers to see if there’s a fix and users to see whether it’s a problem that affects them, or it’s been taken offline until there is a fix.

I was an IT and networking professional until 2003 when I moved away from the US. That’s a lot of time to forget, and to watch things move forward without you. I’m a somewhat savvy layman now.

I started paying attention to security issues again several years ago when the Stuxnet story broke. It gets worse and worse every year. Every week or every month something new crashes in on us.

People keep adding on layer after layer of techno complexity to society, and people keep consuming all that is new – while criminals are jumping on every opportunity to work the technology to their gain.

I don’t know how people with no real IT background can possibly keep up with it. Most people who use computers and smartphones are stationary targets for cyber criminals.

It’s not going to end. Cyber crime is analogous to jihad across North Africa and the Middle East. Or border security against drug trafficking. It’s like climate change.

“The perception that many eyes have looked at open source code and it’s secure because so many people have looked at it, I think this is false,” Tal said. “Because no one really audits code unless they’re committed to it or they’re paid to do it.”

Moronic statement.

Code that has been looked at by many eyes is more secure than code that hasn’t. Note this doesn’t mean there are no bugs.

Whether the code is open source or not makes no difference to people being paid to audit it or being committed to it (whatever that means).

The fact that this bug has been found is proof that someone is looking for bugs.

People who see the finding and reporting of bugs as a sign of insecurity have such a skewed perception of the world that it’s very hard to know what to say without resorting to verbal abuse.

This sort of thing happens just as often in the world of closed source. Only it happens without your knowledge; without your understanding of the issue and how it affects you; without giving you any ability to apply temporary workarounds or prioritise your updates. If you think this is an open source problem you are naive to the point of retardation.

“The perception that many eyes have looked at open source code and it’s secure because so many people have looked at it, I think this is false,” Tal said. “Because no one really audits code unless they’re committed to it or they’re paid to do it.”

OpenBSD is audited constantly for all bugs, including security bugs, and is probably one of the safest common operating systems at this time.

Then there is the live CD or USB install, as suggested by Brian. Your only vulnerability then is the BIOS or router.

An even safer OS is Qubes OS, created by Joanna Rutkowska, a top security analyst. Every aspect of the OS is isolated in a separate domain with restricted privileges.