Firebug Goes Evil

published: April 4th, 2007

Firebug is a very powerful JavaScript debugger for Firefox. I love it! It has tons of useful features: a dynamic console, a DOM tree explorer, a CSS viewer/editor, a script explorer and, my favorite, a network monitor where I can see all Flash, XMLHttpRequest, JS and image requests.

Firebug is mainly used by web developers to find bugs in their code, but it can also be used by security people like me to find and explore various client-side and server-side vulnerabilities. Firebug is my best buddy. I even partially based Technika, the bookmarklets powertool, on top of Firebug.

Unfortunately, Firebug suffers from a rather simple but quite dangerous vulnerability. I have discussed similar issues before. In general, browsers try their best to prevent common vulnerabilities from creeping into their code. However, that's not the case with browser extensions. Very often, browser extension authors do not consider the security aspects of their work, i.e. extensions are not carefully inspected for security vulnerabilities. Because of this, incidents occur. IMHO, the next wave of browser attacks will target exactly this.

In this post I am going to disclose a vulnerability in Firebug which can be used by attackers to gain control of every system where the extension is installed. Of course, the user needs to visit a malicious page first, which means that the attack surface is greatly reduced. However, given that the largest user base of the Firefox browser is geeks and that Firebug is a top extension at http://addons.mozilla.org, attackers can cause quite a lot of trouble.

The vulnerability is of the Cross-zone (or Cross-context) scripting type, where a script from a web page is injected into the zone of the browser, also known as the chrome, or into the zone of the file: protocol. In both cases the result is quite devastating, although the second is a bit less critical than the first. Remote scripts in the browser are restricted by a sandbox, which means that everything loaded over http: or https: is contained. Browser extensions, on the other hand, make use of the chrome: protocol. This protocol is not restricted at all: everything is allowed. In that respect, browser extensions are trusted. However, if a remote script tricks the browser into executing JavaScript expressions in the chrome: zone, that script can take control of the entire chrome and also of the underlying operating system, because command execution and read/write file access operations are then allowed.

In order to cause Cross-zone scripting in Firebug you need to do the following:

console.log({'<script>alert("bing!")</script>':'exploit'})

If you put this JavaScript expression into a page and open it in the browser while Firebug is on, you will be greeted with an alert box. This is not very interesting, but there is a lot more you can do than that. For example, attackers can easily inject the following function into the browser chrome:

The function runFile allows execution of arbitrary files. With that function declared in the browser chrome, attackers can call console.log a few more times to spawn any program they want, or even silently install browser extensions, not to mention that they will be able to read and write the file system too. The possibilities for evilness are endless.
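The root cause here is a classic one: the console rendered object keys into chrome HTML without escaping them first. The following stand-alone sketch illustrates the bug class with a hypothetical renderer (this is an illustration of the pattern, not Firebug's actual code):

```javascript
// Hypothetical console renderer that builds HTML from an object's keys
// without escaping them -- the bug class behind this vulnerability.
// This is an illustration, not Firebug's actual code.
function renderNaive(obj) {
  let html = "";
  for (const key of Object.keys(obj)) {
    // BUG: the key is inserted verbatim, so a key like
    // '<script>...</script>' becomes live markup in the chrome document
    html += "<div>" + key + ": " + obj[key] + "</div>";
  }
  return html;
}

// The fix: escape all text before inserting it into HTML
function escapeHtml(text) {
  return text.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function renderSafe(obj) {
  let html = "";
  for (const key of Object.keys(obj)) {
    html += "<div>" + escapeHtml(key) + ": " + escapeHtml(String(obj[key])) + "</div>";
  }
  return html;
}

const payload = { '<script>alert("bing!")</script>': "exploit" };
console.log(renderNaive(payload)); // emits a live <script> tag
console.log(renderSafe(payload));  // the script tag is neutralized
```

Because the naive renderer runs with chrome privileges, the injected markup executes in the trusted zone, which is exactly the cross-zone jump described above.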

There is a catch though: the Cross-context scripting vector is very tiny. In order to exploit the vulnerability, I had to go through some contortions, like dynamically composing the malicious payload in a string and then evaluating the string's content inside the chrome. I wrote two (1, 2) Proof of Concept exploits that you can try out.

It is highly recommended that you disable Firebug until this issue is fixed, which I have no doubt will happen quite soon.

Interesting, but the exploit works on my side. So you are saying that you receive errors on your side because I need to put the input elements inside a form? Give me an example. I’ve tried the exploit on a couple of machines and it worked flawlessly.

I’ve always run firebug, but since it causes a bit of a slow down anyway, I always disable it by default except on sites where I explicitly enable it. Ironically, gnucitizen was one of those sites since I was playing around with carnival a while back. The list of allowed domains is back down to domains I control… ;-)

Good catch, pdp. I think it’s time to move my browsing habits into a sandbox… /me sighs.

This is something David Kierznowski suggested as well. It seems that Firebug’s disabling features work on the http: and https: protocols but not on file:. On file: Firebug is enabled and you cannot do anything about it, unless you completely disable the extension from the Firefox Add-ons dialog.

Thor Larholm has identified another vulnerability in Firebug which is similar in nature to my finding. This vulnerability affects 1.0.3, which means that you should disable Firebug for now. For more information about the new issue click here.

The above exploit has been fixed as well. Joe is very responsive and I thought it would be worth posting his response on the last link here as well:

Joe Hewitt says:
April 6th, 2007 at 3:44

I have fixed this issue and released 1.0.4.

As you suggested, I now escape all text before inserting it into HTML, rather than leaving it up to the caller. I’ve also added support for disabling file: urls.

I hope there aren’t any more vulnerabilities to be found, but if there are, please give me a day to patch it before you publish. I do appreciate you taking the time to make Firebug more secure, but it’s better for everyone to have the patch surface before the exploit.

It is a good thing that Firefox has an automatic update system, so every Firebug user should be secure within a few days.

We often need to take extreme routes to make a point, otherwise nobody would listen. However, I knew that Joe Hewitt would patch Firebug very quickly, because the extension has one of the cleanest source code structures I have ever seen.

I don’t think there is anything extreme about publishing a vulnerability when you find it. Sure, Joe could have been away for the Easter holiday visiting his family and therefore not been able to patch it immediately, but I fail to see how that is my concern. In that regard I have treated him no worse or better than I have treated Microsoft, Mozilla or Valve in the past.

This is research that pdp and I have independently performed. We’re not employees of Microsoft or Firebug; instead, we are altruistically researching and publishing the very things that others are also researching – but keeping private.

You are not “altruistically researching” anything when you don’t consider what’s best for the end users.

There aren’t just two parties involved in these bug reports — the software developer and the security researcher — there is a third group with interests at stake: the thousands, if not millions of end users who are vulnerable.

You’re right that you don’t owe Joe Hewitt anything (though I would opine it would be nice if you extended common courtesy). And you also deserve credit for finding this security hole.

But you do a huge disservice to all the everyday users like me when you dismiss the possibility that “Joe could have been away for the Easter holiday visiting his family and therefore not been able to patch it immediately”.

I’m looking for a way to detect the version of Firebug running *without* going into Firefox. Registry Keys, file version of specific files, file version in readme files, or anything like that. I’m trying to write a detection of vulnerable versions for a network scanner.

I don’t think anyone here is saying you should bury the research. We all know that security through obscurity is a fallacy. However, asking you to delay publishing an exploit for a day or two is certainly not unreasonable. It helps protect the users more than the author.

Premature publishing of explicit exploit information is one of the ways that “zero-day” problems become widespread. As a user of Firebug I’d rather you had talked to Joe first. Actually, as a user of any software, I’d rather you give its author a chance to patch before publishing exploits.

If you feel you must post immediately, post the information that the vulnerability exists and some basic details without the full “how-to” and follow up later with the full disclosure.

There are certainly times where it’s appropriate to use disclosure as a means to force action, but give the author a reasonable chance to respond. It’s better for all of us.

James.

Previously Thor Larholm said:

I don’t think there is anything extreme about publishing a vulnerability when you find it. Sure, Joe could have been away for the Easter holiday visiting his family and therefore not been able to patch it immediately, but I fail to see how that is my concern. In that regard I have treated him no worse or better than I have treated Microsoft, Mozilla or Valve in the past.

This is research that pdp and I have independently performed. We’re not employees of Microsoft or Firebug, instead we are altruistically researching and publishing the very things that others are also researching – but keeping private.

Yes, I think you should inform the owner before you publish, or you are just contributing to the problem. That would make you just as guilty as the people causing all the problems on the net today. But on the other hand, it’s people like you that help keep the net safe. Just be responsible with your powers: are you using them for good or evil?