It's actually backwards: the only two sites in the world that didn't get pwned were sla.ckers and Google, because both disabled the filter as soon as they became aware of the issue.

Why give the heads up to sla.ckers and Google? Well, because I happened to find the issues while experimenting here and in Google Docs.

> I was under the impression that Google had set X-XSS-Protection: 1 without mode=block

IE's filter is enabled by default... it makes no sense to send X-XSS-Protection: 1 without mode=block.
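For readers following along, the three header forms under discussion look like this (semantics as I understand MSIE's filter; the comments are my gloss, not official wording):

```http
X-XSS-Protection: 0              <- filter disabled (what Google and sla.ckers did)
X-XSS-Protection: 1              <- the default: detect and "fix" the page (the exploitable mode)
X-XSS-Protection: 1; mode=block  <- detect and refuse to render the page at all
```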

> they must have changed something
They kept you and lots of Google users safe from.. me.. haha. That was kinda sad, but acceptable, since a couple of friends knew about the issue before it was patched.

> Anyway, the example suffices in favor of my previous arguments
Whatever, dude, but I think you are missing Jonas's point completely. CSP is a policy at the HTML level; his document describes a policy at the JavaScript level (if I understood correctly). You can argue that CSP sucks because it's based on blacklisting, and it is indeed bypassable (I have to find some time to report that bypass.. but I can't remember my password at Bugzilla.. pff).

I, for example, tried to do something like CSP in browserland, called ACS (http://eaea.sirdarckcat.net/testhtml.html), and I ended up having to reimplement the HTML parser as a whole.. that kinda sucked, but well.. what can you do.

In this case Jonas wants to make a policy at the JS level. JSReg (http://tinyurl.com/jsreg) is already trying to achieve that in browserland, with a sandbox; it still lacks some type of configurable rate limit, but that was supposedly going to be part of ACS.

Google Caja also has something similar (http://caja.appspot.com/), but learning how to use it correctly is kinda hard..

> The MSIE XSS Filter bypass is a good example of how trust in your own policy rules
> WILL be broken, and even turned against itself. Google was vulnerable for setting
> a header flag enabling the MSIE XSS Filter, and got pwned through it; despite
> M$'s extensive testing, they made a mistake in only a handful of regexp rules.

dude, you are lost xD, that paragraph is completely wrong.

Google is not enabling the filter whatsoever; it's being disabled, and where it is enabled it's in blocking mode.

I discovered several issues in IE's filter, and you are probably referring to one of the ones thornmaker and I made public at Black Hat.

Yeah, that's what I was referring to. I read the slides, but you've got me confused now. I was under the impression that Google had set X-XSS-Protection: 1 without mode=block and changed it to 0 later on. Why else give Google a heads up? They must have changed something, and the only thing I could think of was that header.

Anyway, the example suffices in favor of my previous arguments. I could dig up a whole can of worms that took a similar approach, but I reckoned that was unnecessary to make the point. But eh, those who don't learn the lessons of history are destined to repeat them.

Based upon this paper: http://www.cse.chalmers.se/~dave/papers/ASIACCS09.pdf

Which is a great waste of time imho. Sorry, it's how I see it. It's a nice intellectual exercise, but that's all. It's far removed from the real world; in hacking, theory isn't practice, and never will be.

Edit to add: And this isn't some case of just 'fixing' what you fail to catch ("Ah, I missed that! Fixed!"), because that is the wrong approach. If it's just for fun, it's okay, but if you aim at a "tamper-proof" solution as proposed in the scientific paper, you will be in for a couple of surprises. I know no one who ever managed to build a solution that catches all possible vectors for the platform the solution was proposed for. It's silly to assume you even can, because you don't know all possible vectors, let alone the vectors that haven't been invented yet. So putting "tamper-proof" aside would be the most honest thing to do imho.

Hi again all! Finally back from China. (Thanks to my Icelandic colleagues for arranging an involuntary extended stay.)

@holiman - Congratulations! I hope everything works out so that I get to meet you.

@Gareth - Too bad, but hopefully there will be other occasions/conferences.. And btw, if you become president, all you get is one of those cheap Norwegian Nobel Peace Prize knock-offs, not the real Swedish deal.. ;D

Skyphire Wrote:
-------------------------------------------------------
> @Jonas Magazinius
>
> Based upon this paper:
> http://www.cse.chalmers.se/~dave/papers/ASIACCS09.pdf
>
> Which is a great waste of time imho. Sorry, it's
> how I see it. It's a nice intellectual exercise,
> but that's all. It's far removed from the real
> world; in hacking, theory isn't practice, and never
> will be.
>

First off, I find it really nice that our research is intriguing enough to start a discussion! I don't mind at all that you think it's a waste of time. I wasn't very convinced by Phung, Sands and Chudnov's paper [ASIACCS'09] myself when I first read it. I think it focuses slightly too much on the policies, which takes focus away from the proposed technique, which is what is interesting. Since then I have gotten a better understanding of what they were trying to accomplish, and together we have been working on improving the security of the wrapper mechanism and providing better support for authoring policies which are "sane".

Of course theory isn't the same as practice. But this is taking theory an important step closer to practice. AOP for JavaScript is already used in practice to enforce security properties, and unless research is done to understand the difficulties that brings, how can we say that it is secure?

To clarify a bit and give a brief summary of the first paper: the browser provides an environment with a number of built-in objects/methods which can be used by user-defined JavaScript. These built-ins can be (mis)used for malicious actions that compromise the integrity of the user's information (stealing the cookie) or degrade the user experience (spamming pop-ups or alerts). The paper proposes a technique that allows site developers to control how these built-ins are used, by defining policies that allow or disallow access to, or execution of, these built-ins.

The paper proposes using existing AOP techniques for JavaScript to wrap the built-ins with a function that executes the policy, ensuring that the arguments conform to the developer's demands. Using AOP to wrap the built-ins means the wrapper holds a unique pointer to the built-in, so any access has to go through the policy.
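A minimal sketch of the idea (my own illustration, not code from the paper; `enforce` and the 3-alerts policy are made-up names, and a plain object stands in for `window` so it runs outside a browser):

```javascript
// Wrap a built-in so every call goes through a policy check first.
function enforce(obj, name, policy) {
  const builtin = obj[name];             // capture a unique pointer to the original
  obj[name] = function (...args) {
    if (!policy(args)) {
      throw new Error(name + " blocked by policy");
    }
    return builtin.apply(this, args);    // the only remaining path to the built-in
  };
}

// Made-up policy: allow at most 3 alerts.
let alertCount = 0;
const env = { alert: (msg) => "ALERT: " + msg };  // stand-in for window
enforce(env, "alert", () => ++alertCount <= 3);

env.alert("one");   // goes through the policy, then the real built-in
env.alert("two");
env.alert("three");
// a fourth env.alert(...) would now throw "alert blocked by policy"
```

Because `builtin` lives only in the wrapper's closure, user code that merely calls `env.alert` cannot reach the original directly.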

If we can just guarantee the integrity of the wrapper and the policies, then it's up to the site developer to provide the policies that should apply.

We have discovered and fixed several flaws in the implementation described in the paper above. But the example above is not a flaw in the method per se; it's an example of a bad policy. In general we cannot prevent the user from writing bad security policies, but we can make it easier to write good ones, and this is one of the things we will present in the paper at OWASP AppSec'10. In particular, we have noted that the problem of different aliases for the same point-cut (policy application point) should be handled by the library and not by the policy writer. As an example, the point-cut given could be "window.alert", but that is just an alias for "Window.prototype.alert". In the paper we argue that, in a fresh browser context, any built-in has a fixed set of static aliases. If the policy author specifies any one alias, the policy should automatically apply to all of them, and hence we can ensure that the correct built-in is wrapped.
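To illustrate the alias problem concretely (again my own sketch with stand-in objects, since `window` isn't available outside a browser): wrapping only one alias leaves the other as an unguarded path to the built-in.

```javascript
const WinProto = { alert: (msg) => "ALERT: " + msg }; // stands in for Window.prototype
const win = Object.create(WinProto);                  // stands in for window

// The policy writer wraps only the "window.alert" alias:
win.alert = function () { throw new Error("blocked by policy"); };

// But the "Window.prototype.alert" alias is still the raw built-in:
WinProto.alert("pwned");   // returns "ALERT: pwned", bypassing the policy
```

This is why the library, which can enumerate the static aliases, should install the wrapper on all of them rather than leaving it to the policy author.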

> // Blacklisting sucks and should be put to rest.
> // There are probably many more ways to circumvent this policy-based approach;
> // Think objects, css, the list is endless (see the many sla.ckers threads
> // about obfuscation etc.), and incomplete due to its unknown/undocumented
> // attack landscape.

I agree that blacklisting sucks, but that's not what we are trying to achieve. Regardless of how the JS is introduced (html, css, events...), and regardless of the level of obfuscation, it all uses the same JS engine and the same environment. By ensuring that the site-specific security-critical built-ins are wrapped with policies that cannot be affected by user code, we can provide a robust way to enforce security policies which is not sensitive to, e.g., the level of obfuscation in the code.

> Edit to add: And this isn't some case of just
> 'fixing' what you fail to catch, like; Ah I missed
> that! fixed! because that is the wrong approach.
> If it's just for fun, it's okay, but if you aim at
> a "tamper-proof" solution as proposed in the
> scientific paper, you will be in for a couple
> surprises. I know no-one who ever managed to build
> a solution that catches all possible vectors for a
> platform the solution was proposed. It's silly to
> assume you even can, because you don't know all
> possible vectors, or even vectors that aren't
> invented yet. So putting "tamper-proof" aside
> would be the most honest thing to do imho.

As far as the wrapper goes, we pretty much have an implementation that is "tamper-proof" in the sense that it is not affected by user code. There have been a lot of difficulties (e.g. built-ins can be restored from a fresh window context like an iframe), but we are working them out.
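One class of such difficulties can be simulated outside the browser (a sketch of my own, not the paper's attack; in a real page the fresh copy would come from a new iframe's contentWindow): if a wrapper merely shadows the built-in, user code can delete the shadow and get the original back.

```javascript
const Proto = { alert: (m) => "ALERT: " + m };  // stands in for Window.prototype
const win = Object.create(Proto);               // stands in for window

// Naive wrapper installed as an own property that shadows the prototype:
win.alert = function () { throw new Error("blocked by policy"); };

delete win.alert;          // user code removes the shadowing wrapper...
win.alert("restored");     // ...and the lookup resolves to the raw built-in again
```

A robust wrapper has to replace the built-in at every point it can be resolved from, not just shadow it.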

When it comes to policies, it is a question of how we can ensure that the policies supplied by the developer can only have one meaning (declarative policies) and cannot be affected by user code. The developer should be able to write any policy, regardless of its usefulness (like "only 3000 alerts are allowed"), but its meaning should not be influenced by user code (like "reset the counter and keep on alerting"). As an example of how a bad policy can be influenced, consider your guess at the AllowedURL policy. Your policy is vulnerable to a function subversion attack as such:
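A minimal sketch of such a function subversion attack (`allowedURL`, `guardedLoad`, the URLs and the `load` stand-in are all made up for illustration; they are not the actual policy under discussion):

```javascript
const load = (url) => "loaded " + url;   // stand-in for a security-critical built-in

// Hypothetical AllowedURL-style policy: it relies on a String method
// that is looked up at call time.
function allowedURL(url) {
  return url.indexOf("https://trusted.example/") === 0;
}

function guardedLoad(url) {
  if (!allowedURL(url)) throw new Error("blocked by policy");
  return load(url);
}

// Function subversion: user code redefines the method the policy depends on.
String.prototype.indexOf = function () { return 0; };

guardedLoad("https://evil.example/");  // now passes the check: the policy is subverted
```

The fix is to have the policy capture its own private copies of every function it depends on before user code runs, which is exactly what the declarative-policy work aims to make automatic.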

> We have discovered and fixed several flaws in the implementation described in
> the paper above. But the example above is not a flaw in the method per se;
> it's an example of a bad policy. In general we cannot prevent the user
> from writing bad security policies, but we can make it easier, and
> this is one of the things we will present in the paper at OWASP AppSec'10.

No one can write a good policy based upon blacklisting*; no one ever will, because
no one knows all attack angles. It can only reduce, not stop. And the flipside of
reducing something is that it might increase or enable another attack that relies
on the very attack one stops. That's why it's a black box: you never know how
systems will interact.

*A simplified policy example, but based upon blacklisting, because it sets a watch on only a limited set of properties.

Assuming the developer knows it all, which he won't, he might even enable policy
rule x and rule z and thereby influence policy rule y, which might lead to a whole
other compromise. You just don't know how an attack will emerge. I think a
developer who even takes notice of this library should spend his time
fixing/preventing holes instead of layering unknown attacks and attack-mitigation
layers on top of each other.

A recent example;

The MSIE XSS Filter bypass is a good example of how trust in your own policy rules
WILL be broken, and even turned against itself. Google was vulnerable for setting
a header flag enabling the MSIE XSS Filter, and got pwned through it; despite
M$'s extensive testing, they made a mistake in only a handful of regexp rules.

holiman Wrote:
-------------------------------------------------------
> This thread contains postings from no less than
> three guys who will be presenting at the AppSec
> conference in Stockholm! Cool! (sirdarckcat,
> thornmaker and jonas)
>

Awesome! And I hope .mario will also be attending, since he won one of their contests. How about you, holiman? Gareth, will you be there? I just met and had a really good time with Marco Balduzzi at the ASIACCS 2010 conference in Beijing (where I'm currently stuck). And there are a number of other people I'm looking forward to meeting. Since I'm a local, I can guarantee that whoever attends will have a great time!

> @Jonas : I read the paper by Phung/Sands/Chudrov
> about "Lightweight Self-protecting javascript"
> last summer and thought that you guys would
> probably find some of these sla.ckers-threads
> pretty fun...

Yeah, it's a nice idea (even though there are some serious difficulties to overcome), and this year we will be presenting a follow-up paper where we deal with a number of flaws in the wrapping technique used. As soon as our updated prototype is finished I will post a contest to break it! I'm sure some of you will come up with new flaws that I haven't thought of.

> they kept you and lots of google users safe from.. me.. haha,
> that was kinda sad, but acceptable since a couple of friends
> knew about the issue before it was patched.

What a pompous statement. I don't even use that fucked-up MSIE browser in the first place. I bypassed that filter in the first week it was released, but that doesn't mean I should jump on the media bandwagon or whore myself out to corporate fucktards like Microsoft to prove it. Next time I'll be less gracious in pulling up a vector from someone else and present my own, okay? But then I'd probably get accused of trumpeting my own vectors.

> whatever dude, but I think you are missing Jonas's point completely.
Yeah, whatever 'dude'.