Monday, December 23, 2013

TL;DR: In Firefox, regexps with 999,998+ groups return false regardless of whether the given string actually matches. It looks like a performance optimization, but it can theoretically lead to security issues. I believe it should raise an exception instead of silently fooling the code.

Unrelated prehistory: a few weeks ago I was trying to XSS the m.facebook.com location.hash validation with a timing attack: (/^#~?!(?:\/?[\w\.-])+\/?(?:\?|$)/).test(location.hash)&&location.replace(...
I was trying to craft *long enough* regexp argument so I could get a spare second to replace the location.hash just before location.replace happens.
I'm no browser expert and don't comprehend how it works at a low level. It didn't work out anyway, because it is single-threaded (with setTimeout the timing attack works fine; try to XSS this page.)

The side-research is more interesting!


When I was testing FF I noticed that for huge payloads JS simply returns false. For "/a/a... N times" it was still true, but for N+1 times, all of a sudden: false.
Yeah, it was as if JS said "TL;DR, hopefully it's false".

Wait, what?! Regexp can't just return a wrong result only because of the argument's length! Let's double check:
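A minimal probe sketch (hypothetical; the pattern below is a simplified version of the Facebook validation above, and the exact threshold depends on the engine build): feed the same regexp inputs of growing length and watch for a silent flip from true to false.

```javascript
// Hypothetical probe: a simplified validation pattern tested against
// inputs of increasing length. On an affected Firefox build the result
// silently flips to false past an internal limit, instead of throwing.
const re = /^(?:\/?[\w.-])+\/?$/;

function probe(n) {
  // builds a string like "/a/a/a..." that should always match
  return re.test('/a'.repeat(n));
}

probe(10);     // small input: the correct answer (true)
probe(500000); // huge input: an affected engine may report false here
```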

Why is it a bug?
1) An app is supposed to have Read & Write permission to access DMs. With this shortcut you can bypass that protection.
2) DMs are easier to use for spam; the user will barely notice it.
3) DMs also don't show whether a message was sent with the official client or a 3rd-party OAuth client, which is great for phishing.

It didn't help. Now I want people to take the issue seriously. This is a huge unsolved problem.

Developers have a tool which they don't know how to use properly, so they use it however feels convenient. This leads to security breaches. It reminds me of the mass-assignment bug (sorry, I have to mention it every day to feel important).

How does it work?
The attacker redefines RJS function(s), asks an authorized user to visit a specially crafted page, and leaks data (which the user has access to) along with their authenticity_token. Very similar to how JSONP leaking works.
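The leak can be sketched in a few lines (all names here are illustrative, not from any real app): if the server's RJS response calls a DOM helper with server-rendered markup, an attacker page that sourced the response cross-site just redefines that helper first.

```javascript
// Attacker page sketch (illustrative names). The victim endpoint is
// assumed to respond with executable JS such as:
//   $("#messages").html("<li>secret DM ... authenticity_token=...</li>");
// Redefining $ before sourcing that script captures the markup.
let leaked = null;

globalThis.$ = function (selector) {
  return {
    html: function (markup) { leaked = markup; } // capture instead of render
  };
};

// In a real attack the next step would be:
//   <script src="https://victim.example/messages.js"></script>
// Here we simulate what that response would execute:
$("#messages").html("<li>secret</li>");
```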

Some people are +1, some found it "useful in our projects" (= vulnerable), and DHH is against my proposal:

Not only are js.erb templates not deprecated, they are a great way to develop an application with Ajax responses where you can reuse your server-side templates. It's fine to explore ways to make this more secure by default, but the fundamental development style of returning JS as a response is both architecturally sound and an integral part of Rails. So that's not going anywhere.

Saturday, November 9, 2013

In my opinion the usefulness of an XSS on a random website is next to nothing. What are you going to do with an XSS on your local school website? Stil da kookies?
So either you look for an XSS on a major website, or you find tons of random XSSes hoping some of them will turn out to be useful.

site.com/#<img src=x> for $(location.hash) might work too, but you need to catch the alert execution at runtime.

please share other lazy tricks you know!

So I took the first 1k of Alexa and apparently 10+ are vulnerable (wikia, rottentomatoes etc). The script is not perfect (if the status code is 404 it doesn't check the body); feel free to improve it. For 1kk the output might be 10k+ vulnerable websites with PR higher than 1. PROFITZ.

And make some money. For (black) SEO purposes you can make the Google bot visit popular-website.com/<a href=//yoursite.com>GOOD</a> etc.
Also send some cheap human traffic to the XSSed pages and dump the page (with tokens in it) + document.cookie to analyse / data-mine it, hoping to find something useful.

So the recipe is simple: take that 1 million, create a lazy XSS pattern (consider more comprehensive patterns, ?redirect_to=..) then use the XSSed page for SEO / stealing cookies.

So an innocent open redirect on logout will simply reveal the access_token of the current user when we set redirect_uri to Client/logout?to=3rdparty

That's really bad. Users won't be happy to see your client posting spam on behalf of their accounts.

What about open redirect in Rails?

As I previously mentioned in my blog posts, Rails loves playing with arguments. You can send :back, or "location", or :location => "location", or ->{ some proc }, or :protocol => "https", :host => "google.com", etc. So because of this line

redirect_to params[:to]

attacker has multiple ways to compromise the redirect.

1. params[:to][0] == '/' is not good protection, since //google.com is a legit URL. (By the way, if you use a regexp avoid the ^$ anchors, luke; in Ruby use \A and \z.)

2. XSS by sending ?to[protocol]=javascript:payload()//&to[status]=200

3. Discovered yesterday, we can see plain HTTP headers in response body by sending ?to[status]=1
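Why the leading-slash check from point 1 fails can be shown in a few lines (a sketch; the stricter check is my own suggestion, not from the post):

```javascript
// The naive check: "local paths start with /".
function naiveLocal(to) {
  return to[0] === '/';
}

// A stricter sketch: also reject protocol-relative URLs (//host)
// and the backslash variant some browsers normalize (/\host).
function stricterLocal(to) {
  return to.startsWith('/') && !to.startsWith('//') && !to.startsWith('/\\');
}

naiveLocal('//google.com');    // true, yet browsers treat it as an absolute URL
stricterLocal('//google.com'); // false
stricterLocal('/account');     // true
```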

Frankly, this bug is really common. I found it in more than 5 popular Rails startups, and I've only started checking.
If you need a careful security audit (or a brutal pentest) of your app, visit http://www.sakurity.com/

Vulnerability
From time to time we need to translate news/articles from trustworthy foreign websites, official news feeds, etc. Can we break into the GT (Google Translate) system and modify its output for a prank or something worse? Yes, because GT loads all untrusted "user content" from the GUC domain (googleusercontent.com). Thus GUC guc2.html can modify the content of GUC kremlin, thanks to the SOP (same origin policy): both pages share one origin.

But there were many interesting "pitfalls" I had to solve on my way to this awesome PoC:

By default Google serves GUC pages in "Safe Mode": sandbox="allow-scripts allow-forms allow-same-origin". This means GUC pages cannot open new windows. So I used the "opener" trick: I opened guc2.html?i in a new window and replaced the current one with GT kremlin. Now there is a bridge between GUC guc2.html and GUC kremlin via parent.parent.opener.frames[0]

GUC guc2.html cannot close GT guc2.html because this action is also restricted by the sandbox attribute. This is why I framed GT guc2.html under guc2.html?i (I really don't know why GT doesn't send X-Frame-Options; it helped me a lot)

I need at least one click to open guc2.html?i, otherwise the popup will be blocked. When you hover over the link above you see an innocent Google Translate path; in the background, the onclick event opens one more window (guc2.html?i).

guc2.html?i creates an iframe (opacity=0!) with GT guc2.html, and waits for "onmessage" event to close itself.

As soon as GUC guc2.html is loaded it starts a setInterval trying to modify GUC kremlin using this elegant chain: parent.parent.opener.frames[0]. When the modifications are done it invokes parent.parent.postMessage('close','*') to let guc2.html?i know that the business is done.

The second window closes in 1-2 seconds, and the first window has GT kremlin with modified content. Enjoy reading Mango news.

Sunday, July 21, 2013

I've been ranting about cookies before, but now I've realized the most general flaw of cookies: they ignore Origins.
Origin = Protocol (the way you talk to server, e.g. http/https) + Domain (server name) + Port. It looks like http://example.org:3000 or https://www.example.org

Obviously, cookies are the only adequate way to authenticate users.
Why is localStorage not for authentication? It's accessible on the client side, so authentication data can be leaked; plus it's not sent to the server automatically, you need to pass it manually with a header/parameter.

Origins made web interactions secure and concise:
localStorage from http://example.com can't be read on http://example.com:3000
DOM data of http://example.com can't be accessed on http://example.com:3000
"XMLHttpRequest cannot load http://lh.com:4567/. Origin http://lh.com:3000 is not allowed by Access-Control-Allow-Origin"

and so on... But then comes the cookie design!

Cookies got 99 problems, and XSS is one.
1) In fact, "most of the time most of the people" don't need access to cookies from the client side. The 'httpOnly;' flag was created to restrict JavaScript access to a cookie value (such a confusing name, isn't it? Only for http://, or what?). Also, who ever actually needed the 'path=' option? One more layer of confusion.

Tip: Store csrf_token inside of the session cookie (literally) and refresh it after login.

3) The "port" thing.
This is not a solved problem yet. If someone owned a ToDo app at https://site.com:4567 they would get all the cookies for https://site.com with every request. Yes, even with "httpOnly;". Yes, even with "Secure;". Cookies just don't care about ports.
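A toy model of the matching rules (a sketch of the behavior described above, not any browser's actual code) makes the problem visible: the port is simply never consulted.

```javascript
// Toy cookie-matching model: domain and scheme are checked, port is not.
function cookieSentTo(cookie, urlString) {
  const url = new URL(urlString);
  const domainOk = url.hostname === cookie.domain ||
                   url.hostname.endsWith('.' + cookie.domain);
  const secureOk = !cookie.secure || url.protocol === 'https:';
  return domainOk && secureOk; // note: url.port never appears here
}

const sid = { domain: 'site.com', secure: true, httpOnly: true };

cookieSentTo(sid, 'https://site.com/');      // true
cookieSentTo(sid, 'https://site.com:4567/'); // also true: ports are ignored
cookieSentTo(sid, 'http://site.com/');       // false: Secure blocks plain http
```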

How can we fix cookies?
1) Don't let ports share cookies: make browsers "think" the port is part of the domain name.
2) An Origin field. Set-Cookie: sid=123qwe; Origin=https://site.com:4567/; would mean "Secure;", "Port=4567" and "Domain=site.com" at the same time.

Conclusion.
During the last decade many workarounds were introduced just to make cookies more sane, yet the straightforward Origin approach was never implemented. Are we doomed to maintain them forever?

Tip: don't assume an untrusted app is isolated just because it runs on a different port; cookies will ignore it. Check your localhost, developer!

Thursday, July 4, 2013

Quick introduction to the XSS problem
XSS is not only about scripts. It happens whenever an attacker can break the expected DOM structure with new attributes/nodes. For example <a data-remote=true data-method=delete href=/delete_account>CLICK</a> is a script-less XSS (in Rails, jquery_ujs will turn the click into a DELETE request).
It can happen either on page load (normal XSS) or at run time (DOM XSS). Once the payload is successfully injected there is no safe place from it on the whole origin: any CSRF token is readable, any page content is accessible on site.com/*.

XSS is often found on static pages, in *.swf files, *.pdf Adobe bugs and so on. It happens, and there is no guarantee you won't ever have it.

You can reduce the damage
Assuming there is GET+POST /update_password endpoint, wouldn't it look like a good idea to deny all requests to POST /update_password from pages different from GET /update_password?

Thankfully a similar technique can be implemented with subdomains:
www.site.com (all common functions, static pages)
login.site.com (it must be a different subdomain, because XSS can steal passwords)
settings.site.com
creditcards.site.com
etc

The app serves specific (secure) routes only from dedicated domain zones, which have different CSRF tokens. All subdomains share the same session and run the same application; the upgrade is really seamless, the user will not notice anything, and redirects are automatic.

How to add it to a Rack/Rails app

You only need to modify config.ru (I monkey-patched a Rails method; it's easy to do the same for a Sinatra app)
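The config.ru snippet itself isn't shown here; below is a framework-agnostic sketch of the middleware's decision (hostnames and the route list are illustrative):

```javascript
// Sketch: secure routes are only served from their dedicated subdomain;
// anywhere else they trigger a redirect to it.
const SECURE_ZONES = {
  '/update_password': 'settings.site.com',
  '/cards':           'creditcards.site.com'
};

function dispatch(host, path) {
  const zone = SECURE_ZONES[path];
  if (zone && host !== zone) {
    return { status: 302, location: 'https://' + zone + path };
  }
  return { status: 200 }; // serve normally
}

dispatch('www.site.com', '/update_password');
// → { status: 302, location: 'https://settings.site.com/update_password' }
dispatch('settings.site.com', '/update_password'); // → { status: 200 }
```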

It fails if you GET/POST www.site.com/secure_page from a www page, because the middleware redirects to secure.site.com (which has a different origin, so you cannot read the response).
It fails if you POST to secure.site.com/secure_page with a www-page CSRF token, because the secure subdomain uses a different CSRF token which www doesn't know.
Neat!

The demo is a basic prototype and may contain bugs; you can download the code here.

There is also a subdomainbox gem. I hope to see more Domain Box gems for Ruby apps, along with libraries for PHP/Python apps. Maybe we should add this to Rails core, telling other programmers who care about security the most?

Thursday, June 20, 2013

I wrote a post about the CSRF tool and readers noticed an intentional mistake in it: storing a token in a plain cookie can be exploited with cookie forcing (like the cookie tossing we saw on GitHub, but it works everywhere). TL;DR: with a MITM, attackers can replace or expire any cookie (write-only access).

Actually I don't think it's a mistake in the CSRF protection design; I still think the problem is the horrible cookie design. But we have to deal with the current situation, and taking this feature into account I came up with a concise and effective protection.

At first glance it might look ugly. But the web is ugly, so it's OK.

Step 1. We simply put the csrf_token value into the session. Wait, we do not store it in the database, because that's pointless. We do not store it in a signed cookie either, like Rails does (that is also overwork).
I propose to literally store csrf_token inside the session cookie: sid=123qweMySeSsIon--321csRFtoKeN
You only need a middleware on top of the stack slicing the token off after the --.
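A sketch of that slicing middleware's core (the cookie layout is exactly the `--` convention above; the function name is mine):

```javascript
// One cookie, two values: everything after the last "--" is the CSRF token,
// everything before it is the real session id.
function sliceSessionCookie(value) {
  const i = value.lastIndexOf('--');
  if (i === -1) return { sid: value, csrfToken: null };
  return { sid: value.slice(0, i), csrfToken: value.slice(i + 2) };
}

sliceSessionCookie('123qweMySeSsIon--321csRFtoKeN');
// → { sid: '123qweMySeSsIon', csrfToken: '321csRFtoKeN' }
```

The point is that an attacker who forces the cookie replaces the session and the token together, so a fixated token never ends up paired with a victim's session.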

Can you think of anything simpler than this trick? I guess any scheme using two separate cookies will require signatures and cryptography with a server secret. Too much work for a design mistake the W3C made.

Although CSRF protection on its own is not popular, cookie forcing protection is far less popular still, so don't be surprised how many websites/frameworks are vulnerable. How to check: a site is vulnerable if you can replace the csrf_token cookie (use Edit This Cookie) without breaking the session, or if csrf_token remains the same after login.

For example the Django framework (the csrftoken cookie is not integrated with the session), Twitter (doesn't refresh authenticity_token after login, so it can be fixated), etc.

Sunday, May 19, 2013

TL;DR Nope, I didn't find a major breach, just an interesting detail in reCAPTCHA's design.

CAPTCHA ain't no good for CSRF
I was told on Twitter that CAPTCHA mitigates CSRF (and, wow, it is "officially" in OWASP's Challenge-Response section). Here is my opinion: it does not, it will not, it should not. Feel free to wipe that section from the document.

CAPTCHA in a nutshell: http://en.wikipedia.org/wiki/CAPTCHA
The "challenge" is literally a key to the solution (not necessarily random; the solution can be encrypted inside it, stateless).
The "response" is an attempt to recognize the given image.
The Client is a website that hates automated actions, e.g. registrations.
The Provider generates new challenges and verifies attempts.
The User is a human being or a bot promoting viagra online no prescription las vegas. Nobody knows which until the User solves the CAPTCHA.

Ideal flow
1. User comes to Client.
2. Client securely gets a new challenge from Provider.
3. Client sends back to User the challenge value with corresponding image.
4. User submits the form along with challenge and response. Client verifies them by sending over to Provider.
5. If Provider responds successfully — User is likely a pink human, otherwise:
if amount_of_attempts > 3
BAN FOR A WHILE
else
GOTO 2
end

As you can see, the User talks directly to the Client and has no idea about the Provider.
Only 1 challenge per form attempt is allowed; 3 fails in a row = BAN. No matter how hard the challenges are, the User must try to solve them.

reCAPTCHA is easy to install, free, secure and very popular.

The reCAPTCHA flow
1. Client obtains public key and private key at https://developers.google.com/recaptcha/intro
2. Client adds some JS containing his public key to the HTML form.
3. User opens the page, User's browser makes a request to Provider and gets a new challenge.
4. User solves it and sends challenge with response to Client
5. Client verifies it by calling Provider's API (CSRF Tool template). In case of failed attempt User is required to reload the Provider's iframe to get a new challenge - GOTO 3.

The reCAPTCHA problem
The Client knows how many wrong attempts you made (because verification is server-side) but doesn't know how many challenges you actually received (because the User gets challenges via JS; the Client isn't involved). Getting a challenge and verifying a challenge are loosely coupled events.

Let's assume I have a script which recognizes 1 of 1000 reCAPTCHA images. That's quite a shitty script, right?

Wait, I have another script which loads http://www.google.com/recaptcha/api/noscript?k=VICTIM_PUBLIC_KEY (demo link) and parses src="image?c=CHALLENGE_HERE"></center

For 100 000 images the script solves (more or less reliably) 100 of them and crafts valid requests to the Client by putting the solved challenge/response pairs in them.
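Harvesting challenges can be sketched in a few lines (hypothetical: the URL is the noscript fallback quoted above, and the parsing is a naive scrape):

```javascript
// Pulls the challenge key out of the noscript fallback page's HTML,
// which embeds it as <img src="image?c=CHALLENGE">.
function extractChallenge(html) {
  const match = html.match(/image\?c=([^"&]+)"/);
  return match ? match[1] : null;
}

// In a real harvest, fetch
//   http://www.google.com/recaptcha/api/noscript?k=VICTIM_PUBLIC_KEY
// repeatedly and collect one challenge per fetch:
extractChallenge('<center><img src="image?c=03AHJ_abc..."></center>');
```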

Analogy: User = Student, Client = Exam, Provider = Table with questions.
To pass, the Student has to solve at least 1 problem, and he has only 3 attempts. In the reCAPTCHA world the Student goes to the table and looks over all the questions on it, trying to find the easiest one.

You don't need to solve reCAPTCHAs as soon as you receive them anymore. You don't need to hit Client at all to get challenges. You talk directly to Provider, get some reCAPTCHAs with Client's PUBLIC_KEY, solve the easiest and have fun with different vectors.

There are blackhat APIs like antigate_com which are quite good at solving CAPTCHAs (private scripts and Chinese kids, I guess).
With this trick they can create a special API for reCAPTCHA: you send the victim's PUBLIC_KEY and get back N solved CAPTCHAs which you can use in malicious requests.

Mitigation
I cannot say whether it should be fixed, but website owners must be aware that challenges are out of their control. To fix this, reCAPTCHA could return the number of challenges issued and failed attempts along with the verification response.

Questions? I realized this an hour ago, so I may be mistaken somewhere, or maybe I didn't discover it first. Please point it out.

Saturday, May 18, 2013

A while ago our friend Nir published a CSRF changing the Facebook password, and it was the last straw. I can recall at least 5 major CSRF vulnerabilities in Facebook published in the last 6 months. This level of web security is unacceptable nonsense for Facebook.

So, here is a short reminder about mitigation:

Every state-changing (POST) request must contain a random token. The server side must check it before processing the request, using the value stored in the received cookies: cookies[:token] == params[:token]. If any POST endpoint lacks it, something is clearly wrong with the implementation.
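The check is literally one comparison; here is the `cookies[:token] == params[:token]` rule expressed as a function (a sketch, not any framework's actual code):

```javascript
// Double-submit check: the token in the cookie must exist and match the
// token submitted with the form. A constant-time comparison is preferable
// in practice; plain equality keeps the sketch short.
function csrfValid(cookies, params) {
  return Boolean(cookies.token) && cookies.token === params.token;
}

csrfValid({ token: 'abc' }, { token: 'abc' }); // true: legit form post
csrfValid({ token: 'abc' }, { token: 'xyz' }); // false: forged request
csrfValid({}, { token: 'abc' });               // false: no cookie at all
```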

To make the world a better place I created a simple and handy CSRF Tool: homakov.github.io

Copy as Curl from Web Inspector, paste into text field and get a working template in a few clicks:

No hassle. Researchers need a playground to demonstrate CSRF, with CSRF Tool you can simply give a link with working template.

No disclosure. The fragment (the part after #) is not sent to the server, so I am not able to track the CSRFs you are researching (GitHub Pages has no server side anyway). The link to a template contains all the information inside it.

Auto-submit for more fun, Base64 makes URL longer but hides the template.

Tuesday, May 14, 2013

UPD: no wonder; I missed the fact that OAuth providers use static passwords, so it cannot be a legit 2nd factor, it just makes the 1st factor harder to exploit. Thanks for the feedback, people from reddit!

Disclaimer: I'm a noob in Two-Factor Authentication (TFA). I got an idea today which I want to share to get feedback; your comments are totally welcome.

I don't have a mobile phone. Not only because Russian mobile providers are cheaters (likely the same in your country), but also for many other reasons: traveling (my Mastercard was blocked once in Sofia and I needed an SMS approval code, which I couldn't receive because my mobile was "outside the coverage area" all the time), no daily usage (never needed to call someone ASAP in real life; maybe I am such a nerd), VoIP FTW etc. Who cares, this is not my point.

The thing is, all physical items (mobile phone, YubiKey, token generators, eye biometrics, fingerprints) are clonable / stealable, or just not reliable enough (face/gesture/speech recognition).

Again, as I said in the disclaimer, I don't know whether scientists have already created a universal, reliable physical object for TFA; I just read the wiki article a bit and it seems they have not.

Why must the second-factor provider be a real object in our digital century? Is it really any better/safer (it is clearly less convenient) than yet another password or the bunch of cookies our browsers store? I doubt it.

The more guys say John is a reliable person I can trust, the more I believe he really is. And I don't need to look at John's tattoo (a poor analogy for biometrics) which he hates to show!

Hassle-free. Just stay logged in to FB/Twitter all the time, and a couple of quick OAuth redirects in iframes (no interaction required at all) will make sure that your current FB account is the one attached to your example.com account, and that your current Twitter user equals the one attached to example.com. It can be simplified and made more secure, because you only need the /me endpoint data; the actual access_token will not be used.

I'm keeping the post short on purpose and waiting for your ideas; perhaps I missed something huge. Thanks!

Broken concept & architecture. This feels as weird as the client side sending code that is going to be evaled on the server side... wait... Rails programmers had a similar thing for a while :/
Any RJS technique can be painlessly split into the client's listeners and the server's data.

Escaping user content and putting it into JavaScript can be more painful and have more pitfalls than normal HTML escaping. Even the :javascript section used to be vulnerable in HAML (</script> breaks .to_json in Rails < 4). There are more special characters and XSS vectors you should care about.

JSONP-like data leaking. If the app accepts a GET request and responds with JavaScript containing private data, an attacker can use fake JS functions to leak data from the response. For example, the response is:

Rails 4 has built-in default_headers (guess who added it?), which performs better than a before filter

The current CSP is easily bypassed with a JSONP trick; just write this in the console on github.com: document.write('<script src="https://github.com/rails.json?callback=alert(0)//"></script>'). I will fix it when I get spare time, btw: https://github.com/rails/rails/pull/9075

CSP is very far from ultimate XSS prevention. Really very far, especially for Rails's agileness and jquery_ujs. Github should consider privileged subdomains https://github.com/populr/subdomainbox

Frames and window are the same kind of variable: a WindowProxy object. You probably know that any frame is accessible in the parent via its window.name: <iframe name="some_frame" src=...> creates a some_frame variable pointing to another WindowProxy object. The tricky part is that a frame can redefine its own window.name, and doing so injects a new variable into the parent's namespace (only if that variable is undefined - you cannot shadow existing variables...).

Let me remind you of the "script killer" trick (we used to kill framebreakers with it). XSS Auditor by default removes pieces of scripts that look "malicious" (if you want to cut off <script src="/lib.js"></script>, just pass this code in the query string).

Rough example:
1. A page on (frameable) site.com has some innocent iframe (e.g. a Facebook Like button).
2. It also performs some actions when the JS variable 'approved' is true-ish: <script>var approved = false</script> ... after page load or click ... <script>if(approved){actions!}else{alert('not approved yet');}</script>
3. evil.com opens <iframe name="target" src="http://site.com/page?slice=%3Cscript%3Evar%20approved%20%3D%20false%3C%2Fscript%3E"> - this kills the approved variable.
4. Now it navigates the Facebook button (because of Frame Navigation madness): target[0].location='http://evil.com/constructor' (target is a WindowProxy, and [0] is its first frame, the FB Like).
5. constructor changes the frame's window.name = 'approved'.
6. Now there is an "approved" variable in the namespace of site.com, and if(approved) returns true - actions!

Also:
1. approved+'' returns "[object Window]" (I didn't find a way to redefine toString). It would be much more dangerous if we could play with the stringified value.
2. It can be any complex nested variable (/constructor just builds a page of nested frames with specific window.names), e.g. config.user.names[0] - we can replace this variable (by creating an iframe in a "names" iframe under a "user" iframe which is under a "config" iframe)! With [object Window], but anyway ;)
3. You can use this playground from the previous post.

Such tricks are reminders of how ridiculous web standards can be (URL disclosure is my favorite WontFix).

Friday, April 5, 2013

I don't like either the idea or the implementations of the Sandbox feature (part of so-called HTML5). Most of the articles about it are lame copy-pastes of the spec. The straight question WHY DO WE NEED IT AT ALL remains unanswered. Furthermore, nobody writes that the implementation is a little bit crazy: by fixing small vulnerabilities it introduces huge ones.

Why?

When we embed frames with external content we are still open to navigational and/or phishing attacks (obviously cookies/inner HTML cannot be stolen, because of the same origin policy). The iframe code can change top.location or open a phishing popup window with prompt('your password?'). ... That's it. That's all we needed to fix.

Besides the allow-popups and allow-top-navigation options, the W3C introduced allow-scripts, allow-forms and allow-same-origin. They wanted the best, you know the rest.

Javascript... malicious shit.

OMG, sandbox can load any page with javascript turned off.

People are still fixing CSS for IE6. Meanwhile the W3C destroys all JavaScript-based framebreakers without asking anyone's advice (only one anti-clickjacking solution is left: X-Frame-Options). Nobody gave a shit about compatibility, I guess.

The W3C needed a really good reason to introduce such a crazy feature: "JavaScript from a frame can do malicious actions, now you can turn it off".
Malicious what? Tomorrow they will call JavaScript a Remote Code Execution vulnerability and ask us to remove the browser? JavaScript cannot be malicious on its own!

allow-forms + no allow-scripts = clickjacking working like a charm against JavaScript-based framebreakers. Surely, not everyone uses X-Frame-Options yet, e.g. vk.com. UPD: here is a talk from BlackHat 2011; for the last 2 years it has been WontFix :O

Untrusted code on the same origin.

Another announced killer feature of Sandbox is running untrusted code on your origin (remark: USE A SUBDOMAIN, DON'T BE STUPID). People tend to believe it: "Cool, I will set up a JavaScript playground on my website!". Like this?

<iframe src="SITE/playground?code=alert(1)" sandbox="allow-scripts">

Now think again: what stops an attacker from simply using 'SITE/playground?code=alert(1)' as an XSS hole?

The following may sound like a genius idea to prevent it: put Sandbox into the Content-Security-Policy header and let the server side set the sandbox directive. So far only Chrome supports such a header (I use this technology in Pagebox, XSS protection through privileges).

You cannot rely on it out of the box, so you need a much more comprehensive solution: you have to make sure that the current page is loaded under a sandbox (the request is authentic), not in a new window.

Wednesday, March 20, 2013

I found new vectors and techniques for the detection attack from my previous post. There is a cross-browser way to detect whether a certain URL redirects to another URL, and whether the destination URL equals TRY_URL. Interestingly, in WebKit you can check a few million TRY_URLs per minute (brute force).

check_redirect(base, callback, [assigner])

base, String - the starting URL. It can redirect to another URL automatically or only under certain conditions (e.g. the user is not logged in).

callback, Function - receives one Boolean argument: true when at least one of the assigned URLs changed history instantly (the URL was guessed properly), and false when the history object wasn't updated (no URL was guessed and all assignments were "reloading").

If the frame changed location to POINT 2, your TRY_URL was equal to REAL_URL and the history object was updated instantly. If it is POINT 1, TRY_URL was different and made the browser start reloading the page, so the history object was not changed immediately.

Wait, only Chrome supports cross-origin access to the "history" property; how can you make it work in other browsers?

Thankfully I found a universal bypass: simply assign a new location, POINT 3, which contains <script>history.go(-3)</script> doing the same job.

Bruteforce in webkit:

Vkontakte (a Russian social network) redirects http://m.vk.com/photos to http://m.vk.com/photosUSER_ID. We can use the map-reduce technique from my previous Chrome Auditor script extraction attack.
Iterate 1-1kk, 1kk-2kk, 2kk-3kk and see which one returned to POINT 2. Repeat: 2000000-2100000, 2100000-2200000, etc.

It will take a few minutes to brute-force a user id between 1 and 1 000 000. Vkontakte has more than 100 million users; around 10 minutes should be enough.

Friday, March 15, 2013

We've got Nir Goldshlager working on our side (he simply loves bounties, and Facebook does pay them). We both discovered some vulnerabilities in Facebook and joined forces to demonstrate how many potential problems are hidden in OAuth.

TL;DR: all oauth exploits are based on tampering with the redirect_uri parameter.1234
Here I demonstrate 2 more threats proving that a flexible redirect_uri is the Achilles' heel of OAuth.

This makes me really curious: what the hell is wrong with Facebook, and why don't they care about their own Graph security and their own users? Why do they allow a flexible redirect_uri, opening an enormous attack surface (their own domain + the client app's domain)?

We simply play Find-Chain-Of-302-Redirects, Find-XSS-On-Client and Leak-Credentials-Then-Find-Leaking-Redirect_uri games. Don't they understand it is clearly their business to protect themselves from non ideal clients?

A good migration technique could be based on recording the most used redirect_uri for every client (most Rails apps use site.com/auth/facebook/callback) and then whitelisting those redirect_uris.

P.S. Although, as bounty hunters, me, @isciurus and Nir are OK with the current situation. There is so much $$$ floating around redirect_uri...

P.S.2: feel free to fix my language and grammar; I get many people asking me to elaborate on some paragraphs.

I simply send tons of possible payloads, and if some page was banned (the payload was found in the source code) then we know which bunch contains the real content of the script.

'map-reduce'-like technique:

open 10 windows (10 for simplicity; we can use 25 windows, and if the target has no X-Frame-Options it can be way faster with <iframe>s) containing specific bunches of possible payloads: <script>pin=1, <script>pin=2, <script>pin=3, <script>pin=4 ... from 1 to 1000000, from 1000000 to 2000000, from 2000000 to 3000000, etc.

if, say, second window was banned we should do the same procedure and open new windows with: from 1 000 000 to 1 100 000, from 1 100 000 to 1 200 000 etc and so on
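The narrowing above is just repeated division of the keyspace; a sketch of the arithmetic (the window count is the 10 from the example):

```javascript
// Each round splits the remaining keyspace across `windows` pages and
// keeps only the banned one, so the space shrinks by that factor per round.
function roundsNeeded(keyspace, windows) {
  let rounds = 0;
  while (keyspace > 1) {
    keyspace = Math.ceil(keyspace / windows);
    rounds++;
  }
  return rounds;
}

roundsNeeded(1000000, 10); // → 6 rounds to pin down a 6-digit space
```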

approximate exploitation times for different token sizes:
6 characters — 4.5 minutes
7 characters — 120 minutes
8 characters — 72 hours
Saving checked bunches in a cookie (next time you continue with other bunches) + using location.hash (it is not sent to the server and can be very long) = plausible CSRF token brute force.

All kinds of detections are bad. There are no innocent detections: a small weak spot can be used in a dangerous exploit. Today I found a way to brute-force 25 000 000 URLs/minute using location.hash detection (WontFix!). And I keep making it faster.

Bonus: how to keep a malicious page open for a long time:

close if u can data:text/html,<script>onbeforeunload=function(){location.reload();return 1};onload=function(){location.reload();}</script>
— Egor Homakov (@homakov) March 13, 2013

TL;DR: GitHub is vulnerable to cookie tossing. We can fixate the _csrf_token value using a WebKit bug and then execute arbitrary authorized requests.

Github Pages

Plain HTML pages can be served from yourhandle.github.com. These HTML pages may contain JavaScript code.
Wait.
Custom JS on your subdomains is a bad idea:

If you have document.domain='site.com' anywhere on the main domain, for example in xd_receiver, then you can easily be XSSed from a subdomain

Surprise, Javascript code can set cookies for the whole *.site.com zone, including the main website.

Webkit & cookies order

Our browsers send cookies this way:

Cookie:_gh_sess=ORIGINAL; _gh_sess=HACKED;

Please keep in mind that the original _gh_sess and the dropped _gh_sess are two completely different cookies! They only share the same name.
Also, there is no way to figure out which one is Domain=github.com and which is Domain=.github.com. Rack (a common interface for Ruby web applications) uses the first one:

cookies.each { |k,v| hash[k] = Array === v ? v.first : v }

Here's another thing: Webkit (Chrome, Safari, and the new guy, Opera) sends cookies ordered not by Domain (Domain=github.com must go first), and not even by httpOnly (those should obviously go first).
It orders them by creation time (I might be wrong here, but this is how it looks).
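To see why the ordering matters, here is a rough sketch (in JS, not Rack's actual Ruby code) of a "first occurrence wins" cookie parser like the one-liner above implements:

```javascript
// Sketch: resolve duplicate cookie names the way Rack does - first one wins.
function parseCookies(header) {
  var hash = {};
  header.split(/;\s*/).forEach(function (pair) {
    var i = pair.indexOf('=');
    if (i < 0) return;
    var name = pair.slice(0, i);
    if (!(name in hash)) hash[name] = pair.slice(i + 1); // later duplicates are ignored
  });
  return hash;
}

parseCookies('_gh_sess=ORIGINAL; _gh_sess=HACKED'); // { _gh_sess: 'ORIGINAL' }
```

So whichever cookie the browser happens to send first becomes the session — and the creation-time ordering is exactly what lets the attacker's subdomain cookie come first.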

Initially I was able to break login (a 500 error for every attempt). I had some fun on Twitter, and the Github staff banned my repo. Then I figured out how to fixate "session_id" and "_csrf_token" (they never get refreshed if already present).

It will make you a guest user (logged out) but after logging in values will remain the same.

Steps:

Let's choose our target. We discussed the XSS-privileges problem on Twitter a few days ago. Any XSS on Github can do anything: e.g. open-source or delete a private repo. This is bad, and the Pagebox technique or domain-splitting would fix it.

We don't need an XSS now, since we fixated the CSRF token. (A CSRF attack is almost as serious as an XSS. The main advantage of XSS is that it can read responses; CSRF is write-only.)

The HACKED session is user_id-less (a guest session). It simply contains session_id and _csrf_token; no particular user is specified there.
So the Game asks him explicitly: please star us on Github (or something like that) <link>.
He may feel a little confused to find himself logged out. Anyway, he logs in again.

user_id in session belongs to the Githubber, but _csrf_token is still ours!

Meanwhile, the Evil game inserts <script src=/done.js> every second.
It contains done(false) by default, which means: keep submitting the form to the iframe:
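A sketch of what that loop could look like (the done() callback shape and the helpers are my guesses, not the actual exploit code):

```javascript
// Sketch: JSONP-style polling loop. /done.js serves done(false) until the
// forged request succeeds, then done(true); the form targets a hidden iframe.
var finished = false;
function done(ok) { finished = ok; }   // called by the re-loaded /done.js

function tick(loadDoneJs, submitForm) {
  if (finished) return false;          // stop once done(true) has arrived
  loadDoneJs();                        // e.g. append <script src="/done.js?ts=...">
  submitForm();                        // resubmit the CSRF form with the fixated token
  return true;
}
```

In the real page tick() would be driven by setInterval(..., 1000), with loadDoneJs and submitForm doing the DOM work.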

The Githubber replies "Nice game" and doesn't notice anything (github/github was open sourced for a few seconds and I cloned it). Oh, and his CSRF token is still ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4= forever (only a cookies reset will update it).

Fast fix: Github now expires the Domain=.github.com cookie if two _gh_sess cookies were sent on https://github.com/*. It kills HACKED just before it becomes older than ORIGINAL.
A proper fix would be to use githubpages.com or another separate domain. Blogger uses blogger.com for the dashboard and blogspot.com for blogs.

Friday, March 1, 2013

Please don't think of OAuth2 as the next generation of OAuth1. They are as different as colors: OAuth1 is the green version, OAuth2 is the red version.
The biggest OAuth1 provider is Twitter.
I bet ($100!) they are not switching to OAuth2 in the near future. Pros and cons:
+ becoming compatible with the rest of social networks
- making authorization flow insecure, like the rest of social networks

I am not saying OAuth1 is super secure — it was vulnerable to session fixation a few years ago. If you made a user approve an 'oauth_token' issued for your account, you could then use the same oauth_token again and sign in to his account on the Client website.

It was fixed in OAuth 1.0a. Wait, read that again: it was fixed. None of the OAuth2 vulnerabilities I pointed out in my previous posts a year ago were addressed in the spec. OAuth1 is a straight, concise, explicit and secure protocol. OAuth2 is the road to hell.
Here we go!

OAuth2 core vulnerabilities - parameters

I have no idea why, who, and, generally speaking, what the fuck, but we can pass "response_type" as a parameter in the URL (not as a setting of your Client).

Vector 1. Set response_type=token in the authorization URL and use a specially crafted redirect_uri to leak the access_token from the URI fragment. You can add Chrome vulns, like we did to hack Facebook. The hashbang #! is a very nice bug too. A controlled piece of code (XSS), or a mobile app, like Nir did. BTW, avoid pre-approved Clients (e.g. Facebook Messenger with full permissions).

Stolen access_token = Game Over.
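What the attacker's "leaky" page does with the fragment is trivial; a sketch (the parameter names follow the OAuth2 spec, the helper is hypothetical):

```javascript
// Sketch: extract a leaked access_token from the URI fragment after a
// response_type=token redirect lands on an attacker-controlled page.
function tokenFromFragment(hash) {
  var m = /[#&]access_token=([^&]*)/.exec(hash);
  return m ? decodeURIComponent(m[1]) : null;
}

tokenFromFragment('#access_token=abc123&token_type=bearer'); // 'abc123'
```

In a browser this would be called with location.hash, since the fragment never reaches the server logs but is fully visible to any script on the page.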

response_type must be a constant value in application settings.

Vector 2: redirect_uri. If the spec is implemented properly, then tampering with redirect_uri, pointing it at another, "leaky" value, is pointless: to obtain an access token you must send the redirect_uri value along with the client creds. If the actual redirect_uri was "leaky" and not equal to the real redirect_uri, the Client will not be able to obtain an access_token for this code.

Vk (vkontakte) was vulnerable to this attack until Sep 2012: redirect_uri wasn't required to obtain a token. An <img> on the client website could leak the victim's 'code' through the referrer, and the attacker could use the same 'code' to log in to the victim's account.
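The provider-side check that kills this vector, sketched (the object shapes are hypothetical, the rule is from the spec): the redirect_uri sent at token-exchange time must equal the one the 'code' was issued for.

```javascript
// Sketch: code-for-token exchange with the redirect_uri binding OAuth2 requires.
function exchangeCode(storedGrant, params) {
  // client credentials would also be verified here (omitted in this sketch)
  if (params.code !== storedGrant.code) return null;
  // if the code leaked via a "leaky" redirect_uri, this mismatch makes it useless
  if (params.redirect_uri !== storedGrant.redirect_uri) return null;
  return { access_token: 'opaque-token' };
}
```

With this check, a 'code' stolen through a referrer is worthless: the attacker cannot replay it with the legitimate redirect_uri and client creds.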

Checking permissions after obtaining the access_token is hardly the right solution; it is rather a workaround. Much better:

If scope is a parameter in the URL, then the User should be able to remove some permissions by clicking [X]. This is my human right, fuck yeah!

If you really need some scope that badly, set it in the Client's settings. Then you are 100% sure that the User granted these permissions.

Authentication vs Authorization

Authorize = permit a 3rd-party Client to access your Resources (/me, /statuses/new). Authenticate = prove that "this guy" is "that guy" who has an account on the website.

OAuth2 was designed for authorization purposes only, but now in 90% (yep!) of cases people authenticate with it too! They simply use the /me endpoint and find a record in the database by external_user_id and provider_name.

To some extent, state-by-default is like attr_accessible-by-default (you know what I mean). I think state should be a compulsory parameter; otherwise we might as well invent a new compulsory csrf_token parameter doing exactly the same job under a "mnemonic" name. Any implementation w/o 'state' is vulnerable to CSRF. It leads to session fixation (Alice is logged in as Mallory) and account hijacking (Mallory can log in as Alice because Alice connected Mallory's Provider account).

I truly love signed_request (a Facebook feature!): it provides information about the Client this access_token was issued for. I think we should stop using response_type=token and use signed_request only, because it is a reliable way for mobile and client-side apps to obtain the access_token + the client_id of that token + the current user_id (fewer round trips).
If we make redirect_uri constant and introduce encrypted_request (the same data but encrypted with client_secret), it would be a perfectly secure flow.

Use response_type=code and set a wrong state. If CSRF protection is implemented, the 'code' will not be used, and the attacker can hijack the User's account simply by using this 'code' (within 10 minutes). Friendly reminder for Providers: a 'code' must be used only once, and within a couple of minutes — I got a $2000 bounty from Facebook for finding a replay attack.

If CSRF protection is NOT implemented (wtf?!), you can try a cool trick: a 414 error. Set a super long state (5000 symbols); Facebook has no limit on state, but the Client's server most likely does. The 'code' won't be used because of the server error — slice it!

Verified Clients: a badge near the Client name, similar to what we have for Twitter accounts.

Add information about the number of Users the app has. I won't authorize "Skype IM" if only 12 users have the app installed.

Client Leaked All the Tokens.
What is the Provider going to do if some Client leaks all its access_tokens at once (via an SQL injection, for example)? I think there should be some kind of antifraud system monitoring token usage. I am developing one, btw; let me know if you wish to try it.

'refresh_token'
I might be missing something, but why can't we use access_token + client creds to obtain a new access_token? In effect, refresh_token = access_token. If the access_token is stolen, it cannot be refreshed w/o client creds anyway. If the client creds are stolen too = Game Over.

Infinite access
Dear Providers, please add a valid_thru parameter: I want to set how long the Client has access to my Resources.

Last but not least threat: MITM
SSL and encryption: OAuth2 relies heavily on https. This makes the framework simpler but way less secure.

so, OAuth...?

This is a sad story, and I don't know what to do or how to fix it.
Whom should I join? OAuth2.a? I am just a Russian scriptkiddie nobody listens to :/
"Framework, not protocol," they said.