I recently came across an article about a CSRF vulnerability on Facebook discovered by M.J. Keith. Keith documents how Facebook does not enforce its CSRF protection token, “post_form_id”: when the token is omitted entirely from certain requests, Facebook executes them as if it were never required in the first place. A demo video on the site shows how someone can exploit this flaw to seamlessly update a user’s Facebook profile information.

Although Facebook has reported that it has fixed this flaw, I decided to look into it a bit further.

After playing around with a few requests, I noticed that a CSRF vulnerability still remains in the request used to delete a friend. Even with “post_form_id” and a few other parameters completely omitted, Facebook still carries out the deletion of the friend whose id is specified in the request. This allows an attacker to seamlessly delete specified friends from the currently logged-in Facebook user’s account by getting the user to visit a specially crafted web page.

I then began wondering whether it would be possible to delete ALL of the friends of the currently logged-in Facebook user at once, just by making them visit a malicious web page. Being aware of Facebook’s privacy settings, I remembered that most users’ friend lists are public. Having done a little Facebook application development in the past, I began looking into some of the old REST API calls to see if there was a quick and easy way to obtain the profile ids of each of the victim’s friends. There were calls that could do this, but they required authorization from the user, as do the new API calls. So instead, I decided to do a raw HTML scrape of the public friends list. After parsing out the id of each friend on the target’s friend list, I could then execute the user deletion request against the id of each of the victim’s friends.

The PoC that I wrote to demonstrate this uses Ajax to repeatedly fetch and parse waves of friend ids from the victim’s public friends list. It then dynamically creates an iframe in the DOM for each friend id gathered and uses the iframe’s source to pass the id to another script. That script populates a form with the appropriate request variables and auto-submits the vulnerable request to Facebook using JavaScript. This process seamlessly loops in the background until all of the victim’s friends are deleted.
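The actual PoC runs client-side in JavaScript, but the id-harvesting step can be sketched in Python. The profile-link pattern below is hypothetical; the real script parsed whatever markup Facebook’s public friend list used at the time:

```python
import re

def extract_friend_ids(html):
    """Pull numeric profile ids out of friend-list markup.

    The href pattern here is a guess at the markup, purely for
    illustration; the real list's HTML structure may differ.
    """
    return re.findall(r'href="/profile\.php\?id=(\d+)"', html)

sample = (
    '<a href="/profile.php?id=1111">Alice</a>'
    '<a href="/profile.php?id=2222">Bob</a>'
)
print(extract_friend_ids(sample))  # ['1111', '2222']
```

Each id returned by a pass like this would then be handed to the auto-submitting form described above.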

Below is a simple demo video demonstrating the CSRF vulnerability on a single friend.

*Update (5/22/10): After reporting the flaw to Facebook Wednesday afternoon, I confirmed as of Friday afternoon that it has been successfully patched. Facebook now strictly enforces the presence of the “post_form_id” CSRF protection token in the request. Hopefully they do this for all other sensitive requests as well. I have updated the above post with more details about the flaw, as well as my method of exploiting it.

This weekend I was finally able to finish up a minuscule, open source research project of mine that I’ve been working on, called ProminentDork. I wrote ProminentDork to act as a very simple, generic, multi-threaded web application fuzzer that uses Google dorking to obtain domain-specific URLs that may be vulnerable to a variety of web vulnerabilities such as SQLi, XSS, LFI, RFI, or anything else you can scan website source for. Although using Google dorks for web application pen-testing is nothing new, I thought it would be interesting to research Google search automation and URL aggregation, and to learn more about how Google validates a search session using CAPTCHAs. Along the way, I decided to build in a simple and flexible solution for scanning the collected results for a variety of vulnerabilities that can be detected by concatenating a string to the end of the requested URL and then analyzing the source of the response.

Many existing security tools that use Google dorks for auditing are tied strictly to the Google Hacking Database or focus primarily on a specific type of vulnerability. Although automating web auditing with Google dorks is a great way to get a quick grasp of your site’s security based on popular query results, it does not, in any way, replace a web crawler that generates a full site map from the files/directories listed in the source and then scans those files/directories independently while analyzing the site’s structure.

ProminentDork currently supports multiple queries, error strings, and appended request strings; the Google Hacking Database; proxies; and Google CAPTCHAs. More features are sure to come as time progresses.

I wrote ProminentDork to be simple and broad enough to support many kinds of web vulnerabilities using the same method of gather->append->request->analyze. Surprisingly, that simple four-step process is great for auditing your website for common web vulnerabilities. It is nice to have an automated solution use Google to aggregate a list of potentially insecure/popular web pages from your specified domain and test them against a variety of exploits that you supply. In the next release, I plan on adding a more in-depth scanning solution that will obtain all the GET and POST variables from the source and build requests off of those to return more results. I already wrote a small script in Python a few months ago to do just that, and I plan on implementing it in ProminentDork soon.
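As a rough illustration, the gather->append->request->analyze loop might look like the following Python sketch. The payload, error strings, and function names here are illustrative assumptions, not ProminentDork’s actual defaults (the tool itself is written in C#):

```python
import urllib.request

PAYLOAD = "'"                      # appended to each gathered URL (SQLi probe)
ERROR_STRINGS = [
    "You have an error in your SQL syntax",
    "Warning: mysql_fetch",
]

def analyze(body):
    """Return True if any known error string appears in the response body."""
    return any(err in body for err in ERROR_STRINGS)

def scan(urls):
    """Append the payload to each gathered URL, request it, and flag hits."""
    hits = []
    for url in urls:                       # gather: dork results collected earlier
        target = url + PAYLOAD             # append
        try:
            body = urllib.request.urlopen(target, timeout=10).read() \
                                 .decode("utf-8", "replace")   # request
        except Exception:
            continue
        if analyze(body):                  # analyze
            hits.append(target)
    return hits
```

The same skeleton covers XSS, LFI, or RFI by swapping the payload and error strings.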

I wrote ProminentDork in C# to experiment with the proclaimed ease and efficiency of implementing a thread queue through the System.Threading.ThreadPool class in .NET. The source can be downloaded from the link at the bottom. I have released the code under the GNU General Public License as published by the Free Software Foundation.

IMPORTANT: I wrote this code as a simple research project; I am not responsible for, nor do I condone, any misuse or illegal use of this code or application. It is meant STRICTLY for research and must not be used illegally. Do NOT perform malicious scans on websites you do not legally own. If you use this application, you must abide by Google’s Terms of Service as well as the law.

It is quite evident that clickjacking is becoming more and more popular on major social networking sites. I believe this is the case strictly because of the simplicity, flexibility, and effectiveness of this CSRF variant.

Clickjacking first got my attention through an article I came across at Guya.net, which demonstrates how a user’s webcam can become compromised by using clickjacking to manipulate the Flash webcam-access settings on Adobe’s website.

Quite honestly, anybody with a basic knowledge of HTML and a tiny bit of creativity can embed an iframe into a malicious site and point its source at any other website with a clickable element. But the beauty of clickjacking is that it bypasses any form of authenticity or session token passed from the victim’s browser session to the target server, without any need to obtain that token individually. A basic CSRF vulnerability on a server that allows something such as “http://bank.com/withdraw?total=1000&from=victim&to=attacker” can be patched by adding a session or authenticity token and passing it along with the other variables. But since clickjacking just embeds the website directly into a malicious page and convinces the user to click something on it, the need to obtain that token individually, which is no simple task because of the same-origin policy, is bypassed entirely.
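Token validation of this kind is simple to implement server-side. Here is a minimal sketch in Python, with a hypothetical secret and session id; note that clickjacking defeats it anyway, because the victim’s own browser submits the genuine form with the valid token already in it:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical; never sent to the client

def csrf_token(session_id):
    """Derive a per-session token the server can recompute on submit."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def request_is_valid(session_id, submitted_token):
    """Reject any request whose token doesn't match the session's token."""
    return hmac.compare_digest(csrf_token(session_id), submitted_token)

tok = csrf_token("session-abc")
print(request_is_valid("session-abc", tok))       # True: token round-tripped
print(request_is_valid("session-abc", "forged"))  # False: cross-site forgery
```

A third-party page can never compute the token, which is what stops classic CSRF; clickjacking sidesteps the whole check by tricking the user into clicking the legitimate, token-bearing form.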

Facebook has recently been hit with two clickjacking worms that, interestingly enough, propagated at immense speed. The first attack, noted for its tagline “Click Da’ Button, Baby!”, was really the first time clickjacking had been used in the wild. When the victim clicks the button, he/she is actually clicking Facebook’s ‘Share’ button, which has been embedded in multiple iframes, given an opacity of 0, and overlapped on the big red button. Clicking the Share button in turn shares the malicious URL with all of the victim’s friends in an attempt to infect their sessions as well. In another recent clickjacking attack against Facebook, you can actually see part of the “Share” button on the page.

Twitter also fell victim to such an attack, which consisted of a user’s standard Twitter homepage being embedded in an iframe. Twitter quickly responded by adding a few lines of frame-busting JavaScript below the body tag to prevent the page from being embedded in an iframe. Below is an excerpt of that JavaScript:

I like how they set the innerHTML of the body to nothing after 1 millisecond, so even if you attempt to stop the page load, it would just show the background. Unfortunately, this is not the case with the mobile website. The simplicity of many popular social networking mobile sites makes them perfect targets for clickjacking, because the security measures built into the robust standard site cannot be carried over to the mobile versions, given the platforms they are accessed on. And that is exactly what I, and others, are demonstrating. By simply embedding an iframe with a source of “http://m.twitter.com/home?status=Clickjacked”, or one with a source of “http://m.facebook.com/sharer.php?u=http://en.wikipedia.org/wiki/Clickjacking”, the GET variables ‘status’ and ‘u’ are automatically inserted into their corresponding text boxes. With a little CSS, I aligned both iframes so that the update-status button and the share button sit on top of the input buttons on the main page with an opacity of 0. I put together a simple proof of concept of such an attack to demonstrate this more clearly. The demo was made for Firefox; the styles can be adapted to other browsers that allow clickable iframes with an opacity of 0.

Twitter/Facebook Clickjacking Demo, with an iframe opacity of 70%

The victim thinks that he/she is clicking the button on my page, but is actually clicking the update-status button on their Twitter page or the share button on their Facebook page. The demo also works with JavaScript disabled. As you can imagine, the possibilities with such an attack are endless. All an attacker has to do is set the victim’s status to the URL of the malicious website containing the embedded iframe, and watch it propagate. The attacker can also craft the page to redirect to another page after the user reposts the vulnerable URL; that page can then serve malware, attempt a phishing attack, or perform any other form of malicious attack.

Implementing worm-like propagation on social websites such as Facebook or Twitter, or taking advantage of One-Click purchase buttons on Amazon, makes clickjacking a serious threat. Thankfully, browsers such as Safari and Internet Explorer 8 have implemented clickjacking protection, but it’s always best to protect yourself. For Firefox users, I suggest the ‘NoScript’ plug-in.

Many popular web applications are left susceptible to cross-site request forgery (CSRF) and request alteration due to a lack of request validation. This form of validation is vital to verifying both the origin of a request and the authenticity of the data within it.

About a week ago I stumbled upon a perfect example of the importance of request validation and data authenticity. I was invited to a website called Lockerz.com. Lockerz offers its users different ways to earn points by answering questions on the website daily and by inviting friends to sign up. Users can then redeem those points for products ranging from iPod Touches, Apple TVs, and MacBook Pros all the way to concert tickets, expensive pocketbooks, and jewelry.

After I registered, I was directed to a page welcoming me to Lockerz. My points counter showed up at the bottom right of the screen, populated with 2 points just for signing up. I was then directed to a Breakout-style game that each user plays once, right after signing up. The objective was to catch as many falling money squares as possible before time ran out; the more you catch, the more bonus points you get for signing up. The game began and I started moving my cursor around the screen trying to catch the falling squares. Time ran out and my point counter incremented by my total winnings. Below is a screenshot of the game.

As usual, my curiosity got the best of me and I decided to take a deeper look at the operation of the game. I set up a local proxy and tunneled all HTTP traffic through it so that I could take a closer look at the requests made throughout game-play. Only one POST request was made at the end of the game by the embedded SWF, which was used to increment my point counter. But I noticed something peculiar about the POST variables: there was no form of authentication being used to validate the score sent from the SWF to their server. Below is an example of the POST request that the SWF sent to their server.

By simply changing the value of the “score” variable to 9000 and altering the Content-Length to 39, my score counter incremented to 9,002 points. I would then be able to redeem prizes with high point prices instantly. I immediately notified Lockerz about the vulnerability and encouraged them to use some form of encryption or token-based validation to authenticate the legitimacy of the score sent by the SWF to the server. I received an email shortly after telling me that the team had been notified. I checked back two days later and they had done just what I suggested: they added several more variables to the POST request to validate the score before it is entered into their database. The new POST variables are listed below.

As you can see, a few new variables have been added, the most important being scoreCheck and gameToken. After much discussion with a good friend and colleague of mine, Praveen, we settled on the idea that scoreCheck is most likely some form of salted hash that the server uses to verify the score in the request made by the SWF; the score itself is still sent in the clear, which would be pointless if full encryption were being used. I am also assuming that the gameToken value was added to authenticate the origin of the request and to make sure that the SWF making the request is actually embedded on their website.
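If that guess is right, the scheme might look something like the following Python sketch. To be clear, the salt, field layout, and hash algorithm here are pure speculation on my part:

```python
import hashlib

# Hypothetical shared salt compiled into the SWF and known to the server.
SALT = "secret-salt-shared-with-the-swf"

def score_check(score, game_token):
    """Compute the speculative scoreCheck value for a given score and token."""
    return hashlib.sha256(f"{score}:{game_token}:{SALT}".encode()).hexdigest()

def server_accepts(score, game_token, submitted_check):
    """Server side: recompute the hash and compare it to what the SWF sent."""
    return score_check(score, game_token) == submitted_check

chk = score_check(12, "tok123")
print(server_accepts(12, "tok123", chk))    # True: score matches its hash
print(server_accepts(9000, "tok123", chk))  # False: tampered score fails
```

Under a scheme like this, bumping the score to 9000 without knowing the salt leaves the hash stale, and the server can simply drop the request.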

The idea of using request validation tokens to avoid CSRF and data alteration is nothing new. But it cannot be stressed enough, especially in the case of point-based sites like Lockerz, where an attacker can exploit the system to steal expensive prizes with little to no effort. In Lockerz’s case, using a hash made a lot of sense, since you can’t really store the SWF’s score in a session prior to sending it off to the server for validation. But in PHP, setting a simple session token on the server and sending it through a request that you would like to protect can be accomplished easily using the $_SESSION associative array. You can check out this article for more information and easy-to-understand examples demonstrating the proper use of the $_SESSION associative array in relation to avoiding CSRF attacks and validating request data in PHP.

Last night, a friend of mine directed me to an interesting article on Slashdot. The article publicizes Comcast’s new practice of DNS hijacking on non-existent domains for all of their customers. Having installed Comcast Cable just a week prior to this, I tried it out for myself. After going to a non-existent domain, I was immediately redirected to an ugly, ad-infested page telling me that Comcast was not able to find the domain specified, and to check the spelling of my web address or use their search to help me find what I’m looking for.

Not only does this break internet standards, but it becomes a huge issue for IT professionals who manage the network and applications used by thousands of employees in a large company. As discussed in the comments of the article, many large companies use a split-tunnel VPN to give employees access to the internal mail server hosted on the company’s intranet. DNS resolution normally works by first attempting to resolve a domain name via the external DNS server; if an IP is resolved, that IP is used. If the external DNS server returns NXDOMAIN, then the internal DNS server within the VPN is queried. If the internal DNS server returns an IP, that is used; otherwise it too returns NXDOMAIN. This poses a problem for an employee using a split-tunnel VPN to access his/her company’s internal mail server. When the employee’s mail client attempts to resolve a domain name that exists only on the VPN’s DNS server, it first queries the external DNS server via the ISP. That query would normally return NXDOMAIN, pointing the mail client to the internal DNS server via the VPN, which would return the correct IP of the mail server. Instead, anyone on a Comcast connection would have the external query resolve to the IP of Comcast’s hijacked page, timing out the mail client.
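The lookup order above, and how the hijack breaks it, can be simulated with stand-in resolvers. The dictionaries, domain name, and addresses below are illustrative, with a missing key modeling an NXDOMAIN answer:

```python
# Simulation of split-tunnel DNS fallback. "198.51.100.7" stands in for
# the address of Comcast's ad/search page; the real address differs.
def resolve(domain, external, internal):
    ip = external.get(domain)      # the ISP's DNS is always asked first
    if ip is not None:
        return ip                  # any external answer wins, right or wrong
    return internal.get(domain)    # only on NXDOMAIN is the VPN's DNS queried

honest_isp = {}                                        # NXDOMAIN for intranet names
hijacking_isp = {"mail.corp.example": "198.51.100.7"}  # every miss "resolves" to the ad page
vpn_dns = {"mail.corp.example": "10.0.0.5"}

print(resolve("mail.corp.example", honest_isp, vpn_dns))     # 10.0.0.5, mail works
print(resolve("mail.corp.example", hijacking_isp, vpn_dns))  # ad page IP, mail client hangs
```

With an honest external resolver the fallback reaches the VPN’s answer; with the hijack in place, the external ad-page answer shadows the internal record and the mail client never gets the real IP.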

Comcast does offer a way to opt out of the service by using your modem’s MAC address and your customer email, but it is still an obnoxious process. Some people may enjoy Comcast pointing them in the right direction, but many do not. And while DNS hijacking may not be new to users of the alternative DNS service OpenDNS, it sure is new to Comcast users.

After looking around a bit, I decided to take a more in-depth look at the page that everyone is being redirected to, and noticed a few serious security threats. I have contacted Comcast and made them aware of these threats.

The first thing I noticed was that the search page was vulnerable to an XSS exploit via the GET variable “url”.
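This class of bug is easy to demonstrate, and to fix, in a few lines of Python. The page template below is hypothetical; only the “url” parameter name comes from the actual page:

```python
import html

def render_unsafe(url):
    """Reflects the GET parameter into the page verbatim (the Comcast bug)."""
    return f"<p>No results for {url}</p>"

def render_safe(url):
    """Escapes the parameter so injected markup is rendered inert."""
    return f"<p>No results for {html.escape(url)}</p>"

payload = "<script>alert(1)</script>"
print(render_unsafe(payload))  # script tag lands in the page intact
print(render_safe(payload))    # &lt;script&gt;... displayed as text
```

Any page that echoes a request parameter without escaping it this way will execute attacker-supplied markup in the visitor’s browser, under the trusted host’s origin.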

I consider these XSS vulnerabilities to be quite serious considering the fact that the host is that of a trusted ISP and one of the servers is SSL certified. All it takes is a spoofed email address and some creativity to take full advantage of this vulnerability and threaten the privacy of unknowing Comcast customers.

After stumbling across this vulnerability I decided to dig a little deeper and ended up finding a few more threats. Their Apache version is outdated, leaving the server open to moderate security issues patched in newer versions, as well as the infamous Apache mod_rewrite off-by-one buffer overflow vulnerability, which allows memory corruption on the server. I also noticed that they left Apache’s server-status page enabled. Intentional or not, keeping this enabled gives malicious users more information than they should have about your server, including access to a real-time feed of every request made to the server by what looks to be Comcast customers everywhere.

Enabling the server-status page in Apache is great for debugging and testing, but not so great for an ISP’s server handling every customer’s search queries and misspelled domain names.

And the last thing I stumbled across was a backup of a PHP file stored on their server resulting in source code disclosure. For legal reasons I will not post the URL to the backup file, or the Server-Status page until Comcast has fixed these issues.

I am still very happy with my Comcast cable, but a little disappointed by their lack of security. Server side or not, they are an ISP and should take more precautions to protect their customers from such threats.

*UPDATE: Thanks to a prompt response from Giedrius Trumpickas, a principal engineer at Comcast, I have been notified that the XSS vulnerability located at https://login.comcast.net has been fixed.