Wednesday, January 31, 2007

To perform some Intranet Hacking we need the web browser's internal NAT'ed IP address (i.e. 192.168.xxx.xxx). While not the most elegant solution, Java Applets (MyAddress) were the only real way to go. It turns out JavaScript can invoke Java classes directly (Firefox), including java.net.Socket, and achieve the same results. No Applet is required, making the proof-of-concept code a lot easier.

Tuesday, January 30, 2007

Update 02/02/2007: Mike Rothman (Pragmatic CSO) posted a simple way to explain the differences and also provides further insights. "Assessments give you an idea about all the POTENTIAL holes. Pen tests prove whether the holes are in fact actionable."

Security Assessments and Penetration Tests are infosec industry terms commonly and erroneously used interchangeably. This causes confusion for business owners who are trying to figure out what solution they need to protect their Web businesses. For starters, security assessments are thorough evaluations of a website to validate its security posture and/or detect ALL the possible weaknesses. Penetration tests simulate a controlled (internal/external) bad-guy website break-in with the goal of achieving a certain level of system/data access. Both methods are acceptable and add a lot of benefit if implemented properly and at the right time.

Security Assessments
There are a number of methodologies for performing website security assessments, including black box vulnerability assessments, source code reviews, threat modeling, configuration audits, etc., and some engagements may use combinations. Security assessments are invaluable for understanding what you own and your current security posture. This information is helpful in making educated decisions and applying the appropriate resources that’ll make the most meaningful impact.

Penetration Tests
A pen-test team’s job is to break into a website, using whatever parameters they’ve been given, and gain access to the designated data they shouldn’t be able to obtain. They’ll exploit whatever vulnerabilities they need, but they’re NOT responsible for finding all the issues. The benefit is understanding how resilient your website is to determined attackers. At the end you should have interesting results and purpose-built exploit code examples that tell a compelling story.

Expectations
The trick to choosing between the two is really understanding your business needs, requirements, and the value of what you’re protecting. If there is any resource to help you do that, it’s Pragmatic CSO. For those new to web application security, assessments are the way to go. Statistically most websites are known to be insecure, so a pen-test isn’t going to be of much value at the start and you’d be better served by something more comprehensive. The second trick is properly setting the scope between you and the vendor, which may include IP ranges, hostnames, level of testing depth, time frame, frequency, costs, reporting, solutions, re-testing, etc.

Evolution
The rate of web application code change remains unrelenting, and still only a relatively small percentage of websites are in fact professionally tested for security. As anyone can imagine, this drastically increases the likelihood of security vulnerabilities and eventually leads to compromise. Good news for criminals, bad news for customers and website owners. Thankfully there’s been a marked improvement in widely disseminated knowledge and a larger awareness of web application security issues. Web application security is no longer a dark and mysterious art known only to a select few insiders. Novices, with no more skill beyond their web browser, now easily master powerful tricks-of-the-trade from readily available books and whitepapers.

Two things that stand out in my mind:

1) Security assessment methodology has increased from a few thousand unique tests to tens-of-thousands on the average website.
2) The technical skills required to perform a good security assessment have actually increased rather than diminished!

Conclusion: Experience Counts
To comprehensively assess the security of a website, a tester must be adept at the 24 known classes of attack (WASC Threat Classification). Additionally, they need to be comfortable applying potentially hundreds of attack combinations described in scattered books and research papers. This type of expertise is developed over time, through exposure to hundreds of assessments and practice exploiting real-world websites. You want someone who understands your business and provides value in those terms.

Qualified security testers need the necessary skills to recommend appropriate solutions in a variety of situations. Every website’s security requirements are different, and the particular circumstances must be taken into consideration. Identical vulnerabilities may be resolved in any number of acceptable ways. The tester’s job is to find the right combination of solutions to effectively mitigate risk. Otherwise you may end up with a time-consuming and expensive false sense of security.

This question is asked regularly with respect to solutions for Cross-Site Scripting (XSS). The answer is that input validation and output filtering are two different approaches that solve two different sets of problems, XSS included. Both methods should be used whenever possible. However, this answer deserves further explanation.

Input Validation (aka: sanity checking, input filtering, white listing, etc.)
Input validation is one of those things ranted about incessantly in web application security, and for good reason. If input validation were done properly and religiously throughout all web application code, we’d wipe out a huge percentage of vulnerabilities, XSS and SQL Injection included. I’m also a believer that developers shouldn’t have to be experts in all the crazy attacks potentially thrown at a website. There’s simply too much to learn, and their primary job should be writing new code, not becoming web application hackers. Developers should only have to concern themselves with the solutions required to mitigate any attack, no matter what it might be. This is where input validation comes into play.

Input validation should be performed on any incoming data that is not heavily controlled and trusted. This includes user-supplied data (query data, post data, cookies, referers, etc.), data in YOUR database, data from a third party (web service), or elsewhere. Here are the steps that should be performed before any incoming data is used:

Normalize
URL/UTF-7/Unicode/US-ASCII/etc. decode the incoming data.

Character-set checking
Ensure the data only contains characters you expect to receive. The more restrictive the rules are, the better.

Length restrictions (min/max)
Ensure the data falls within a restricted minimum and maximum number of bytes. This limits the window of opportunity for attacks, as exploits tend to require lengthy input strings.

Data format
Ensure the structure of the data is consistent with what is expected. Phone numbers should look like phone numbers, email addresses should look like email addresses, etc.
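Stitched together, the steps above might be sketched like this (the function and its rules are illustrative only, not production code):

```javascript
// Minimal sketch of the four validation steps above; the function name
// and the phone-number rules are illustrative, not from any framework.
function validatePhone(raw) {
  // 1) Normalize: URL-decode the incoming data
  const data = decodeURIComponent(raw);

  // 2) Character-set checking: digits and dashes only
  if (!/^[0-9-]+$/.test(data)) return null;

  // 3) Length restrictions: between 7 and 12 bytes
  if (data.length < 7 || data.length > 12) return null;

  // 4) Data format: phone numbers should look like phone numbers
  if (!/^\d{3}-\d{3}-\d{4}$/.test(data)) return null;

  return data; // safe to hand to the rest of the application
}

console.log(validatePhone("555-867-5309"));              // "555-867-5309"
console.log(validatePhone("<script>alert(1)</script>")); // null
```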

Regular expression examples with iteratively more restrictive security: (These are just samples, not recommended for production use)
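The original sample expressions aren't reproduced here; the following stand-ins illustrate the same idea, a rule for a "username" field made iteratively more restrictive, and like the originals they are not recommended for production use:

```javascript
// Stand-in samples (not the originals): each pattern for a "username"
// field is stricter than the one before it.
const loose     = /^[\x20-\x7e]+$/;          // any printable ASCII
const tighter   = /^[a-zA-Z0-9_.-]+$/;       // alphanumerics plus _ . -
const strictest = /^[a-z][a-z0-9_]{3,15}$/;  // lowercase first char, 4-16 chars

const input = "jeremiah_g";
console.log(loose.test(input), tighter.test(input), strictest.test(input));
// true true true

const evil = "<script>";
console.log(loose.test(evil), tighter.test(evil), strictest.test(evil));
// true false false -- only the stricter rules reject it
```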

Implementation
For a variety of reasons, input validation has proved time-consuming, prone to mistakes, and easy to forget about. The best approach is to define all the expected application data-types (account IDs, email addresses, usernames, etc.), abstract them into reusable objects, and make them easily available from inside the development framework. Input validation is then all handled behind the scenes; there's no need to parse URLs or remember to apply all the relevant business logic rules. The benefit to this approach is that security becomes consistent and predictable. Plus, developers are assisted in creating software at a faster rate. Security and business goals are in alignment, which is exactly the place you want to be.
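A sketch of what such an abstracted, reusable data-type might look like (the class names and rules here are illustrative, not from any real framework):

```javascript
// Hypothetical reusable data-type: all normalization and validation
// lives in one place, behind the constructor.
class ValidationError extends Error {}

class EmailAddress {
  constructor(raw) {
    const value = String(raw).trim().toLowerCase();  // normalize
    if (value.length < 6 || value.length > 254) {    // length restrictions
      throw new ValidationError("email length out of range");
    }
    if (!/^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$/.test(value)) { // format
      throw new ValidationError("not a well-formed email address");
    }
    this.value = value;
  }
  toString() { return this.value; }
}

// Developers never parse or validate by hand; they just construct the type:
const email = new EmailAddress("  Alice@Example.COM ");
console.log(email.toString()); // "alice@example.com"
```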

For example, let’s say you’re in an object-oriented environment working with a product purchase process:

URL: http://website/purchase.cgi

Post Data: product=100&quantity=4&cc=4444333322221111&exp=01/08

// Check if the user is properly logged-in and their account is active
if (user.isActive) {

    // make sure the product is available in the requested quantity
    if (req.product.isAvailable) {

        // make sure the credit card is valid for the purchase total
        if (req.creditcard.isValid(total)) {

            // initiate the transaction
            processOrder(user, req.product, req.qty, total, req.creditcard);

        } else {
            // inform the user that their credit card was not accepted with a
            // consistent message and also log the error to the central database
            requestFailed(req.creditcard.error);
        }

    } else {
        // inform the user that the item is not available with a consistent
        // message and also log the error to the central database
        requestFailed(req.product.error);
    }

} else {
    // inform the user that they are not properly logged-in with a consistent
    // message and also log the error to the central database
    requestFailed(user.error);
}

Notice in the example code there is no input validation, no direct database calls, and no implicit strings. Everything is handled behind the scenes by the objects and methods. This makes mistakes less likely to occur and is extremely helpful in preventing a wide variety of attacks, including XSS, SQL Injection, and more.

Output Filtering
When you get right down to it, XSS happens on output, when the unfiltered data hits the user's (victim's) web browser. Plus, untrusted data may originate from a variety of locations, including your own database. As a developer you’re never really certain whether someone else has done their job, or has instead placed potentially malicious data in the DB. Better to play it safe when printing to screen.

Control the output encoding
Don’t let the web browser guess at a web page's content encoding. Browsers are known for making mistakes that could lead to strange XSS variants. There are two ways to set the encoding: the response header and meta tags. It's best to use both methods to make certain the browser gets it right.
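Concretely, the two methods look like this. The snippet below only builds the two declarations as strings so it can run anywhere; in a real application the header would be sent by the server and the meta tag printed into the page's head:

```javascript
// The two standard charset declarations, built as strings for illustration.
const charset = "UTF-8";

// Method 1: the HTTP response header
const contentTypeHeader = "Content-Type: text/html; charset=" + charset;

// Method 2: the meta tag inside the page's <head>
const metaTag =
  '<meta http-equiv="Content-Type" content="text/html; charset=' + charset + '">';

// Use both, so the browser never has to guess:
console.log(contentTypeHeader);
console.log(metaTag);
```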

Removing HTML/JavaScript
Many languages and frameworks have their own methods to convert special characters into their equivalent HTML entities, and it's probably best to use one of those. If not, here is a Perl regex snippet that can be used or ported. I welcome anyone to comment on libraries they like; I'm not familiar and up-to-date with all of them. As with input validation, it's best to abstract this layer and make it second nature for developers.
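The Perl snippet isn't reproduced here, but a JavaScript port of the same idea, converting the HTML-significant characters into entities before printing, might look like:

```javascript
// Convert the HTML-significant characters into entities so user-supplied
// data is displayed as text and never parsed as markup or script.
function escapeHTML(data) {
  return String(data)
    .replace(/&/g, "&amp;")   // must be first, or it re-escapes the others
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHTML('<script>alert("xss")</script>'));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```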

Wednesday, January 24, 2007

I recently did a "Picking Brains With... Jeremiah Grossman" interview with Ronald van den Heetkamp of Jungsonn Studios (who goes by the name Jungsonn). "Picking Brains With..." is a collection of interviews Jungsonn is starting to put together with various experts in the industry. He asks them a series of infosec-related questions, and zip zap, you're done. I thought it would be fun, so I figured, why not!

This week I appeared on a Podcast (my first ever) with Alan Shimel and Mitchell Ashley of StillSecure, After All These Years. Let me tell you, these guys rock! Alan and Mitchell are a lot of fun, simply hilarious, and they know what they’re talking about too! It makes it hard to believe these guys are honest ta'goodness infosec experts. They asked for my thoughts on the web application security industry, specifically vulnerability assessment. I had a great time and hopefully I’ll get to do it again in the future. In the meantime I might have to go back through their audio archive and see what they've done in the past.

Monday, January 22, 2007

RSnake leaked the other day that he's been having a positive dialog with MS about browser security. He said MS seems genuinely interested in adding more anti-XSS features into IE7. All I've got to say is WOW! FINALLY someone is listening. Thank you! Few people know the issues as well as he does. Here's the thing I worry about… if MS does a good job of helping users protect themselves against XSS, I might have to switch browsers and recommend people do the same. That’s a lot of crow to eat. :) Hey Mozilla/Firefox developers, a little help here!

*The following code and concepts should be considered highly experimental and should be considered a work in progress. Not to be used for production websites.*

Web Worms (like Samy) targeting social networking websites (like MySpace) typically involve combining two attacks, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). An attacker posts JavaScript Malware (the Web Worm) to their user profile web page because the website allows user-supplied HTML and the input filters didn’t catch the offending code. When a logged-in user visits the infected profile web page, their browser is hijacked (user XSS’ed) to “friend the attacker” and post a copy of the Web Worm code to their own profile (user CSRF’ed), causing self-replication. There is nothing “Cross-Site” about the forged requests, as CSRF implies, but that conversation is for another time.

As was the case for MySpace and many other websites, important features such as posts to a user profile and friend’ing users are protected from CSRF using session tokens embedded in URLs or HTML forms. Requests aren’t valid without a token. To defeat the CSRF solution, Web Worms first request a third-party page (on the same domain) to get a valid token and use it as part of a forged request. Since the attack is on the same domain, access to session tokens can be easily achieved. This is why many people, including myself, have believed that CSRF solutions can be defeated when an XSS vulnerability exists on that domain. However, there may be something we can do.

Without using browser exploits, JavaScript only has a few ways to access HTML data from another page on the same domain. Generally speaking, if we can prevent JavaScript on an XSS’ed web page from being able to read in session tokens from other pages, we might have something worth pursuing.

XMLHttpRequest
window.open
IFrame

If we can remove access to these APIs, we may be able to prevent, or at least make it harder for, JavaScript Malware to bypass CSRF security. Enter prototype hijacking. The following proof-of-concept code effectively does this when called first at the top of the web page.

The PoC code is Firefox ONLY! Should this method be found workable, we can port examples to other browsers, namely Internet Explorer.

1) XMLHttpRequest

The Samy Worm used this method. The following function overwrites the XMLHttpRequest constructor so it has no functionality of any kind.
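The original PoC isn't reproduced here, but the idea can be sketched as follows. A stand-in `window` object is used so the snippet is self-contained; in a real page this is the browser's own global:

```javascript
// Stand-in for the browser global object so the sketch is self-contained;
// in a real web page this would be the window object itself.
const window = globalThis.window || {};

// Pretend the browser supplied a working XMLHttpRequest...
window.XMLHttpRequest = function () {
  this.open = function () {};
  this.send = function () {};
};

// ...then, first thing on the page, overwrite the constructor with one
// that builds an inert object: no open(), no send(), no functionality.
window.XMLHttpRequest = function () {};

// Any JavaScript Malware loaded later gets nothing useful back:
const xhr = new window.XMLHttpRequest();
console.log(typeof xhr.open, typeof xhr.send); // "undefined" "undefined"
```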

Attacks always get better, never worse. 2006 was a significant year for website hacking. “Hack” is the term used loosely to describe some of the more creative, useful, and interesting techniques / discoveries / compromises.

In this Webcast, WhiteHat Security founder and CTO Jeremiah Grossman will look back on what was discovered – he’s collected as many of the new 2006 web hacks as could be found and narrowed the list to the Top 10. With issues ranging from XSS to confusion over AJAX and JavaScript vulnerabilities, and more, it’s sure to be an informative discussion.

* Reveal the top 10 attacks of 2006 by creativity and scope
* Predict what these attacks mean for website vulnerability management in 2007
* Present strategies to protect your corporate websites

Thursday, January 18, 2007

Dr. Nick: "With my new diet, you can eat as much as you want, any time you want!"
Marge: "And you'll lose weight?"
Dr. Nick: "You might! It's a free country!"
- Dr. Nick Riviera (The Simpsons)

A common approach to vulnerability assessment (VA) is going after the so-called “low-hanging fruit" (LHF). The idea is to remove the easy stuff, making break-ins more challenging without investing a lot of work and expense. Nothing wrong with that, except eliminating the low-hanging fruit doesn't really do much for website security. In network security the LHF/VA strategy can help because that layer endures millions of automated and untargeted attacks using “well-known” vulnerabilities. Malicious attacks on websites are targeted, using one-off zero-day vulnerabilities, and carried out by real live adversaries.

Let’s say a website has 20 Cross-Site Scripting (XSS) vulnerabilities, 18 of which are classifiable as LHF. Completing a LHF/VA process to eliminate these might take a week to a month or more depending on the website. By eliminating 90% of the total issues, how much longer might it take a bad guy to identify one of the two remaining XSS issues they need to hack the site? An hour? A few? A day? Perhaps a week if you’re really lucky. A recent thread on sla.ckers.org offered a perfect illustration.

Someone said vulnerabilities in Neopets, a popular social network gaming site for virtual pets, were hard to come by. The first question was: who cares about Neopets? The answer: it has millions of players and currency with monetary value. Through my browser I could almost hear the keyboards as the members raced to be the first. A dozen posts and 24 hours later, an XSS disclosure hit. I didn’t bother confirming. sla.ckers.org has already generated over 1,000 similar disclosures, including several in MySpace, so it wouldn’t be out of the ordinary. However, these are also not the guys we need to be worrying about.

The real bad guys are after the money. They have all day, all night, every day, weekends and holidays to target any website they want. In the above example, we were just talking about some silly gaming site. What if the target was something more compelling? Think the real bad guys will be so nice as to publish their results? They’d bang on a system 24x7 until they got what they wanted and happily be on their way. Reportedly the group that hacked T-Mobile and Paris Hilton’s cell spent more than a year targeting the system.

The point I’m trying to make is that if you’re going to spend weeks and months finding and fixing vulnerabilities, make sure the end result protects you for more than a lucky week. Sure, going after LHF is better than nothing, but if you’re a professional responsible for security, that’s the last thing you want to tell your boss your VA strategy is based on. The strategy you want is comprehensiveness. Push the bad guys away for months, years, or hopefully forever.

Tuesday, January 16, 2007

What is Cross Site Request Forgery?"Cross Site Request Forgery (also known as XSRF, CSRF, and Cross Site Reference Forgery) works by exploiting the trust that a site has for the user. Site tasks are usually linked to specific urls (Example: http://site/stocks?buy=100&stock=ebay) allowing specific actions to be performed when requested. If a user is logged into the site and an attacker tricks their browser into making a request to one of these task urls, then the task is performed and logged as the logged in user. Typically you'll use Cross Site Scripting to embed an IMG tag or other HTML/JavaScript code to request a specific 'task url' which gets executed without the users knowledge. These sorts of attacks are fairly difficult to detect, potentially leaving a user debating with the website/company as to whether or not the stocks bought the day before were initiated by the user after the price plummeted."
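As a sketch of the quoted definition: the "task url" is the example URL above, and the forged request is typically fired from an attacker's injected JavaScript as an image fetch, which the browser sends along with the victim's session cookies.

```javascript
// The 'task url' from the definition above.
const taskUrl = "http://site/stocks?buy=100&stock=ebay";

// In a browser, this one line issues the forged GET silently:
//   new Image().src = taskUrl;
// The response is ignored; the side effect (the purchase) is the attack.
console.log(taskUrl);
```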

Thursday, January 11, 2007

If you're going to RSA and want to meet up with the webappsec crowd, here's your chance.

This year's RSA Conference is being held at the San Francisco Moscone Center (February 5 – 9), and every year for the past couple of years we’ve coordinated an informal WASC Meet-Up. Usually about 20 or so people in the web application security community show up to have some fun sharing drinks, appetizers, conversation, and a few laughs. It’s a great opportunity to see people we only otherwise communicate with virtually. Everyone is welcome, and please drop me a note if you plan on coming:

Wednesday, January 10, 2007

Recently I’ve been discussing how vulnerability discovery is more important than disclosure, and also how website owners are going to have to deal with disclosure whether they like it or not. Scott Berinato (CSO) just posted The Chilling Effect, a very well-written article describing the current web security environment and where we’re heading. Definitely worth the read, and RSnake has posted his comments.

From the experts:

Dr. Pascal Meunier (Professor, Purdue University)“He ceased using disclosure as a teaching opportunity as well. Meunier wrote a five-point don't-ask-don't-tell plan he intended to give to cs390s students at the beginning of each semester. If they found a Web vulnerability, no matter how serious or threatening, Meunier wrote, he didn't want to hear about it.”

RSnake (ha.ckers.org and sla.ckers.org)“RSnake doesn't think responsible disclosure, even if it were somehow developed for Web vulnerabilities (and we've already seen how hard that will be, technically), can work.”

Jeremiah Grossman (CTO, WhiteHat Security)"Logistically, there's no way to disclose this stuff to all the interested parties," Grossman says. "I used to think it was my moral professional duty to report every vulnerability, but it would take up my whole day."

Jennifer Granick (Stanford's Center for Internet and Society)“Granick would like to see a rule established that states it's not illegal to report truthful information about a website vulnerability, when that information is gleaned from taking the steps necessary to find the vulnerability, in other words, benevolently exploiting it.”

Tuesday, January 09, 2007

Normally I don’t post shameless company promotions on my blog, but this one is different. I thought people might find it interesting to follow the results. Commercial web application scanner vendors (Cenzic, SPI Dynamics, Watchfire, etc.) and service providers like myself from WhiteHat Security go back and forth with claims about what scanning technology can and can’t find. I say scanning is only capable of testing for about half of the issues (technical vulns); they claim they find logical flaws. Who’s right? It's time to find out.

Enterprises are incentivized to select the best solution to find exactly where the vulnerabilities are. That’s where the focus should be. We all loathe reading lame paid-for 4-star reviews and bogus magazine awards. It’s 2007, and I say it's time to let the results speak for themselves. The hard part about measuring results is you never really know the total number of vulnerabilities present in custom web applications, and demo sites are a poor baseline for measurement. The best results are gathered using real websites when solutions go head-to-head, but obviously you just can't go out and pen-test any website you feel like.

As it happens, a large portion of our Sentinel customers, with some of the largest and most popular websites in the world, previously purchased commercial scanners. They said the tools were complex, reported too many false positives, or that the assessments were faster to do by hand. (Survey results back this up.) It's not that the tools don’t work. They’re sophisticated, but they ended up not being the right solution for the job. Unfortunately many others in a similar situation are hesitant to try something new for fear of throwing good money after bad. Worse still, their websites remain unprotected, and head-to-head comparisons between competing solutions remain few and far between.

Our results are better, but I'm not here asking people to take my word for it. I have something else in mind. Here's the deal: if someone previously purchased a commercial scanner and ended up not using it, not liking it, or is curious about alternatives, they can receive up to a $30,000 credit towards an annual Sentinel subscription. Completely risk-free. They'll see our results first hand on their website for comparison against their current scanner reports. (Full details) The enterprise gets to decide what can and can’t be scanned for. Win, lose, or draw; good, bad, or otherwise - we're all going to learn something.

The challenge of automated web application vulnerability scanning is a subject of frequent debate. Most websites have vulnerabilities, a lot of them, and we need help finding them quickly and efficiently. The point of contention revolves around what scanners are able to find, or not. Let's clear something up: scanners don't suck (well, some do), but that's not the point I'm making. My business actually relies upon leveraging our own vulnerability scanning technology. What I’m describing is setting proper expectations of what scanners are currently capable of and how that affects the assessment process.

"The OWASP Top Ten is a list of the most critical web application security flaws – a list also often used as a minimum standard for web application vulnerability assessment (VA) and compliance. There is an ongoing industry dialog about the possibility of identifying the OWASP Top Ten in a purely automated fashion (scanning). People frequently ask what can and can’t be found using either white box or black box scanners. This is important because a single missed vulnerability, or more accurately exploited vulnerability, can cause an organization significant financial harm. Proper expectations must be set when it comes to the various vulnerability assessment solutions."

Separate from the Universal XSS, two main attacks are described: XSS Prototype Hijacking and Auto Injecting Cross Domain Scripting (AICS). Both attacks target websites using Ajax and assume the victim has already been XSS’ed. Or, as I like to put it, infected with JavaScript Malware. The rest is about the payload that comes after that point and what a bad guy could do. And speaking of Ajax, I’ve already published my view on the subject, so here’s what the authors have to say:

“Applications based upon Ajax are affected by the same problems of any other web application, but usually are more complex because of their asynchronous nature.”

Fair enough. Not sure I agree, but it’s not a vital topic for our purposes here. Let's move on.

XSS Prototype Hijacking
To leverage the example from the paper: the victim has been XSS’ed during a visit to a web bank that uses Ajax (XMLHttpRequest) for funds transfers. The JavaScript Malware overwrites the XMLHttpRequest object, which allows the attacker to transparently intercept and modify HTTP requests/responses to the website. Sort of like an active sniffer in the browser DOM. The attacker could then initiate fraudulent transfers without the user’s knowledge. Interesting idea.
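A minimal sketch of the interception idea (this is not the authors' code; the real attack wraps the browser's native XMLHttpRequest, which is simulated here with a stand-in so the snippet runs anywhere):

```javascript
// Stand-in "native" XMLHttpRequest so the sketch is self-contained;
// in the real attack this would be the browser's own implementation.
function NativeXHR() {
  this.open = function (method, url) { this.url = url; };
  this.send = function (body) { this.sent = body; };
}

// Prototype hijacking: keep a private reference to the real constructor,
// then replace it with a wrapper that silently rewrites requests in flight.
const RealXHR = NativeXHR;
function HijackedXHR() {
  const inner = new RealXHR();
  this.open = function (method, url) { inner.open(method, url); };
  this.send = function (body) {
    // The "active sniffer": redirect the funds transfer to the attacker.
    inner.send(String(body).replace(/dest=\d+/, "dest=6666 (attacker)"));
  };
  this.inner = inner; // exposed here only so the sketch can be inspected
}

// The page's own Ajax code thinks it is making a normal transfer:
const xhr = new HijackedXHR();
xhr.open("POST", "/transfer");
xhr.send("amount=100&dest=12345");
console.log(xhr.inner.sent); // "amount=100&dest=6666 (attacker)"
```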

Aside from not having seen any web bank using Ajax, the attack could still be plausible in other situations. The question I had was: why would an attacker need to do this? Wouldn’t it be simpler to phish them with a login/password DOM overlay (Phishing with Superbait) or something similar? Then I read this and thought a little differently:

“In this case, the attack is totally independent from any authentication system used such as One Time Passwords or RSA tokens.”

That’s a good point. Sometimes stealing credentials is useless, and the attacker might need to modify the request/response data in real time. It really depends on what they’re trying to achieve and on what site. While the act of overwriting JavaScript objects is not exactly new (my Gmail contact list hack was based on this technique), what these guys did was take it to the next level. This is one more technique that could come in handy down the road.

Auto Injecting Cross Domain Scripting (AICS)
“…an attacker could get total control over a website (which has a XSS vulnerability in it) by simply controlling an inner frame. If a browser is vulnerable to HRS this technique could be applied in a cross domain context every time a user opens a new page or exits from the browser, by injecting a new HRS. So even if a website is not vulnerable to XSS, it could be controlled.”

That’s quite a claim! The attack has two more requirements beyond XSS: the victim must be using a forward proxy and a browser vulnerable to HTTP Response Splitting/Smuggling. I don’t know how common this scenario is, but let’s go with it anyway. If you recall Anton Rager’s XSS Proxy, and my version based off that design, you’ll remember we achieved persistent control over a victim’s browser by using an invisible full-screen iframe. Whenever the victim clicked on the website, it was within the iframe, and we could monitor their activity. The limiting factor was that if the victim traveled to another domain, the thread of control was lost due to the same-origin policy.

Stefano and Giorgio said you could overcome the limitation by priming the browser with a Splitting attack initiated by XMLHttpRequest.

If the Splitting attack is successful, the victim’s proxy will see two requests and subsequently send back two responses, the second one being laced with JavaScript Malware from the evil website:

Response 1: http://www.evil.site/2.html: foo

Response 1_2: http://www.evil.site/3.html:

alert("DEFACEMENT and XSS: your cookie is " + document.cookie)

Here’s where the magic happens. From the (vulnerable) browser's perspective, only one request has been sent, so the second response is queued up waiting. If the victim were to then visit http://webbank.com/, they’d be served the second response and not the page from the real website. Ouch! Going back to the XSS Proxy limitation from the beginning: before the victim clicks to go off-domain, a Splitting request is primed, waiting to serve up more JavaScript Malware from the evil website. And like the description said, the next website doesn’t necessarily need to be vulnerable to XSS. Clever!

Honestly I don’t know if this attack works, or how well, though I assume it does if you have the properly vulnerable set of software. I don’t see a reason why it wouldn’t.

Conclusion
Stefano and Giorgio deserve a lot of credit for their discoveries, and I hope they keep at it. Personally I find this kind of cutting-edge web attack research fascinating, no matter how (im)plausible it might end up being. The fact is you never know when someone else might see something you don’t. For myself, it’s one of the coolest things when others find ways to improve upon my past work. That’s why I try to release as many little hacks as I can, no matter how strange.

At the end of the day, this paper won't force us to do anything else on the web server: find and fix your cross-site scripting vulnerabilities. It does illustrate yet another reason why we need more browser security enhancements.

Update 01/18/2007
The results are in and the people have spoken! Our goal was to capture the “thoughts” of the crowd, and boy did it ever! The 59 respondents (4 fewer than Dec 06) shared their battleground views of web application security, and in doing so presented interesting perspectives and great insights into a larger world. There is a huge amount of data inside, and I couldn’t be more pleased with the results. We also unexpectedly created a database of the most popular vulnerability assessment tools and knowledge resources. Thank you to everyone who took the time to submit.

My Observations

Most already predicted 2007 as the year of XSS, CSRF, and Web Worms. The survey validates this message. (Q5) (Q13) (Q15) Virtually all those who are webappsec savvy say the vast majority of websites have serious security vulnerabilities (Q6), web browser security sucks (Q10), and believe most security professionals don’t realize it or understand why (Q4). Clearly we have some challenges ahead.

An uncanny number of people identically answered “My Brain” as their top tool for finding vulnerabilities (Q12). Unsurprisingly, the open source proxies are among the most popular software in the webappsec arsenal. And we have some real characters in the crowd, that’s for sure. (Q14)

Half of the web application security community has a background in software development (me too) and the other half in IT/NetworkSec (Q3). This is an interesting pairing, as traditionally these two groups never really had cause to communicate with one another, let alone were forced to work together. Such is the state of web application security. This is probably an unpopular opinion, but software developers seem to have a hard time respecting any solution beyond the code, while IT/NetworkSec types understand that the best results come from a collection of risk-mitigating solutions. We all should try to keep an open mind when new concepts and ideas arrive.

There is a split in how people view the impact or involvement of Ajax technology on website security (Q8). Half say Ajax opens some new attack vectors; the other half says it increases the attack surface. I’m of the opinion the Ajax In-Security discussion has more to do with a semantic debate than a misunderstanding of the technology. This is fine when we speak amongst ourselves in the webappsec community, but it causes a lot of confusion for those outside the circle seeking education.

People are cautiously optimistic about web application firewalls (Q9). Stopping attacks without having to fix the code certainly has its allure, to say nothing of the prospects of defense-in-depth. There are many out there, though, who’ve been soured by bad experiences with crappy and/or older WAFs.

It’s official: RSnake’s blog is the most popular place among the webappsec crowd (Q11). *applause* He’ll be making an appearance at the WASC meet-up during RSA to sign autographs. *grin*

Description

This monthly survey has become a really fun project. It's receiving great reviews, and right when you think you know something, the answers to a couple of questions reveal something unexpected. That's what we're really going for here: exposing various aspects of web application security we previously didn't know, understand, or fully appreciate. From the last survey, people said they really enjoyed the "thoughts" from the crowd in the bonus question. We'll try to capture more of those this time around.

As always, the more people who submit data, the better the information will be. Please feel free to forward this email along to anyone that might not have seen it.

Guidelines

Survey is open to anyone working in or around the web application security field

Answer the questions in-line and if a question doesn’t apply to you, leave it blank

Comments in relation to any question are welcome. If they are good, they may be published

a) All or almost all (0%)
b) Most (19%)
c) About half (26%)
d) Some (49%)
e) None or very few (7%)

"In my experience the traditional "network security" guys still see WebAppSec as primarily a pet project or an oddity, at least in the government space. For example, I am the only full-time app sec engineer for an agency of about 8,000. If I had a staff of 10 we would still keep busy."

"Most get web, since that's half of what the internet is (the other half is email). It's everything beyond the web that they don't get."

5) What are your thoughts about the Universal XSS vulnerability in Adobe’s Acrobat Reader Plugin?

a) Really bad (53%)
b) Bad (37%)
c) Never heard of it (2%)
d) Nothing new here, move along (5%)
e) Please stop with the FUD! (2%)

"It is certainly bad, especially the bit with local file system access. But like all XSS vulnerabilities, while they have a high exploitability potential, they are not that difficult to individually remediate (systematically is a whole different ball of wax)."

"What is really bad is the recommendations going around about how to fix this. Telling people to "upgrade their Acrobat" isn't the way to protect users NOW. Additionally, half the recommendations don't even account for the fact the information after a # sign in a URL is NOT sent back to the server, so the server side URL filtering jive isn't going to fix anything when it boils down to PDFs. In fact, this vulnerability is similar to just entering in "javascript:alert('xss')" in the address field of the browser -- the script runs, but it obviously would not be mitigated by running server side validation. The best approach I've seen is to change the MIME type sent back from the server from "pdf/application" to "pdf/octet" and also to add a "Content-Disposition: Attachment" header for outbound PDFs which prompts the user to download the PDF rather than running it in the browser."
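A minimal sketch of the mitigation this respondent describes, assuming a Node.js back end (the function name is hypothetical; note the standard MIME type names are actually application/pdf and application/octet-stream):

```javascript
// Build response headers that force a PDF to download instead of
// rendering in the vulnerable browser plugin. A pure function keeps
// the policy easy to test independent of any web framework.
function pdfDownloadHeaders(filename) {
  return {
    // A generic binary type keeps the Acrobat plugin from claiming the file.
    "Content-Type": "application/octet-stream",
    // "attachment" prompts the user to save rather than render inline,
    // so the fragment payload never reaches the plugin.
    "Content-Disposition": `attachment; filename="${filename}"`,
  };
}

// Example: headers a server might attach to an outbound PDF response.
console.log(pdfDownloadHeaders("report.pdf"));
```

Since the payload lives after the `#` and never reaches the server, this client-delivery change is one of the few server-side knobs that actually helps.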

"truly the best candidate for the most widespread worm to utilize"

"(I think its bad, and was a great find for 07. Really surprised it wasn't found any sooner)"

6) During your web application vulnerability assessments, how many websites DID NOT have at least one relatively severe vulnerability?

a) All or almost all (2%)
b) Most (2%)
c) About half (2%)
d) Some (19%)
e) None or very few (74%)

"I don't care, though I like to say "sea-surf," I use "XSRF" in documentation because it is more consistent with XSS."

"After much gnashing of teeth over this question I had to come down on the side of the term that I believe best describes the attack. Plus, people will never agree on whether to use "C" or "X", so why not eliminate that problem? (fwiw, I prefer XSRF over CSRF just so it's consistent with XSS)"

"...how about one click attack?"

8) Does using Ajax technology open up new website attacks?

a) Yes (9%)
b) Yes, it adds some new things (35%)
c) No, but it increases the attack surface (40%)
d) Nothing new here, move along (5%)
e) Other (9%)
No Answer (2%)

"Sorta. It makes developers more prone to make mistakes; while the "attack surface" isn't really that much bigger, the likelihood of some of the same old issues is greater. For example, I find that AJAX developers have a higher tendency to use client-side validation only, not making the realization that AJAX is just like form submission, only with a slicker front end. A POST is a POST, people!"

"People extending their websites to use AJAX do provide new potential entry points; I don't see how anyone can deny that. But a complex attack built with AJAX could also be done without it, using images to create GET requests and iframes to create POST requests."

"It's something that would otherwise be done server-side by PHP/ASP and thus closed source - so developers lose their security through obscurity."

"I think it will eventually, possibly as applications provide more features in order to be used offline and with caching features. Innovation in this area (and thus new forms of attacks) is mainly coming from startups and non-enterprise companies, most of whom do not have a process or funds for proper web application testing."

"It can increase the attack surface, but more importantly, Ajax technologies are being used to create better exploits. Focusing on whether using Ajax technologies creates new vulnerabilities is causing many people to look the wrong way when crossing the road."

"Adds more attack surface, and doesn't give you any new vuln types, but it does expose more general application logic/architecture nuances which an attacker could use to better infer inner app workings (and thus problem areas)."

"modsecurity has saved me from several stupid bugs in third-party stuff"

"Obfuscation rules all firewalls/IDS. They sure do make a lot of money for vendors though. Spend it on secure software practices and training instead."

"I think it can add a layer of defense, but certainly does not fix the problem in our testing."

"In any for-profit business, it certainly makes financial sense to use them. The depth of defense they add far outweighs the setup cost and power usage."

"Most seem to be best used only as a learning tool to help you find how they can be improved and/or how your skills at detecting such attacks can happen and how you can prevent them from happening before turning to a firewall for protection."

"for now they are not so useful. During my security audits I developed some hacker methods to evade webapp firewalls (mod_security in particular) and I plan to write an article about it. They need to be improved."

"oh look this IDS will protect me against 0hday, oh shit wait we've just lost our file server due to RPC what?"

"I'm somewhat neutral on this. I think they can add to a defense-in-depth posture, but in most cases, time and resources are better spent going back up to the application level and training developers on how to write secure code, or by performing code reviews and blackbox testing. I may recommend these more as they become more advanced and mature, but my experience is that they are not yet the best way to spend your resources."

"Up what? ;)"

"Unfortunately, many tend to assume a WAF can fix bad code, but it cannot. Also, many think WAFs alone are a sufficient countermeasure to vulnerable and badly designed web applications, and they are not."

"While browsers facilitate some web-based attacks, they really aren't to blame for a bunch of things. App developers should be responsible for their own work and not rely on browsers for their security. Anytime you "outsource" your security model you're just asking for trouble."

"is there even any cheese left anymore? all i see is holes.."

"browsers are the least secure, most dangerous software we use, due primarily to the execution of "content-controlled" code. It will be a "big deal" for computer security as a whole if we manage to secure this platform."

"Things are in a pretty bad and scary state. "the browser is the operating system" so maybe there's no way to avoid it, but the frequency and number of vulnerabilities should be much lower for such a critical piece of software."

"The amount of functionality the average user expects of their web browser has increased remarkably over the past years. As a result, vendors made it easier to extend the functionality of their products through third-party software. Unfortunately, most of these extensions have never been sufficiently reviewed from a security perspective. Being vulnerable to security issues long since fixed in the core products, they reintroduce just these and taint the security level of the entire product. As long as plugins and widespread extensions reintroduce vulnerabilities into the commonly used web browsers, and they are widely used, the security of plain web browsers does not matter much."

13) What are the Top 3 types of website attacks we're most likely to see a lot more of in 2007?
(Listed in order of popularity)

Cross-Site Scripting

Cross-Site Request Forgery

Web Worms

XSS-Phishing (Phishing w/ Superbait)

SQL Injection

Web Services / XML Injection

Ajax-based Attacks

Logical Flaws

UXSS

Unknown Issues

Denial of Service

PHP Includes

Browser Plug-in / Extension Hacking

Exponential XSS

Backdooring Media Files

"chrome" attacks

internationalization and charsets

Privilege Escalation

Authentication attacks

Session Hi-Jacking

Mobile Code (Flash/Java)

Configuration issues

14) What was your information security New Year's resolution?

Less projects, more quality

To spend more time with my wife and less time thinking about security.

Finding vulnerabilities in Firefox

Start own research in a particular web app sec area.

Get a job doing WAVA full-time (no other hats or VA work)

1024x768 ;-)

Learn to demonstrate exploits.

To say important things. If developers don't listen, it's the fault of WSJ for not defiling enough companies.

Start working in a security company?

White-list sites with JavaScript execution permissions

Blog more. (is that infosec related? Blogging on infosec topics...)

Become more involved (less of a lurker) in the web app security area.

More champagne while doing assessments!

Dang, go to defcon, toorcon again. Do more R + D, do some posting to the lists (I never do historically).

Continue the learning process.

Keep the OWASP Kansas City chapter active and involved.

Be better than I was last year.

http://www.cgisecurity.com/2006/12/07

CISSP Certification?

To give back to the field more than I extract, i.e. to come up with new ideas or improve upon existing ones more than I learn from others' research.

I didn't make one. I slept through New years as well.

Not to make any more resolutions

Research more.

The last year was very interesting in the security field, and the new 2007 year will also be interesting and hot. Only a very small number of sites have had a security audit, and every day new sites appear. So there are a lot of sites whose security is worth looking at.

Finally understand that after doing this since 1998, clients won't learn, it won't save the world and getting old makes you more cynical

All your web 2.0 are belong to us ;)

Locate at least one huge issue that wakes up the browser community out of their web application security slumber.

Work to bring more education, awareness, and training to more developers and a wider audience than our standard security community.

Never do blacklisting; whitelisting saves lives

Educate developers

Be up to date and do more research.

Build more tools and share knowledge - have fun :)

Get better

Get certified (QDSP, QPASP)

I will not get 0wn3d.

The booze.

15) What's the first thing you have or plan to learn/research/try/code/write in 2007?

XSS/CSRF combo attacks

Adding embedded Ruby support to a popular hex editor to make it the Emacs of reverse engineering.

Universal XSS Worm, but only as a PoC and personal information gain.

SANS course in forensics.

Attend my local OWASP meeting

Convince my boss to provide more training

A better web goat bank than web goat bank,

Hopefully, a JavaScript functional analyzer?

Soon to be released to the public ...

Wish I could -- no time. Will be busy doing nothing but risk assessments on campus. I know the techniques and technologies, but I don't spend enough time day-to-day doing them, so I need to get more hands-on time, which I should be getting.

Researching possible vulnerabilities within the .NET Framework.

Wrote an exploit for the UPDFXSS to scare the crap outta some execs. FUD is good for my paycheck!

Neato stuff, I prefer to keep it a secret. I'll just say it has to do with automated attacks.

Learn security weaknesses of web services and SOA architecture.

.net framework 3.0 object model weaknesses

Either a game server in PHP (not real time obviously) or a CMS built from the ground up with security and extensibility in mind.

UIML's and fuzzers

CISSP Certification and study up on IPS/IDS

Intranet hacking using Google Desktop Search

I plan to learn more about buffer overflows and test for them, as well as finding more ways to improve basic application errors and tactics for doing so. I will maybe focus more on standalone applications and less on webapp security, but it all depends on

Complete a VMWare lab for Web Application testing

New trojan horse techniques.

For some time already I have been planning to write my own security scanner (with unique features and functionality). I also plan to develop a web version (an online web security scanner), and maybe just a single web version.

I'm retiring and leaving security :0)

Play with XSS

Working on my webapp risk metrics and some other paperwork stuff in this field.

I've been thinking about interesting ways to do active detection and filtering of malicious traffic.

Working on ways to alert, help, and guide Ruby on Rails developers to write secure code.

too many to list,....how abt inventing new way to prevent webapps ... something like integration of code review and webapp firewall.

Take the Top 10 Web Hacks of 2006 and the 60 more that follow to see what I mean. XSS, CSRF, and other attacks make it so bad we can’t be certain we’re the ones driving our browsers. Short of completely reinventing HTTP/HTML/JavaScript/Cookies and other fundamental Web technologies (not going to happen) there are a few things we can do. People will get infected with JavaScript Malware, but there’s no reason why we can’t limit the damage without impacting the user experience.

Here are 3 web browser security enhancements I’d like to see. The sooner the better.

1) Restrict websites with public IPs from including content from websites with non-routable IP addresses (RFC 1918)

This restriction is designed to protect against Hacking Intranet Websites from the Outside (port scanning, fingerprinting, etc.). If JavaScript Malware can’t force a browser to make non-routable IP requests, then there’s not much left it can do, whether or not it has your private IP. I can’t think of any good reason a website with a public IP would legitimately need to include data from a private IP.
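A sketch of what such a browser-side check might look like (the function names are hypothetical), using the three RFC 1918 private ranges:

```javascript
// Return true if a dotted-quad IPv4 address falls inside RFC 1918
// private space: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
function isRfc1918(ip) {
  const [a, b] = ip.split(".").map(Number);
  return (
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168)             // 192.168.0.0/16
  );
}

// Block only the public -> private direction; intranet pages may
// keep pulling content from other intranet hosts as usual.
function requestAllowed(pageIp, resourceIp) {
  return !(!isRfc1918(pageIp) && isRfc1918(resourceIp));
}

console.log(requestAllowed("203.0.113.5", "192.168.1.10")); // public page, private resource: blocked
console.log(requestAllowed("192.168.1.2", "192.168.1.10")); // intranet to intranet: allowed
```

A real implementation would have to run after DNS resolution (to catch hostnames that resolve to private addresses) and handle IPv6, but the policy itself is this simple.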

2) Build in the SafeHistory and SafeCache extensions by default

The names say it all. These are excellent extensions that provide a good amount of security all users can benefit from. Collin Jackson, Andrew Bortz, Dan Boneh, and John Mitchell from Stanford and the guys from Netcraft did a great job. I don’t know what Mozilla’s policy is on this kind of thing, but these are ones they should definitely consider building in by default. Another feature I’d like to see is restriction of any non-alphanumeric characters in the fragment portion of the URL, designed to stop DOM-based XSS and UXSS.
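A sketch of that fragment restriction (the function name is hypothetical, and I allow a couple of harmless separator characters alongside alphanumerics, since fragments legitimately name anchors like `#section-2`):

```javascript
// Reject any URL whose fragment contains characters beyond a safe
// whitelist, closing off fragments as a carrier for script payloads.
function fragmentIsSafe(url) {
  const hash = new URL(url).hash.slice(1); // text after the '#', if any
  return /^[A-Za-z0-9_-]*$/.test(hash);
}

console.log(fragmentIsSafe("http://example.com/page#section2"));               // normal anchor: allowed
console.log(fragmentIsSafe("http://example.com/a.pdf#x=javascript:alert(1)")); // payload: rejected
```

Because the fragment never leaves the browser, this is a check only the browser itself is in a position to enforce.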

3) Same-origin policy applied to the JavaScript Error Console

JavaScript errors from code located on DomainA should not be readable from DomainB. This enhancement is designed to protect against the Login/History Detection Hack. So when SCRIPT SRC-ing in a page from another domain (Gmail, Yahoo Mail, MSN, etc.), hoping to get a signature match, you’d be out of luck because you couldn’t see the error message. This might hinder debugging in some cases, but I don’t think by much.

Thursday, January 04, 2007

"This find changed Web app expert Jeremiah Grossman's mind about the bug. Yesterday, Grossman, CTO of White Hat Security, had said the PDF XSS bug didn't really raise the XSS risk level overall. But in light of RSnake's finding, Grossman now considers this "really bad" and worries that it could be used as a payload for attacks much worse than XSS."

To clarify I was thinking the issue didn't raise the risk level of XSS since just about every website is vulnerable anyway.

Anyway, I’ve been reading the reports and the data conflicts all over the place. InfoSec people are having the same problem: they’re unsure about what this is or what they need to do about it. I’ll try to boil this down to the relevant points and see if I can help out.

Here’s how the attack works:

Attacker locates a PDF file hosted on website.com

They create a specially crafted URL pointing to the PDF, appended with some JavaScript Malware in the fragment portion (Example: http://website.com/path/to/file.pdf#s=javascript:alert('xss');)

Attacker entices a victim to click on the link

If the victim has Adobe Acrobat Reader Plugin 7.0.x or less, confirmed in Firefox and Internet Explorer, the JavaScript Malware executes.

Everything XSS has been shown to be capable of, including Phishing w/ Superbait, Intranet Hacking, Web Worms, History Stealing, etc., is now available to the attacker.

Things to keep in mind

The vulnerability is very pervasive, as it lowers the hackability bar from the target website needing to have an XSS issue to simply hosting a PDF.

Normally XSS vulnerabilities are a problem in the server-side code; this one is on the client-side (web browser).

The fragment portion of the URL, where the payload is stored, is NOT submitted to the web server. So the server can’t see it, and won’t be able to block it.
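This last point is easy to confirm with any standards-conforming URL parser; a quick sketch using the crafted URL from the attack steps above:

```javascript
// The fragment ('#...') is stripped before the HTTP request line is
// built, so a server-side filter never gets a chance to see the payload.
const u = new URL("http://website.com/path/to/file.pdf#s=javascript:alert('xss');");

// What the browser actually sends in the GET request line:
const requestTarget = u.pathname + u.search;
console.log(requestTarget); // "/path/to/file.pdf"

// What stays behind in the browser, visible only to client-side script:
console.log(u.hash); // "#s=javascript:alert('xss');"
```

Any server-side URL filtering advice for this bug fails for exactly this reason: the request the server receives is indistinguishable from a legitimate PDF download.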