Thursday, December 28, 2006

Sylvan von Stuppe published a pair of excellent posts about how we as security practitioners can motivate "the business" to improve the current state of web application security. Sylvan adds to my earlier list of questions with something very insightful.

"Once flaws have been identified, what is my motivation to fix them? If you can't give me the likelihood of attack, and what I stand to lose by it being exploited, how many dollars should I invest to repairing it?"

As security practitioners, we continue to say how much the development environments need to learn to make secure software. I'd say there's another side to that coin - security practitioners need to be able to measure the impact of particular threats in terms of dollars, so that we don't just reveal vulnerabilities and the threats that might exploit them, but what the business stands to lose if the vulnerability isn't fixed.

Very well stated, and it got me thinking about how this could be done. For some reason the movie Fight Club popped into my head, with the scene about how Jack, as an automotive manufacturer recall coordinator, applied "the formula". Seemed like a fun way to go about it. :)

JACK (V.O.): I'm a recall coordinator. My job is to apply the formula.

....

JACK (V.O.): Take the number of vehicles in the field, (A), and multiply it by the probable rate of failure, (B), then multiply the result by the average out-of-court settlement, (C). A times B times C equals X...

JACK: If X is less than the cost of a recall, we don't do one.

BUSINESS WOMAN: Are there a lot of these kinds of accidents?

JACK: Oh, you wouldn't believe.

BUSINESS WOMAN: ... Which... car company do you work for?

JACK: A major one.

I know, I know, I broke the first rule of Fight Club. Anyway, I have no idea how "real" this formula is or if it's actually applied, but it seemed to make sense. I wondered if something similar could be applied to web application security. If nothing else, it's an entertaining exercise.

Take the number of known vulnerabilities in a website, (A), and multiply it by the probability of malicious exploitation, (B), then multiply the result by the average financial cost of handling a security incident, (C). A times B times C equals X...

If X is less than the cost of fixing the vulnerabilities, we won't.

Sounds like it could work, provided you can be somewhat accurate in filling in the variables, which is the hard part. The thing is, this process probably isn't a suitable task for an information security person. Maybe we need to seek the assistance of an economist or a probability theorist and see what they have to say.
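As a back-of-the-envelope sketch, the formula is easy to put into code. All the input values below are purely hypothetical illustrations I made up for the example, not real incident data:

```javascript
// "The formula" applied to web application security -- a back-of-the-envelope
// sketch. Every input value here is a hypothetical illustration.

function remediationDecision(vulnCount, exploitProbability, incidentCost, fixCost) {
  // X = A * B * C: the expected loss from leaving the vulnerabilities in place
  const expectedLoss = vulnCount * exploitProbability * incidentCost;
  return {
    expectedLoss,
    fix: expectedLoss >= fixCost, // if X is less than the cost of fixing, we won't
  };
}

// Hypothetical example: 10 known vulnerabilities, a 5% chance each is
// maliciously exploited, a $50,000 average incident-handling cost,
// versus $20,000 to fix them all.
const result = remediationDecision(10, 0.05, 50000, 20000);
console.log(result); // { expectedLoss: 25000, fix: true }
```

The hard part, as noted above, isn't the arithmetic; it's putting defensible numbers into B and C.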

We know the security of websites must improve; heck, it can't get much worse. One major challenge we're facing is that not everyone agrees on what needs to be done. For example, if someone were to ask a mailing list, "what do I need to do to secure my website?", they'd likely receive 20 different answers and perhaps get asked just as many questions. Conversely, if the same question were asked about a PC, you'd probably get 3 similar answers and maybe a question about why you're using Windows (*just kidding*). This inevitably leaves website owners in a state of confusion, unsure of what to do. Maybe they'll do nothing.

I think the reason behind the lack of consensus is a void of data and/or a means to measure success. We're essentially flying blind. Let's rhetorically consider several questions people commonly ask:

“How do I find out how many websites I have?”
“What do they do and how *important* are they?”
“Who’s responsible for them?”

Digging into a single website….

“How large and complex is the code base?”
“What’s the rate of application code change?”

Narrowing down to vulnerabilities…

“What vulnerabilities do I have?”
“Whose fault is it and how do I prioritize their remediation?”
“What do I do to protect myself in the meantime?”

Finally organizational changes…

“Which should I focus on, developer education or the use of a modern development framework?”
“Which testing process is better, white box, black box, or glass box?”

Answering these questions is anything but simple. The answers depend on any number of factors, are unknown to any single person, and vary from organization to organization. The point is an organization must be able to understand its current state of affairs. And we as an industry must be able to measure whether a particular strategy or solution is working, and if so, how well. This brings us to where I think we are today: best practices based upon conventional wisdom held over from other areas of information security, which do not apply here. A harsh reality.

To begin looking at things from a fresh perspective, I find it's helpful to line up the "knowns" and "unknowns" for a particular problem set. From there it's easier to spot trends, relationships, inconsistencies, and areas that should yield immediate return from investigation.

1. In what would normally be considered the largest, most popular, and "secure" websites, the vast majority are found to have serious vulnerabilities. We have no idea about the security of the mid- and lower-end websites, which are typically not assessed.

2. Those typically in charge of information security do not have the same level of control over the safety of their websites as they do at the network infrastructure level. Consequently, the responsibility for website security is unassigned or rests among several constituencies.

3. Attacks targeting the web application layer are growing year over year in number, sophistication, and maliciousness. Real-world visibility into these attacks is extremely limited.

4. Firewalls, patching, configuration, transmission/database encryption, and strong authentication solutions do not protect against the majority of web application vulnerabilities.

5. All software has defects and in turn will have vulnerabilities. Security enhancements provided by modern development frameworks help to prevent vulnerabilities, though they will not eliminate them altogether. The measured benefit is unknown.

6. The change rate of commerce web applications is relatively rapid, with frequent incremental revisions. Traditional PC or enterprise software tends to be slower, with larger versioned builds. Web applications tend to have a steady and faster flow of vulnerabilities.

7. Developer education in software security and implementing security testing inside the quality assurance phase reduce the number of vulnerabilities, but will not eliminate them. See #5. The overall expected reduction of vulnerabilities as a result is unknown.

8. It’s impossible to find all vulnerabilities through automation, so thorough security testing requires a significant amount of experienced human time. How much time is required and how close the process will come to finding everything is debatable.

9. Web application security is a new and complex subject for which there is a limited population of experienced practitioners relative to the amount of workload.

10. Web browser security is largely and fundamentally broken, leaving it unable to protect users against modern attacks. The situation hasn’t significantly improved with Firefox 2.0 or Internet Explorer 7.0, and it’s unclear if future releases will attempt to address the problem.

What does this tell us? A lot of things actually. First and foremost, there is a lot of work to do *like we didn’t know that already*. Here are a couple of my observations:

Solutions must come from areas other than "fixing" the code

We need to invest resources into measuring ROI from various solutions and best-practices

Tuesday, December 26, 2006

Recently Alan Shimel (StillSecure) went out on a tiny twig and said, “vulnerability assessment (VA) is dead”. Of course Alan’s speaking about network security, not web applications. His remarks are about VA's convergence with NACs. Fair enough. When I spoke with him he said, “Actually VA for web apps is one of the few bright spots in the VA space these days.” I'd like to think so. :) This topic is always on my mind since this is exactly what my company does. “What is the future of web application vulnerability assessment?” is a question that doesn’t get asked a lot. Personally I think we’re at the point where network VA was a few years ago, solving the challenge of scaling.

Granted my numbers could be off and may vary a great deal from enterprise to enterprise. However, this exercise helps estimate the relative needs of the market. Let's see what kind of resources we need if we're trying to assess all these websites for vulnerabilities twice per year.

Today we'd need:

1 million total vulnerability assessments

25,000 experienced experts in web application VA

$2,500,000,000 (US) in salary for web application experts

$5,000,000,000 (US) retail assessment cost

Even though the assumptions were way on the conservative side, it’s immediately apparent that this scenario is completely fictitious. There are probably only 3,000 experts (a guess) in the world qualified to perform assessments, relative to the 25,000 required. And as much as I’d wish they would, enterprises are simply not going to spend multiple billions on web application security in 2007.

Of course as the awareness of web application security builds the numbers will climb, but for now we have to face facts. And the fact is unless we can vastly improve the web application VA process, most websites will not be assessed for security and remain insecure. That’s what’s going on today. And that’s why I’m saying the future of web application vulnerability assessment is about scale.

While we certainly can’t reduce the number of “important” websites, we can reduce the number of man-hours and the level of expertise required to perform an assessment using technology and modern processes. Modern assessment processes need to be highly streamlined, repeatable, able to run by the thousands concurrently, and performable by less than top-tier webappsec experts. This is what it truly means to “scale”.

How much improvement can be made near term is a subject of much debate, but we’re working on it. For fun, let’s try a few more guesses at how certain efficiencies will help.

Future improvements:

500,000 “important” websites (roughly 1/2 of 1% of the total population)

Assessments 2-times a year per website. (Vary on change rate)

An expert can perform 200 assessments per year (up from 40), with a base salary of $80,000 (down from $100,000) (US).

Retail cost per assessment: $2,000 (down from $5,000) (US).

Adjusted requirements:

1 million total vulnerability assessments

5,000 experienced experts in web application VA

$400,000,000 (US) in salary for web application experts

$2,000,000,000 (US) retail assessment cost

These numbers are much more palatable in the grand scheme of things and give us our benchmarks for where technology and process must take us. How long it will take to get there is anyone's guess.
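The arithmetic behind these estimates can be sketched as a tiny capacity model. The figures are simply the guesses from this post, not market data:

```javascript
// Capacity model for web application VA at scale, using the estimates above.
function vaMarketModel({ sites, assessmentsPerYear, perExpertPerYear, salary, retailPrice }) {
  const totalAssessments = sites * assessmentsPerYear;
  const expertsNeeded = totalAssessments / perExpertPerYear;
  return {
    totalAssessments,
    expertsNeeded,
    totalSalary: expertsNeeded * salary,       // what the experts cost
    totalRetailCost: totalAssessments * retailPrice, // what buyers pay
  };
}

// Today: 500,000 "important" sites assessed twice a year, 40 assessments
// per expert per year, $100,000 salary, $5,000 retail per assessment.
const today = vaMarketModel({ sites: 500000, assessmentsPerYear: 2,
  perExpertPerYear: 40, salary: 100000, retailPrice: 5000 });
// -> 1,000,000 assessments, 25,000 experts, $2.5B salary, $5B retail

// Future: 200 assessments per expert, $80,000 salary, $2,000 retail.
const future = vaMarketModel({ sites: 500000, assessmentsPerYear: 2,
  perExpertPerYear: 200, salary: 80000, retailPrice: 2000 });
// -> 1,000,000 assessments, 5,000 experts, $400M salary, $2B retail
```

Running the model makes the point plainly: the number of assessments never shrinks, so the only levers are assessments per expert and cost per assessment.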

Friday, December 22, 2006

pdp (architect) recently invited me to guest blog on gnucitizen. I'd never done something like that before, so I figured it'd be a fun first. Here goes...

Secure Code Through Frameworks

105 million sites make their home on the Web - 4 million more move in each month. That’s a staggering number to think about, and as we well know, the vast majority of websites (I say 8 in 10) have serious security issues. Industry discussions go round and round about what should be done. We talk about secure coding practices, training, compliance, assessment, source-code audits, and the like. What’s going to work? Then I read something Robert Auger posted, arguing that the lack of security-enabled frameworks is why we’re vulnerable, touching on an area I’ve thought a lot about recently.

Friday, December 15, 2006

Attacks always get better, never worse. That’s probably what I’ll remember most about 2006. What a year it’s been in web hacking! There’s never been such a big leap forward in the industry, and frankly it’s really hard to keep up. My favorite quote came today from Kryan:

"The last quarter of this year, RSnake and Jeremiah pretty much destroyed any security we thought we had left. Including the "I'll just browse without javascript" mantra. Could you really call that browsing anyways?"

To look back on what’s been discovered, RSnake, Robert Auger, and I collected as many of the new 2006 web hacks as we could find. We’re using the term "hacks" loosely to describe some of the more creative, useful, and interesting techniques/discoveries/compromises. There were about 60 to choose from, making the selection process REALLY difficult. After much email deliberation we believe we created a solid Top 10. Below you’ll find the entire list in no particular order. Enjoy!

Thursday, December 14, 2006

The CSS History hack is a well-known brute force way to uncover where a victim user has traveled. Great Firefox extensions like SafeHistory are helping protect against this simple hack, but the cat and mouse game continues. Despite this tool, I’ve found a new way to tell where the user has been AND also if they are “logged-in”. People are frequently and persistently logged-in to popular websites. Knowing which ones can also be extremely helpful to improving the success rate of CSRF or Exponential XSS attacks, as well as other nefarious information-gathering activities.

The technique uses a similar method to JavaScript Port Scanning by matching errors from the JavaScript console. Many websites requiring login have URLs that return different HTML content depending on whether you're logged-in or not. For instance, the “Account Manager” web page can only be accessed if you’re properly authenticated. If these URLs are dynamically loaded into a script tag, they will cause the JS console to error differently because the response is HTML, not JS. The type of error and line number can be pattern matched.

Using Gmail as an example,

If you are logged-in…

If you are NOT logged-in…

I mapped the error messages from a few popular websites and made some PoC code. Firefox only! (1.5 – 2.0), tested on OS X and WinXP. I don’t want to hear it about IE and Opera. :)
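For illustration, the pattern-matching half of the technique might be sketched like this. The site name and error signatures below are hypothetical placeholders; real signatures have to be mapped per site by hand, as described above:

```javascript
// Sketch of login detection via JavaScript console errors (Firefox 1.5-2.0
// era). The site key and signatures are made-up placeholders.

// Per-site signatures: what the JS parse error looks like when the returned
// HTML is loaded in a <script> tag, for each login state.
const SIGNATURES = {
  'example-mail': {
    loggedIn:  { message: 'syntax error', lineNumber: 7 },
    loggedOut: { message: 'syntax error', lineNumber: 1 },
  },
};

// Classify an observed script error against the known signatures.
function loginState(site, observedError) {
  const sig = SIGNATURES[site];
  if (!sig) return 'unknown';
  if (sig.loggedIn.message === observedError.message &&
      sig.loggedIn.lineNumber === observedError.lineNumber) return 'logged-in';
  if (sig.loggedOut.message === observedError.message &&
      sig.loggedOut.lineNumber === observedError.lineNumber) return 'logged-out';
  return 'unknown';
}

// In the browser, the error would be captured roughly like this
// (hypothetical URL):
//   window.onerror = (message, url, lineNumber) => {
//     console.log(loginState('example-mail', { message, lineNumber }));
//   };
//   const s = document.createElement('script');
//   s.src = 'https://mail.example.com/account'; // returns HTML, not JS
//   document.body.appendChild(s);
```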

Wednesday, December 13, 2006

Research in 2006

1) I think there is going to be a lot of research, on the white hat and black hat side, in the area of web-based worms. Lots of creating and trading of JavaScript exploit code once an XSS issue is found.

Right on the money. Of course this might have been a self-fulfilling prophecy. :) Those are the best kind.

Commercial landscape in 2006

Personally I think compliance, specifically PCI, is going to be a big driver to improve web application security.

Blech, way off. PCI is a good standard with decent web application security components, but the enforcement of compliance validation leaves something to be desired. When network scanning vendors can meet the minimum webappsec criteria with only the most rudimentary checks, then clearly there is improvement required. Checkbox != security. Maybe PCI will be a real driver by 2008. Time will tell.

To meet the requirements, I expect vendors will combine various types of vulnerability assessment products through innovation or acquisition. Current product/service offerings separate the network, cgi, and web application assessment layers. Some combine two, but not all three.

Off yet again. I still think this will happen, I just don't know exactly when. I thought it would have taken place already.

To pass PCI quickly, we'll see people looking for simple solutions or hacks to clean up their vulnerabilities. Not everyone has the resources available to fix their web app code the right way. As a result, I expect new web server add-ons (or WAFs) and configuration set-ups will be employed as band-aids to prevent the identification of vulnerabilities. This will create an interesting challenge for the industry.

Let's call this a 50/50. I was correct about a huge increase in web application firewall deployments in the market, led by ModSecurity and other commercial players. There are way more WAFs on the Web than there were in 2004 and 2005. However, this didn't have anything to do with meeting PCI or a band-aid approach as I guessed. Most deployments I've seen have been geared towards defense-in-depth, bravo, but I was wrong in the prediction. :)

Then a few other predictions:

* a variety of different product/service standards

Friday, December 08, 2006

Compelling real-world examples of business logic flaws in web applications are hard to come by. Most of the time we can't talk about specific instances because they’re typically unique to a company and protected under NDA. So when I read the editor's article in CSO magazine (A Nation of Cheaters?) describing his experience with a logic flaw in the Yahoo Games ladders, I was immediately interested, specifically because Yahoo was my previous employer and I was personally involved with the situation described.

"A few years back, Yahoo Games instituted an online chess ladder. A ladder system essentially ranks all the players from top to bottom, and you move up by beating people ranked higher on the ladder. Losing (or not playing) slowly lowers your ranking.

I'm a decent player—I won the state championship of Kentucky in my salad days—but couldn't begin to approach the top of Yahoo's ladder. But guess what? The people at the top weren't playing chess at all!

They were cheaters, a closed circle of players passing the crown around by systematically losing one-move games to each other. Player No. 2 challenges Player No. 1, makes one move to start the game, and then Player No. 1 resigns the game and they switch rankings on the ladder. "

This is what we call Abuse of Functionality, "an attack technique that uses a web site's own features and functionality to consume, defraud, or circumvent access control mechanisms." Most of the time we can't find these issues by scanning; we have to find them by hand, or from customer support when they receive hundreds of calls from pissed-off users because they can't improve their chess rank. There is more to this hack.

There are literally thousands of people (or more) with an amazing amount of free time to do the most mundane tasks for the most inane rewards. “Cheating” players would code purpose-built programs to bot hundreds of chess games simultaneously, 24x7. They’d sit up late into the evening because every so often the ladder ranks would be reset, and when they were, they’d snatch the top spots. And once they owned a block of the top spots, they’d only play within their controlled accounts to rise slowly in the ranks. The way the ladder logic worked, “legit” ranked players had to play against other equally or higher ranked players, and since cheaters wouldn’t play against them, legit players would drop in rank.

All that just to be at the top of the Yahoo Chess games ladder. No monetary reward, no praise, no nothing. Makes you think where else this is going on doesn’t it?
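To see why the ladder logic was abusable, here's a toy reconstruction of the rank-swap rule. This is my own sketch of the behavior described above, not Yahoo's actual code:

```javascript
// Toy model of a ladder with a rank-swap rule, and how colluding accounts
// abused it. Illustrative reconstruction only -- not Yahoo's implementation.

// Ladder rule: if a lower-ranked player beats (or is resigned to by) a
// higher-ranked player, the two swap positions on the ladder.
function recordResult(ladder, winner, loser) {
  const w = ladder.indexOf(winner);
  const l = ladder.indexOf(loser);
  if (w > l) { // winner was ranked lower (a higher index) -> swap positions
    [ladder[w], ladder[l]] = [ladder[l], ladder[w]];
  }
  return ladder;
}

// Two colluding accounts at the top pass the crown back and forth with
// one-move "games", never risking a real loss to a legitimate player.
let ladder = ['cheater1', 'cheater2', 'legit1', 'legit2'];
recordResult(ladder, 'cheater2', 'cheater1'); // cheater2 takes #1
recordResult(ladder, 'cheater1', 'cheater2'); // cheater1 takes it back
// Legitimate players can only climb by beating someone ranked above them,
// and the cheaters at #1 and #2 simply refuse to play them.
```

The flaw is pure business logic: every individual move is a legal use of the feature, which is why no scanner would ever flag it.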

Thursday, December 07, 2006

Ryan Barnett, author of Preventing Web Attacks with Apache, is Breach Security's new Director of Application Security Training. If you recall, Breach acquired ModSecurity earlier this year and is a WAF product vendor. Ryan, a long-time friend, is a master of web application defensive strategies and techniques. In a real-world sense, he knows what it takes to keep a website from getting hacked. He's finally entered the blogging realm and it'll be interesting to see what he has to say over 2007.

Wednesday, December 06, 2006

Update 2: It looks like our little survey here is reaching some critical mass. Kelly Jackson Higgins from Dark Reading posted 'Not Much Resistance at the Door' covering the results. In the article, one of the areas we haven't asked questions about came up: intranets.

"The scary unknown is intranet Website vulnerability, however, which the survey did not address. "There are no good metrics for how many intranet Websites there are, or how vulnerable they are. That's a big unknown in the industry," Grossman says. "It's a whole other world inside the firewall."

Update: Once again, a great survey turnout. A total of 63 respondents. Thank you to everyone who responded and to those who helped me out with the questions. We didn't reach my prediction of 100, but that's OK because I'm not very good at those anyway. :) We'll try to get there in January. The problem is it's getting difficult to manage this by email; I'll have to figure out some way to remedy that. The data collected certainly did not disappoint, and some interesting things bubbled to the top.

My Observations

Good representation from both security vendors and enterprise professionals. Most have several years of experience, dedicate a significant percentage of their time to web application security, and performed 1 - 40 assessments in 2006. An experienced bunch, I’d say.

About half of the organizations that have vulnerability assessments performed see security measurement as the primary driver, and about a quarter say compliance. I would have figured compliance would have scored higher and measurement lower. Maybe we’re seeing a shift in the industry.

The vast majority of webappsec professionals believe assessments should be performed after each code change or "major" release, take a week or two to complete, and rarely encounter multi-factor authentication or web application firewalls. About half of the people using commercial scanners say the scanner completes about half or less of their workload. The other half, who don’t use them, say assessments are faster to do by hand, that scanners have too many false positives, or that they're too expensive. There is much more to talk about here, but that’ll come in another post.

Question 12a on disclosure yielded some interesting results. People are evenly split between “responsible” and “non” disclosure. Think about that. Just as many are disclosing as are not because of the inherent risk. As I’ve said before, discovery is going to be a big issue moving forward. We’re going to lose the check and balance we’ve relied upon with traditional commercial and open source software.

Description

It's been a month already since the last survey. In November we got a great turnout, doubling the response from October. Maybe this time we'll reach 100 respondents. Anyway...

If you perform web application vulnerability assessments, whether personally or professionally, this survey is for you: 15 multiple-choice questions designed to help us understand more about the industry in which we work. Most of us in InfoSec dislike taking surveys; however, the more people who respond, the more informative the data will be. So far the information collected has been really popular and insightful. And a lot of people helped out with the formation of these questions.

Daniel Cuthbert: "WALK AWAY! spending 1 year fighting the british government over this exact thing made me realise this lone cowboy approach will never work :0)"

12b) How has the security of the average website changed this year (2006) vs. last year (2005)?

This might not be web app. related until after Vista is released (yeah, right). I found interesting the concept of how to discover whether Visual Studio binaries have been /GS compiled; /GS is used to mitigate local stack variable overflows.

Sorry, I'm restricted from saying. :/ I guess my best/most valuable tip that I use every time is don't become dependent on any one tip/trick/idea/concept/hack. :)

What cross domain restrictions? The web security model was completely smashed up this year and I don't pretend to claim to be smart enough to fix it. But what we got isn't working the way we thought it did.

I've found some new tools to try out, such as TamperIE, which I picked up from the "Hacking Web Applications Exposed 2nd Edition" book I purchased, and I believe you wrote the foreword. Plus I have to hand it to RSnake, id, maluc and the other people on sla.ckers.org, they just keep coming up with new attack vectors. Their disclosures can be a little frightening. As you said in the foreword, sometimes we just want to bury our head in the sand because we know most of the sites out there have vulnerabilities.

CSRF (Just after I tested a bloody forum too!)

improving my XSS knowledge with the awesome help of ha.ckers.org and sla.ckers.org

Learning more about web services, SOAP, XML, etc.

That demonstrating issues to a vendor/customer is much less effective than expressing the business liability and risk in $s :-)

Fully understanding AJAX, which is important, even if all I learned was that it wasn't as big a deal as I expected it to be (from a security standpoint it is a much bigger deal from developer and user viewpoints).

There is nothing new under the sun. People still do dumb things.

XSS + Ajax avoids the same-domain security sandbox.

Bypassing filtering mechanisms by UTF-16 encoding URLs even when there's no need to

Using POST content as a query string in most cases won't affect the way the receiving application reacts. Attacks are more portable and easier to demonstrate in link format.

Implications of Flash 9 crossdomain.xml, including flaws in the implementation and severe lack of best practice standards. The floodgates may be open on client-side code with cross-domain privileges (Quicktime, etc.), but it's good to see a misstep at least happening in the right direction.

This is my statement for the web application security year 2006: Everyone can find an XSS vulnerability, but fortunately only a few people can imagine what this really means.

Sunday, December 03, 2006

I’ve gotten an overwhelming response to my Myth-Busting AJAX (In)Security article, even a nice slashdotting to go with it. The vast majority of the feedback was positive, some negative, and others said “you make a good point, but…”. Though one blog post compared me to Donald Rumsfeld. Now come on, that’s just plain mean! :) Anyway, this subject has been on my mind, and apparently many others', since Black Hat (USA) 2006. There needed to be another perspective voiced since not everyone agreed. So now people have a more complete set of viewpoints to consider and can make up their own minds. That's the important thing.

Anyway, as RSnake pointed out, it’s been a busy week with a ton of new tricks posted. Maybe someone is going to start combining these into something better. JavaScript Malware continues to evolve.

About Me

Jeremiah Grossman's career spans nearly 20 years, a literal lifetime in computer security, making him one of the industry's biggest names. He has received a number of industry awards and has been publicly thanked by Microsoft, Mozilla, Google, Facebook, and many others for his security research. Jeremiah has written hundreds of articles and white papers. As an industry veteran, he has been featured in hundreds of media outlets around the world. Jeremiah has been a guest speaker on six continents at hundreds of events, including many top universities. All of this came after Jeremiah served as an information security officer at Yahoo!