Version 1 of WASSEC (Web Application Security Scanner Evaluation Criteria) is (finally) out! I'm not going to say which section I wrote, but the document is (as far as I know) the first attempt to comprehensively list the features that should be considered when evaluating appsec scanning tools. Check it out. It's worth a read.

Security PS has continued to pursue plans for growth in 2009 by expanding and strengthening their presence in the Midwest and Southern regions of the US. As a key part of this plan, Security PS announced the creation of a local presence in Austin, TX.

"While we do business across the US, our clients continue to place a high value on our local presence as they partner with us for application security services," commented Kris Drent, President and CEO of the firm. "This expansion allows us to provide that level of partnership with businesses in Austin and in the surrounding areas," he concluded.

Heading up the firm's Austin expansion is manager and consulting veteran Tom Stripling. According to Drent, Stripling's accomplishments and business knowledge made him the perfect team member to spearhead and manage the new location.

While Stripling will continue to serve key clients in the Midwest, his relocation to Austin has enabled him to take a more active role in the regional market. "Given the level of technology in this area, there is a significant need for experienced application security services here. I'm excited that we can provide a local presence to clients in Austin and better support this region," Stripling said.

In conjunction with the expansion, Security PS continues to expand their reach in the Midwest from their headquarters in the Kansas City area and is announcing new application security services designed to meet emerging needs.

Robert Hansen (RSnake) and others have been working with Mozilla for years to develop a working solution to the problem of user-submitted active content. Well, they're finally close to a solution. Mozilla is releasing their Content Security Policy in version 3.6 of Firefox, which will help to "prevent the creation of JavaScript code from potentially tainted strings", among other things. Basically, this means that if you're a site like eBay that wants to allow users to enter certain "safe" HTML, but not run scripts, you will be able to use Mozilla's Content Security Policy to help ensure that (at least in Firefox).
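As a rough illustration (the directive syntax below follows Mozilla's early CSP proposal and may differ from what finally ships in Firefox 3.6), a site could send a response header that restricts where scripts may come from, so injected or user-submitted script is simply not executed:

```http
X-Content-Security-Policy: allow 'self'; img-src *; script-src 'self'
```

Here the policy permits images from anywhere but only runs scripts served from the site's own origin.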

I sincerely hope other browser vendors follow suit. If this type of thing can become the standard, it will provide a powerful tool for interactive, customizable sites to protect their users.

One of the many interesting talks at Defcon recently was a discussion of CSRF by Mike Bailey and Russ McRee. They talked about a variety of real-world CSRF attacks and pointed out that a lot of companies simply accept CSRF vulnerabilities because they don't understand the risks they pose. To help prevent this attack, frameworks like ESAPI have implemented CSRF token generators that produce a long random value to be included as a hidden field in each sensitive form. This value is then checked when the form is submitted, preventing CSRF attacks from working.

While I was reading about this, I came across an interesting attack against CSRF tokens that was published last month. It's a cool idea: the author uses the CSS properties of the browser's history to brute-force token values. This works a little better than previous techniques because it generates no traffic to the server and likely isn't detectable by server-side defenses.

As other authors have already pointed out, this doesn't doom CSRF tokens to uselessness. They're still the most effective defense against CSRF attacks as long as they have a sufficient key space to prevent brute force attacks. Just make sure you use a long random value for your CSRF tokens and you'll be fine.
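To make the defense concrete, here is a minimal sketch in C of the token scheme described above (an illustrative design of my own, not ESAPI's actual implementation): generate a long random value for the hidden field, then compare the submitted copy against the stored one without short-circuiting.

```c
#include <stdlib.h>
#include <string.h>

/* Fill `out` (size bytes, including the trailing NUL) with random hex.
 * NOTE: rand() stands in for a cryptographically secure source such as
 * /dev/urandom; never use rand() for real tokens. */
void generate_token(char *out, size_t size) {
    static const char hex[] = "0123456789abcdef";
    for (size_t i = 0; i + 1 < size; i++)
        out[i] = hex[rand() % 16];
    out[size - 1] = '\0';
}

/* Examine every byte rather than returning at the first mismatch,
 * so timing reveals nothing about how long a matching prefix is. */
int tokens_equal(const char *a, const char *b) {
    size_t la = strlen(a), lb = strlen(b);
    unsigned char diff = (unsigned char)(la != lb);
    for (size_t i = 0; i < la && i < lb; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}
```

With a 32-character hex token there are 16^32 possibilities, which puts an exhaustive history-based brute force well out of reach.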

It's been seven years now since the release of the first .NET Framework. In all that time, few aspects of the framework have been as continually misunderstood as the View State. It's a common thread throughout many of the ASP.NET applications that I assess: the View State is misused a lot.

View State misuse leads to a few different problems. Bloat is a common one, and so is accidental disclosure of sensitive data. Every once in a while I'll even see an application with the View State message authentication code (MAC) disabled, allowing attackers to manipulate the View State's values. Of course, the problems with information disclosure and manipulation can be easily solved by turning on View State encryption (and hopefully, setting the ViewStateUserKey* as well). But that's not really addressing the problem at its source.

These problems seem to arise from a basic misunderstanding of the purpose and function of the View State. The standard statement that everyone hears about the View State is that it persists state across postbacks. This simply means that programmatic changes to the properties of any controls will be stored and persisted in the View State across postbacks. Changes to the default properties that are made declaratively or before the control is added to the page's control hierarchy will not be tracked. This means that, ideally, control properties should be set in the ASP code itself instead of in the code-behind file. That way, they won't have a chance to pollute the View State.
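For example (an illustrative snippet of my own, not taken from the articles below), a property set declaratively is part of the page definition and adds nothing to the View State:

```aspx
<%-- Declarative: the Text value lives in the markup, not the View State --%>
<asp:Label ID="WelcomeLabel" runat="server" Text="Welcome!" />
```

Assigning `WelcomeLabel.Text` in `Page_Load`, by contrast, happens after the control has begun tracking changes, so the value gets serialized into the View State on every postback.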

This is a pretty key idea and the following two articles, by Scott Mitchell and Dave Reed, respectively, should be required reading for understanding the whole concept.

Both of these articles describe what the View State really is, and more importantly, what it isn't. Applying the information presented in the articles above will go a long way towards reducing security problems with the View State. However, in the cases where sensitive data is still leaking, it may be easier to not use the View State at all. In many cases, it can be completely turned off without impacting the functionality of the application. In other cases, the View State can be disabled on a per-control basis. More good news on this front is that ASP.NET 4.0 will give developers more control over exactly how the View State is used on a page.
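As a sketch of the disabling options (the control name is hypothetical), the View State can be switched off for a whole page or for a single control:

```aspx
<%-- Per-page: --%>
<%@ Page Language="C#" EnableViewState="false" %>

<%-- Per-control: --%>
<asp:GridView ID="ReportGrid" runat="server" EnableViewState="false" />
```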

* Setting the ViewStateUserKey can help protect against POST-based cross-site request forgery (XSRF) or one-click attacks. However, in order for the ViewStateUserKey to work, the View State MAC must be enabled. Depending on the application, this feature alone may make it worth leaving the View State enabled. More information on the ViewStateUserKey property can be found at the following site:
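A common pattern (sketched here as an illustration) is to tie the key to the user's session in Page_Init:

```csharp
protected void Page_Init(object sender, EventArgs e)
{
    // Binds the View State MAC to this user's session, so a View State
    // captured from one victim cannot be replayed against another user.
    ViewStateUserKey = Session.SessionID;
}
```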

The team has returned from Defcon unscathed. Well, maybe only slightly scathed. We caught several really great presentations. Watch this space for summaries of our favorites. The conference was not without its usual hacker stupidity, though. Some highlights:

A fake ATM was found in the lobby of the hotel and quickly carted away. There are claims that this was the work of credit card scammers not affiliated with the conference. If so, it was very poorly timed.

Someone attempted to bungee jump from the roof of the Riviera. He was stopped by hotel staff before he was stopped somewhat more abruptly by the wall of the building.

One person decided to try to hack an elevator and got himself and 14 other people stuck for an hour.

And a large number of people connected to the wireless network and logged in somewhere. Or brought something to the conference with an RFID tag in it. Or left Bluetooth enabled on their phone. All of them ended up on the Wall of Sheep.

Not much has been said about the security of Adobe ColdFusion's cfinsert and cfupdate tags. These tags transform input from a POST request into a database insert or update query. Essentially, a developer specifies a database table and creates form fields on a page. When a user submits those values, ColdFusion automatically creates a query to insert or update the corresponding values in the database. In essence, these tags give the developer the ability to execute SQL without writing SQL.
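In its simplest form (the datasource and table names below are hypothetical), usage looks something like this:

```cfml
<!--- Each form field on the submitting page is named after a column
      in the "users" table; ColdFusion builds the INSERT itself --->
<cfinsert datasource="myDSN" tablename="users">
```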

Information available online and through Adobe mentions that these tags are vulnerable to SQL injection, but not how. The majority of discussion pertaining to this potential security issue stopped abruptly around 2007. Even an article about ColdFusion security written by the 0x000000 hacker webzine in the summer of 2008 did not mention cfinsert or cfupdate vulnerabilities.

After looking into the documentation for these tags, something just did not sit right. As a programmer, I generally desire very fine-grained control over the flow of computation. As a security consultant, I generally fear an application that “writes the code for you.” It is very difficult for an application framework to draw the line between robust and secure.

Overview

For practical reasons, I decided to look only at cfinsert's behavior with regard to Microsoft SQL Server 2000. The test lab consisted of two virtualized Microsoft Windows Server 2003 machines. One machine ran Microsoft IIS 6.0 with ColdFusion 8.x. The other ran Microsoft SQL Server. The reason for the breakout is that Microsoft Windows does not take kindly to packet sniffing across loopback interfaces.

The general approach taken was to observe database commands sent from ColdFusion to SQL Server under three basic query scenarios. The scenarios identified were a regular (vulnerable) query, a parameterized query, and a cfinsert query. The first two queries were chosen to serve as a baseline against the potentially vulnerable cfinsert query. The tools used included Wireshark and the Microsoft TRACE tool, available as part of SQL Server. In the interest of simplicity, the database has no primary keys or relationships defined. Below are the observations of these commands.

The Results

Standard query with no protection

As expected, the standard ColdFusion cfquery tag offers no protection from injection vulnerabilities. The TDS packet (the RPC packet for SQL Server databases) contained the user-controlled parameters verbatim in the SQL query, allowing injection.

Standard query with parameterized values

The first time a parameterized statement is run after a ColdFusion server reboot, it will make use of the sp_prepexec command to dynamically create and execute a parameterized SQL statement in Microsoft SQL Server. Subsequent calls make use of the sp_execute command. User input is properly separated from the SQL logic. Therefore, typical SQL injection attacks are not possible.
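In ColdFusion, such a parameterized query is written with cfqueryparam (the datasource, table, and field names here are hypothetical):

```cfml
<cfquery datasource="myDSN">
    INSERT INTO users (username)
    VALUES (<cfqueryparam value="#form.username#" cfsqltype="cf_sql_varchar">)
</cfquery>
```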

cfinsert

This is the case that is of most interest to us and also the most complicated. The cfinsert (and cfupdate) tags work in the same way as the parameterized queries, with one additional step. Before a call to either sp_prepexec or sp_execute is made, the sp_columns prepared statement is called.

The first statement executed is sp_columns N’tmptable’, NULL, NULL, N’%’, @ODBCVer = 3. One can only assume that ColdFusion is verifying that the names of the POST variables align with the column names in the table. If POST variables are added or renamed so that they reference nonexistent columns, the cfinsert query fails.

Conclusion

To answer the original question, there is no black-and-white answer as to whether cfinsert or cfupdate is injectable. If the application is not coded to strictly validate the POST data before it is used in cfinsert queries, you can add information to columns the original developer never intended to expose. Although this is not as severe as a more traditional SQL injection attack, depending on the data modified, it can still introduce a large amount of risk to the application. For example, when creating a new user that is added to an authentication table, I can specify additional fields and arbitrary values in the POST request that modify my account type or permissions.

Recommendations

My recommendation is to rewrite your cfinsert and cfupdate statements as parameterized queries or prepared statements. Even if there are currently no columns that a user should not be able to modify, there is no guarantee that a database change will not introduce problems in the future. Furthermore, this allows the developer to introduce integrity checks, which are a cornerstone of a good application security strategy. If this is not an option, Adobe ColdFusion also supplies a way to specify a whitelist of allowed form fields. This is essentially a requirement for using cfinsert or cfupdate securely. An example of specifying a cfinsert whitelist is shown below:
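A sketch of such a whitelist, using the formfields attribute (the datasource and field names are hypothetical):

```cfml
<!--- Only the listed form fields are used to build the INSERT;
      any extra POST variables are ignored --->
<cfinsert datasource="myDSN" tablename="users"
          formfields="username,email">
```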

Recently at CanSecWest 2009, Microsoft released its internal !exploitable Crash Analyzer to the general public under the Microsoft Public License (Ms-PL). This tool plugs into the Windows debugger (WinDbg) as an extension and attempts to both uniquely identify and assign an “exploitability” rating to program crashes. Essentially, the end goal of !exploitable is to group crashes by location in code and classify them by severity. Both the CanSecWest presentation and the tool can be found on the Microsoft CodePlex website at: http://msecdbg.codeplex.com.

Where will this be used?

An effective bug and vulnerability management program is one sign of a mature and security-aware product program. In a perfect world, resources would be unlimited, developers would always have time to go through all the proper security training, every line of code would be peer reviewed, there would be time to properly validate both the design and implementation before release, products would scale well as new features were added, and there would be plenty of time to perform a full application security assessment and remediate any issues identified. In reality, security vulnerabilities do happen. Furthermore, they are not always easily segregated from other bugs (especially in the finite amount of time dedicated to remediation). By classifying program crashes as Exploitable, Probably Exploitable, Probably Not Exploitable, or Unknown, this tool aims to help organizations triage their bug reports. These ratings tie in directly to the Microsoft Exploitability Index now included with security bulletins. More information about this index can be found on Microsoft Technet at: http://technet.microsoft.com/en-us/security/cc998259.aspx.

How does it perform?

After reading about this tool, I was curious as to how well it performs. I ran a couple of binaries through the tool that were compiled using MinGW (GCC for Windows). The binaries with buffer overflow and format string vulnerabilities had stack protections disabled. The following GCC compilation options were used to disable stack protections:

-fno-pie -fno-stack-limit

Buffer Overflow, No Stack Protection, MinGW

A stack-based buffer overflow is a condition that arises when a program is allowed to store data beyond the bounds of the memory reserved for it. In this situation, the data will overwrite other values on the program stack. Typically, an attacker will leverage this vulnerability to control program flow by modifying local variables, function pointers, or the return address of the stack frame.

The following excerpt is the vulnerable code from the application used to force a program crash from a buffer overflow condition.

char buf[8];
if (argc > 1)
    strcpy(buf, argv[1]);

The following is the rating and information output by the Microsoft !exploitable tool.

Format String, No Stack Protection, MinGW

A format string vulnerability arises when a program uses unfiltered input as the format specifier of certain formatted output functions such as printf, fprintf, or sprintf. An attacker can supply his or her own format specifiers to write an arbitrary value to an arbitrary location. Many attacks involve overwriting a function pointer to control program flow.

The following excerpt is the vulnerable code from the application used to force a program crash using format string specifiers.

if (argc > 1)
    printf(argv[1]);

The following is the rating and information output by the Microsoft !exploitable tool.

Function Pointer Manipulation, Default Stack Protection Enabled, MinGW

I compiled a program in which a user can directly control a pointer to a function. An attacker would leverage this to overwrite the function pointer with the address of his or her shellcode, thereby executing arbitrary instructions on the machine. This was compiled using the standard GCC flags, with stack protection enabled. The rating here is interesting in that it changes based on the address of the function pointer. Moving the function pointer by a small offset (one) elicits a Probably Exploitable rating. Moving it by a larger offset (ten) elicits an Exploitable rating.

The following excerpt shows a vulnerable program moving the function pointer to a user supplied location.
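A minimal sketch of the pattern (my own reconstruction, not the exact program tested): an attacker-controlled offset moves the function pointer before it is called.

```c
static int reached = 0;

static void target(void) { reached = 1; }

/* Mimics the vulnerable pattern: offset a function pointer by an
 * attacker-controlled number of bytes, then call through it. */
void call_with_offset(int offset) {
    void (*funptr)(void) = target;
    funptr = (void (*)(void))((char *)target + offset);
    funptr();  /* a nonzero offset jumps into the middle of code */
}

int was_reached(void) { return reached; }
```

With an offset of zero the program behaves normally; any other offset lands execution mid-instruction and typically crashes, which is exactly the kind of crash dump !exploitable is asked to rate.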

It is important to note that this tool relies on analyzing program crashes to generate a rating. By that definition alone, there is a huge range of attack vectors that will not be covered. Not every vulnerability will crash a program, and for this reason, !exploitable can never be looked at as a “find-all” tool. Of the conditions that !exploitable can analyze, it did not accurately identify the format string vulnerability. This worries me, as I am now concerned about what other severe problems it may miss.

With these limitations in mind, I would treat the !exploitable tool as a guide for raising awareness of the crashes that are likely exploitable. I would not rely solely on this tool to categorize or assign risk ratings to program crashes. As long as one relies on it only to move a small selection of vulnerabilities to the top of the list, it can be a great asset to an organization. Hopefully, Microsoft will continue to invest in this tool, as it has the potential to become a good weapon in a security organization’s arsenal.

As a part of its search functionality, Google creates redirection links that send users to other sites on the Internet. Although the search engine giant has some simple measures in place that attempt to prevent tampering with these links, it's possible to create URLs that appear to go to www.google.com, but actually send a user to an arbitrary site on the Internet. Consider this example (link will probably no longer work):

That link starts with www.google.com, but (if you had clicked on it within the first few minutes after it was created) it would actually take you to http://3mu.us/ts/google.html, which is a page I constructed to look exactly like the iGoogle login page. (Don't worry, it doesn't actually capture any information… but it could!)

Although Google would have a hard time preventing me from trying a phishing attack on their users, allowing me to use their own domain as the phishing URL helps increase the potency of my attack tremendously. Basically, they are letting me use their users' trust in the google.com domain against them.

Their mitigation strategy appears to be that they set a timeout on the link (which is why the above example probably won't work). Of course, the most common phishing attacks are propagated through email. Users who are sitting at their computers when they receive an email warning them of a "serious problem with their iGoogle account" might be enticed to log in immediately to check it out.

This vulnerability is obvious enough that I'm betting I'm not the first one to find and report it, but I notified Google just the same. I'll post an update when I have their response.

A 17-year-old has admitted to creating the attack to promote his website (and out of boredom). While his site will undoubtedly get more traffic, I wouldn't be surprised if he also gets a felony charge for his trouble. Twitter has an explanation of the event and several blogs have explanations of the offending JavaScript.

If you're not familiar with iGoogle (www.google.com/ig), it's a Google service that allows you to create customizable home pages by including gadgets that were contributed by the user community. These gadgets do anything from display the weather to providing news or stock reports. There are even mini Flash games you can play.

This seems harmless enough, but Google's gadget security model gave user-created (and therefore untrusted) gadgets access to your Google data, such as Gmail, Google Docs, etc. I brought up some concerns with this model at the OWASP conference in San Jose a few years ago. Since then, Google has updated their security model to remove some of the more blatant weaknesses.

Recently, I came across a particular gadget created by the user community. It provides a login form that looks like this:

This gadget uses JavaScript to open an iframe to the eTrade mobile login form:

So, users of this gadget are allowing their login form for eTrade to be controlled by some random bozo on the Internet. The author of this gadget could update it at any time to change https://wireless.etrade.com/etrade to https://reallyevilphishingsite.com/etrade and the user wouldn't notice a thing.

Now, is it really Google's responsibility to prevent users from using gadgets like this? I say it is. Phishing attacks come down to an issue of education and trust. A user that knows to check the location bar of their browser, look for the lock icon, and so on may still fall for a phishing attack that comes from Google. After all, the location bar says "https://www.google.com/ig", the certificate is correct, and the lock icon is right where it should be. If Google wants to host user-provided content, they should prevent gadget writers from abusing users' trust in the Google name.

A few weeks ago I gave a presentation at our local OWASP chapter on the current state of access controls. We see access control problems to some extent in nearly every application we assess. They're hard to get right, and they're hard to detect when you've done them wrong. The talk was aimed at exposing why so many developers have a hard time getting them right, and what it takes to avoid problems with them in the first place.

The biggest challenge in implementing proper access controls is that they must be applied consistently and systematically. There's no room for an ad hoc approach here; the larger an application gets, the harder it is to track where each one-off check has to go. A much better approach is to generalize and standardize the checks in a single global framework.
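As a toy illustration of the "single global framework" idea (the roles and actions here are invented for the example, not from the presentation), every request handler funnels through one authorize() function instead of scattering one-off role checks through the code:

```c
#include <string.h>

typedef struct {
    const char *name;
    const char *role;
} User;

/* The single, central policy decision point: admins may perform any
 * action; ordinary users may only "read". Handlers never make their
 * own role comparisons; they all call this function. */
int authorize(const User *u, const char *action) {
    if (strcmp(u->role, "admin") == 0)
        return 1;
    return strcmp(action, "read") == 0;
}
```

Because there is exactly one decision point, a policy change (or a policy bug fix) happens in one place instead of in dozens of handlers.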

This problem is compounded by the inability of application security scanners to accurately detect access control problems. They're simply not designed for it. Automated scans absolutely have their place in a secure development cycle, but they're not going to find your authorization problems. For this, you need manual testing and code review.

We've posted the slides used for the presentation on our site. If you're interested in seeing more, I encourage you to check them out.

After a very close election (in which I ran uncontested), I have been re-elected as President of the Kansas City Chapter of ISSA. If you're not familiar with the ISSA, our goal is to provide a forum for like-minded security professionals to interact, network, and share ideas.

Our chapter hosts a lunch meeting every month with a speaker from the information security industry. The topics are different each month and they range from detailed technical presentations to enterprise risk management strategies. Read more about the chapter and upcoming events at http://issa-kc.org.

The consulting team headed up to Iowa State for the national Cyber Defense Competition last weekend. It's a pretty cool idea. There are two main teams: the Blue team, comprised of students from different universities, and the Red team, made up of professional hackers (like us).

The Blue team gets some time to build and secure a network of servers. The day of the event, the Red team shows up and starts to break in. Aside from being extremely fun for everyone involved, this is a great way to introduce students to practical matters in information security. They did surprisingly well, but just like in the real world, a single small error often led to a much larger compromise.

I was working against a team that had locked down their web server fairly well but had misconfigured a single permission setting, allowing me to read a file out of their web root. I chose a configuration file for one of the web applications they were hosting, which contained the username and password for the database. I logged into the database and replaced the password hash of the Administrator account for that application. I could then log into the application and use the administration features to write files to the server and compromise it further.

Overall, I was impressed by the ways the students found to secure their systems in the face of an onslaught of professional hackers. The real world is a lot more complex, but they're certainly getting a good head start into the security industry.

To say we've fallen behind on posting to the Security PS blog might be understating it a bit. The demands of a growing company have unfortunately pushed it to the back burner. Well, we're going to change that. Look for a lot more activity in this space over the coming weeks as we talk about all the projects, research, and cool ideas our consultants have been working on.

We also take requests. If you're interested in hearing more about a particular topic, let us know.