November 16, 2011

Through the Looking-Glass

In recent years, I've heard many industry luminaries lament the untimely death of black box web application security assessment tools. Most of them did so on the grounds that white box tools (static analysis code scanners) will eventually rule the world and bring order to our galaxy.

To be fair, black box testing does suffer from some real shortcomings:

1. Limited application coverage - the scanner can only test what it can discover by crawling, so unlinked content and unreferenced parameters go untested

2. Issue validation that relies solely on observable HTTP responses, so vulnerabilities that never reflect back go undetected

3. Lack of code-level information for issues reported (mainly, the vulnerable line of code)

Despite these problems, many still consider black box testing tools to be the more accurate, practical and mature option. Issues reported by black box scanners are usually real and exploitable, and they are less prone to noisy results than white box testing tools (taint analysis covers all code paths, which can be a double-edged sword).

Our team has been working extremely hard to remediate these shortcomings and find creative ways to make automated black box testing more efficient and accurate, and to make our customers more successful in securing their web applications. Our latest innovation was dubbed Glass box testing. Actually, it's not such a new idea - this is something we have been toying with for many years (we eventually filed the patent back in Feb. 2008).

Finally, it is here.

One of the main drawbacks of black box testing is that the tester (or tool) is completely oblivious to the inner workings of the application under test. It is unaware of the programming language, the operating system, the database server and so forth. The entire process of issue validation in black box testing relies solely on how the application reacts to certain HTTP requests. For example, you send a SQL injection test and then expect to see a SQL syntax error in a subsequent response. Sadly, many of the most critical issues may never reflect back in some scenarios.
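That response-matching approach can be sketched in a few lines. The error signatures and the helper name below are illustrative only, not AppScan's actual detection logic:

```python
# A minimal sketch of classic black box validation: the scanner's only
# evidence is the HTTP response body. If the application swallows the
# database error, the scanner is blind.

SQL_ERROR_SIGNATURES = [
    "You have an error in your SQL syntax",   # MySQL
    "Unclosed quotation mark",                # SQL Server
    "ORA-00933",                              # Oracle
]

def looks_vulnerable(response_body: str) -> bool:
    """Return True if the response reflects a database syntax error."""
    return any(sig in response_body for sig in SQL_ERROR_SIGNATURES)

print(looks_vulnerable("You have an error in your SQL syntax near ''1'''"))  # True
# A hardened app that returns a generic error page reveals nothing:
print(looks_vulnerable("HTTP 500 - An internal error occurred"))             # False
```

The second call illustrates the blind spot described above: the injection may well have executed, but nothing in the response proves it.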

Glass box testing avoids the classic black box pitfalls by deploying a server-side agent, which gleans critical application information at runtime and sends it back to the black box scanner. The type of information depends on the type of agent being used and the location at which it operates - within the application or near it.

For example, imagine a glass box agent that instruments the tested web application's code, sits at critical points and monitors certain events. When a black box scanner sends a SQL injection test that reaches a sensitive method (a sink), our glass box agent will be able to notify the scanner that a vulnerability exists [Problem #2 solved]. Moreover, it will be able to report the vulnerable file name, line number, enclosing class and method, as well as the vulnerable library class and method - e.g. Java's executeQuery [Problem #3 solved].
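Here is a toy sketch of that kind of sink instrumentation, in Python for brevity. The Agent class, the marker token and the sink names are all hypothetical stand-ins, not the actual AppScan agent:

```python
# Hypothetical sketch of sink monitoring: wrap a query-execution
# function so the agent can tell the scanner when a scanner payload
# actually reaches the sink, together with the code location.
import inspect

PAYLOAD_MARKER = "gbx_probe"  # token the scanner embeds in its tests

class Agent:
    def __init__(self):
        self.findings = []

    def instrument(self, sink):
        def wrapper(sql):
            if PAYLOAD_MARKER in sql:
                # Record where in application code the tainted query
                # was built: the immediate caller of the sink.
                caller = inspect.stack()[1]
                self.findings.append({
                    "sink": sink.__name__,
                    "file": caller.filename,
                    "line": caller.lineno,
                    "function": caller.function,
                })
            return sink(sql)
        return wrapper

agent = Agent()

@agent.instrument
def execute_query(sql):   # stand-in for a real sink, e.g. executeQuery
    return f"executed: {sql}"

def lookup_user(name):    # vulnerable app code: string concatenation
    return execute_query("SELECT * FROM users WHERE name = '" + name + "'")

lookup_user("' OR 'gbx_probe'='gbx_probe")  # the scanner's SQLi test
print(agent.findings[0]["function"])        # → lookup_user
```

Even when the HTTP response reveals nothing, the finding carries the enclosing function, file and line number - exactly the code-level detail that pure black box testing lacks.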

In order to solve application coverage issues, several kinds of glass box agents can be used. The simplest would be an agent that monitors the web application's file system and reports back information about its structure and files. A runtime code instrumentation agent, similar to the one mentioned earlier, could also detect certain unreferenced parameters - those that are mentioned in code but never referenced from HTML, such as 'debug' parameters [Problem #1 solved]. There are many other things a glass box agent could do to improve and complement black box testing; this is merely the tip of the iceberg.
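The unreferenced-parameter check could work along these lines. The regexes and the sample code and HTML are made-up illustrations, assuming a PHP-style application:

```python
# Illustrative sketch: compare parameter names the agent sees read in
# server-side code against those the scanner observed in crawled HTML.
# Anything in the first set but not the second is a hidden parameter.
import re

def params_read_in_code(source: str) -> set:
    # Assumed pattern: PHP-style $_GET['name'] / $_POST['name'] accesses
    return set(re.findall(r"\$_(?:GET|POST)\['(\w+)'\]", source))

def params_seen_in_html(html: str) -> set:
    return set(re.findall(r'name="(\w+)"', html))

source = "if ($_GET['debug']) { ... } $user = $_POST['username'];"
html = '<input name="username"> <input name="password">'

hidden = params_read_in_code(source) - params_seen_in_html(html)
print(sorted(hidden))  # → ['debug']
```

No amount of crawling would have surfaced the 'debug' parameter, since it never appears in any page the scanner can see; the agent's server-side vantage point is what makes the difference.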

Up until now, I've discussed the merits of glass box testing in general. Now I'd like to briefly discuss the unique capabilities we've built into AppScan Standard 8.5, which was released yesterday.

Monitoring of sensitive methods (sinks) covers a range of application layer attacks, bringing unparalleled accuracy to existing issue detection, as well as the capability to detect issues that black box scanners simply can't spot.

Beyond the cool detection capabilities, we had to make sure our users can successfully deploy and use glass box. We can't simply ship a half-baked prototype, right? So we baked in some nifty and important features, such as:

Simple agent installation, through an automatic installer for Windows, Linux and Unix.

Reduced agent footprint. When the agent is not being actively used, it stops monitoring the application to avoid even a minimal effect on performance.

Real time agent status monitoring. From as early as the scan configuration phase, and through the entire scan, AppScan users know exactly what is going on with the agent: is it up, down, monitoring, or configured improperly?

Agent management. Cross-scan persistent management of known agents, so you won't have to configure the same things over and over again.

As you can see, we take glass box testing very seriously. This is not just a new feature in the product.

We believe glass box is the future of dynamic analysis.

We continuously benchmark our glass box testing capabilities on many different web applications, trying to assess the exact increase in scan accuracy and coverage. In reality, it depends on the specific application being tested. But when we ran AppScan Standard 8.5 with and without glass box on a sample set of the standard known-vulnerable web applications, we found an increase of at least 180% in true findings, with false positives eliminated completely.

Glass box testing provides the best of both white and black box worlds. It enjoys the merits of dynamic analysis - the fact that it is testing a real live web application, and provides real exploitable issues without any noisy ‘theoretical’ results. And at the same time, it enjoys code-level information, similar to static analysis - the ability to look at the application's code and inner workings, and tie each issue to the specific location where the defect lies.

In fact, this reminds me that glass box testing is not our first attempt to marry white box and black box. Our previous product release included the highly successful, groundbreaking JavaScript Security Analyzer, which uses a hybrid dynamic and static analysis engine to analyze JavaScript code for client-side vulnerabilities. When it was released last year, we reported that JSA found JavaScript issues in 14.5% of the Fortune 500 web sites. In AppScan Standard 8.5 we went back and improved our analysis algorithms, and we now know that JSA finds security issues in 40% of the Fortune 500 web sites (as reported in the latest 2011 IBM X-Force trend analysis report). That is an incredibly awesome yet disturbing piece of data.

In order to give our customers the best possible way to secure their applications, we are continuously blurring the lines between the different types of analysis approaches. I personally think that the entire black box vs. white box debate is way past us. Security testers shouldn’t really care what analysis they are using, as long as it produces accurate and actionable results.

Glass box testing is available as a feature of AppScan Standard Edition 8.5, and does not require any integration with other IBM products. Simply download and install AppScan Standard 8.5.

