Open source software security

OWASP Philadelphia March 7, 2011 Meeting

Dave Wichers spoke at the OWASP Philadelphia meeting on Monday, March 7, 2011. He gave a slightly modified version of his presentation from OWASP AppSec DC in November 2010, titled The Strengths of Combining Code Review with Application Penetration Testing. Although the presentation to Philly OWASP wasn't recorded, you can view the recording of the AppSec presentation at http://vimeo.com/groups/asdc10/videos/19104928 or review the original presentation slides. The presentation focused on the power of code review and went over the comparative advantages of a hands-on code review (white box testing) versus an interactive penetration test (black box testing). Dave stressed that, for the sake of the comparison, he was only examining situations in which a human tester would be involved, not simply automated analysis. He clarified, however, that in both cases his model was a tool-assisted analysis: tools add value both to a code reviewer and to an external penetration tester. It was agreed that a certain amount of automation was not only beneficial but, in most cases, essential to a thorough product evaluation.
Although most of the examples discussed throughout the presentation were drawn from Java-based application stacks, most of the lessons apply to almost any web application configuration (PHP, .NET, ColdFusion, and so on). Dave also mentioned his role in the Application Security Verification Standard (ASVS) project as well as with the OWASP Enterprise Security API (ESAPI - http://www.owasp.org/index.php/ESAPI).
Dave began by asserting that if customers are willing to pay for an assisted penetration test, then they should invest in a code analysis as well. He conceded that sometimes source code isn't available (as in many commercial product evaluations), so penetration testing is the only option. He noted that his company, Aspect Security (http://www.aspectsecurity.com/), offers code analysis and penetration testing services as a package, and that the price of one, the other, or both is exactly the same (leading one to ask why anyone would not avail themselves of all options given an even cost structure).
In my own experience, most organizations are reluctant to perform code analysis because they feel it is cost prohibitive. It was enlightening to hear that an organization offers code analysis at no additional cost beyond penetration testing. Hopefully this will push other vendors to offer the same option, as I feel code review is invaluable in threat assessment and application security.
Dave then discussed the difficulty of locating the source of a vulnerability in a penetration test. Although a penetration test may be extremely effective at identifying a problem, it cannot easily point to the source of the issue. This point was critical to Dave's analysis: for a review to be effective, it is essential not only to identify vulnerabilities, but also to report them in a meaningful, actionable way so the client can evaluate risk, and to provide the tools and information the client needs to fix the problems. Source code review is uniquely suited to identifying problems as well as pointing clients to the source of the problem and suggesting fixes. Dave pointed out that while paying for a vulnerability assessment (penetration testing or code review) can be expensive, it is almost universally more expensive to remediate the issues uncovered during such an assessment. Dave stressed that it is important for organizations to evaluate this continued cost when considering services. If the cost of a fix is greater than the cost of a find, shouldn't clients insist upon an evaluation that maximizes their ability to implement a mitigation, thus lowering the cost of fixing issues? This hidden cost is often left out of the expense equation when evaluating security review services, which can lead to a false price comparison between vendors.
Dave discussed several different scenarios and compared code review to penetration testing in terms of effectiveness. While the indices he used weren't based on any scientific method, they allowed for easy quantification and evaluation of the various aspects of application security review. Code analysis was, not surprisingly, the clear winner in almost every category. It was easy to see how code review could provide more demonstrative results, especially in the realm of code coverage and vulnerability confirmation (including assertions that a vulnerability is absent), and how tool-assisted code review could pick out potential problem spots in the blink of an eye. For instance, because specific method calls are used for almost every database interaction, it is easy to search source code for every instance of database interaction and quickly identify potential SQL injection vulnerabilities.
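Dave didn't walk through tooling internals, but the grep-style triage he described can be sketched in a few lines of Java. Everything below is hypothetical (the class name, the regexes, the sample lines); the idea is simply to flag lines that both call a JDBC query method and build SQL through string concatenation, leaving the reviewer to confirm each hit by hand.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch: flag lines that both call a JDBC query method and
// build the SQL via string concatenation -- candidate SQL injection spots
// a reviewer would then triage manually.
public class SqlSinkScanner {

    // JDBC calls that take a SQL string directly (the "sinks").
    private static final Pattern SINK =
            Pattern.compile("\\b(executeQuery|executeUpdate|execute|prepareStatement)\\s*\\(");

    // Naive heuristic: a string literal concatenated with something else.
    private static final Pattern CONCAT = Pattern.compile("\"\\s*\\+|\\+\\s*\"");

    public static List<Integer> suspiciousLines(List<String> sourceLines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            String line = sourceLines.get(i);
            if (SINK.matcher(line).find() && CONCAT.matcher(line).find()) {
                hits.add(i + 1); // report 1-based line numbers
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
                "stmt.executeQuery(\"SELECT * FROM users WHERE id = \" + userId);",
                "PreparedStatement ps = conn.prepareStatement(\"SELECT * FROM users WHERE id = ?\");");
        System.out.println(suspiciousLines(sample)); // prints [1]
    }
}
```

A real static analysis tool does data-flow tracking rather than line matching, but even this naive filter illustrates why source access makes injection hunting fast: the sinks are named method calls you can simply enumerate.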
Dave addressed one particularly thorny area of software security assessment with real acumen. It is often the case, especially with code review, that the assessor spots potential areas of trouble. Normally, the reviewer then spends quite a bit of time determining whether a problem piece of code is actually exploitable. Developing a proof of concept during this phase can be time consuming and frustrating. Dave likened the situation to passing a monkey back and forth. The assessor identifies the monkey and tells the client about the monkey and how to fix it, which is all too often met with the client insisting the assessor prove the vulnerability is actually exploitable, thus passing the monkey back to the assessor. Dave described an easy way to defuse this all too common scenario: determine how expensive it is for each party to deal with the monkey. Would it be simple, and therefore cheaper, for the client to just fix the issue, or would it be cheaper for the assessor to actually prove the vulnerability is exploitable, potentially saving the client the cost of a fix? This equation, rather than ego or principle, should guide the decision on how to proceed. It was a valuable lesson and one I'm certain to apply in the near future during a software security evaluation.
One area where code review measured as equally, if not less, effective than penetration testing was in situations dependent on configuration. There are certain scenarios in which a piece of software is made vulnerable by its configuration. For instance, a SQL statement might not be exploitable on MySQL due to its lack of compound query support, but be easily exploitable on another database. Alternatively, different operating systems handle file system operations in different ways (say, Windows versus Unix), which might make a certain piece of code exploitable on one platform but not another. It was extremely interesting and valuable to be reminded of this fact, especially with respect to manual code review, as all too often assessors (and developers) make assumptions about deployment that may not bear out.
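Dave's talk didn't include code at this level of detail, but the operating-system case is easy to sketch. The snippet below is a hypothetical example of a path traversal filter that only blocks the forward-slash form: on Unix a backslash is an ordinary filename character, so the check holds up, while on Windows '\' is a path separator and the same code becomes exploitable.

```java
// Hypothetical sketch of a configuration-dependent flaw: a path check that
// rejects "../" traversal but ignores backslashes. Adequate on Unix, where
// '\' is just a filename character; exploitable on Windows, where the
// runtime treats '\' as a path separator.
public class PathCheck {

    // Naive filter a reviewer might encounter: blocks only the
    // forward-slash traversal sequence.
    public static boolean looksSafe(String userPath) {
        return !userPath.contains("../");
    }

    public static void main(String[] args) {
        System.out.println(looksSafe("reports/q1.pdf"));     // true
        System.out.println(looksSafe("../etc/passwd"));      // false (caught)
        System.out.println(looksSafe("..\\..\\secret.txt")); // true -- missed;
        // harmless on Unix, a working traversal on Windows
    }
}
```

A reviewer reading only the source can spot the incomplete check, but whether it is actually a vulnerability depends on the deployment platform, which is exactly the assumption Dave warned against making.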
Dave also discussed other scenarios in which a penetration test might be more effective, for instance, very short engagements where there isn't time to evaluate source code. In most cases, however, to be effective an evaluation must involve a human tester working with automated tools, and in those circumstances Dave demonstrated that a code reviewer is almost always more valuable than a penetration tester. In making this point, however, Dave exposed one of the challenges of code analysis: it takes more skill to do well. For a code analysis to be performed properly, it takes a programmer who has been trained in security. Dave made the point that it is much easier to teach a programmer the necessary security skills than to take a security professional and teach them how to program. Unfortunately, there is a limited pool of programmers interested in working in security (although I suspect OWASP has a fairly large contingent of such folks), so although they might be more valuable, they are probably also more difficult to find.
Dave's conclusion was that we need to train customers to expect both a code review and a penetration test. Given the value both provide, it isn't unreasonable to expect both services, and clients should get used to receiving both for the price they currently pay for penetration testing alone. Code review may be more expensive up front, but given the reduction in remediation costs, this initial investment should make sense. Dave also stressed that reports delivered to clients after an application review should not only include an itemized list of problems grouped by severity, but should also include instructions on how to fix the issues, empowering the client to make meaningful change.