Rather than interspersing security throughout your existing activities, factor security into its own set of activities. Factoring security into its own quality-control workstream keeps the activities lean and focused. Because you're leveraging high-ROI activities, you increase the likelihood of influencing the shape of the software at strategic points. You create an engineering system that helps you address security throughout your software development vs. up front or after the fact. Using multiple activities vs. a single big-bang effort up front or at the end creates an approach that scales up or down with project complexity and size.

The trick is to not over-invest at any one stage – stay leveraged. Rule out losing strategies early in the analysis, but still cast a wide net. Progressively more costly analysis happens later, when you're much more likely to be on the correct path. Don't spend a lot on costly late activities until you've passed muster on much less costly ones. Start with low-cost, high-ROI activities, learn along the way, and iteratively add more time and expense as you better understand what you are doing.

Simply factoring security into its own activities doesn’t produce effective security results. However, factoring security into focused activities does create a way to optimize your security efforts, as well as create a lean framework for improving your engineering as you learn and respond.

The problem is, you may have an approach that isn’t working, or it’s not as efficient as it could be, but you may not even know it. Let’s take a quick look at some broken approaches and get to the bottom of why they fail. If you understand why they fail, you can then take a look at your own approach and see what, if anything, you need to change. The more prevalent broken approaches include:

The Bolt on Approach

The Do It All Up Front Approach

The Big Bang Approach

The Buckshot Approach

The All or Nothing Approach

The Bolt on Approach

Make it work, and then make it right. This is probably the most common approach to security that I see, and it almost always results in failure, or at least inefficiency. The development process ignores security until the end, usually the testing phase, and then tries to make up for mistakes made earlier in the development cycle. This is the bolt on approach.

The assumption is that you can bolt on security at the end, just enough to get the job done. While the bolt on approach is a common practice, the prevalence of security issues in Web applications built this way is not a coincidence.

The real weakness in the bolt on approach is that some of the more important design decisions that impact the security of your application have a cascading impact on the rest of your application’s design. If you’ve made a poor decision early in design, later you will be faced with some unpleasant choices. You can either cut corners, further degrading security, or you can extend the development of your application missing project deadlines. If you make your most important security design decisions at the end, how can you be confident you’ve implemented and tested your application to meet your objectives?

The Do It All Up Front Approach

The opposite of the bolt on approach is the do it all up front approach. In this case, you attempt to address all of your security design up front. There are two typical failure scenarios:

You get overwhelmed and frustrated and give up, or

You feel like you've covered all your bases and then don't touch security again until you see your first vulnerability come in on the released product.

While considering security up front is a wise choice, you can't expect to do it all at once. For one thing, you can't expect to know everything up front. More importantly, this approach can't deal with security-relevant decisions made throughout application development the way an approach that integrates security consideration throughout the life cycle can.

The Big Bang Approach

This can be similar to the do it all up front approach. The big bang approach is where you depend on a single big effort, technique, or activity to produce all of your security results. Depending on where you drop your hammer, you can certainly accomplish some security benefits, since some security effort is better than none. However, as with the do it all up front approach, a small set of focused activities outshines the single big bang.

The typical scenario is a shop that ignores security (or pays it less than the optimal amount of attention) until the test phase. Then they spend a lot of time/money on a security test pass that tells them all the things that are wrong. They then make hard decisions on what to fix vs. what to leave and try to patch an architecture that wasn’t designed properly for security in the first place.

The Buckshot Approach

The buckshot approach is where you try a bunch of security techniques on your application, hoping you somehow manage to hit the right ones. For example, it's common to hear, "we're secure, we have a firewall", or "we're secure, we're using SSL". The hallmark of this approach is that you don't know what your target is, and the effort expended is random and without clear benefit. Beware the security zealot who is quick to apply everything in their tool belt without knowing what issue they are actually defending against. More security doesn't necessarily equate to good security. In fact, you may well be trading off usability, maintainability, or performance without improving security at all.

You can't get the security you want without a specific target. Firing all over the place (even with good weapons) isn't likely to get you a specific result. Sure, you'll kill stuff, but who knows what.

The All or Nothing Approach

With the all or nothing approach, you used to do nothing to address security and now you do it all. Over-reacting to a previous failure is one reason you might see a switch to "all". Another is someone truly trying to do the best they can to solve a real problem, not realizing they are biting off more than they can chew.

While it can be a noble effort, it's usually a path to disaster. There are multiple factors, aside from your own experience, that impact success, including the maturity of your organization and buy-in from team members. Injecting a dramatic change helps build initial momentum, but if you take on too much at once, you may not create a lasting approach and will eventually fall back to doing nothing.

A Web application is not a component is not a desktop application is not a Web service. If I gave you an approach to threat model a Web application, you could probably stretch the rubber band to fit Web services too. You could probably even bend it to work for components or mobile applications. The problem is that type and scenario really do matter and can sharply influence your technique. If you generalize the technique, you produce generalized results. If you increase the context and precision, and factor that into your technique, you can deepen your impact.

For example, if I'm threat modeling a Web application and I know the deployment scenario, I can whittle my way down from there. If I'm threat modeling a reusable component that may be used in a variety of situations, I would instead start with the top 3-5 likely deployment scenarios and play those out. This sounds obvious, but I've seen folks try to model all the possible variations of a component in a single messy model, or I've seen them give up right away, saying there are just too many. The irony is that a quick 3-5 little models usually tell you very quickly what the dominant issues and themes are.
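The "3-5 little models" idea can be sketched in code. This is a minimal illustration, not a real threat modeling tool: the component, scenario names, and threat lists below are all hypothetical, and it simply counts which threats recur across the small per-scenario models to surface the dominant themes.

```python
from collections import Counter

# Hypothetical per-scenario threat lists for a reusable logging component.
# Each scenario gets its own small model rather than one messy combined one.
scenario_threats = {
    "embedded in web app":     ["log injection", "PII disclosure in logs", "disk exhaustion"],
    "behind a web service":    ["log injection", "PII disclosure in logs", "spoofed caller"],
    "standalone desktop tool": ["disk exhaustion", "tampering with local log files"],
}

# Count how often each threat shows up across the scenarios.
counts = Counter(t for threats in scenario_threats.values() for t in threats)

# Threats appearing in more than one scenario are the dominant themes.
dominant = [threat for threat, n in counts.items() if n > 1]
print(dominant)
```

Even this toy version shows the point: a few quick models reveal that log injection, PII disclosure, and disk exhaustion dominate, while scenario-specific threats fall out as lower priority.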

Categories for Context

Context-precision is simply the term I give to the concept of evaluating a given problem's context using a few categories that impact the approach, such as application type and deployment scenario.

For application type, you could focus on CRM or some other business vertical. I dumb it down to the architecturally significant set that I've seen have immediate impact on the activity. For example, while it might seem like a Web application and a Web service could share the same pattern-based frame, I would argue you can create a better one optimized for Web services. For example, a Web service involves a proxy, and proxy is a great category for evaluating attacks, vulnerabilities, and countermeasures. For another example, take input validation. For a Web application, you're likely talking about the HTML stream. For a Web service, you're focused on XML validation.
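The input validation contrast can be made concrete. This is a minimal sketch with made-up field names: the Web application validates a single form field from the HTML stream with a whitelist pattern, while the Web service validates the XML payload itself (a real service would typically use full XSD schema validation rather than the hand-rolled structural check shown here).

```python
import re
import xml.etree.ElementTree as ET

# Web application: validate a form field from the HTML stream
# against a whitelist pattern (a hypothetical ZIP code field).
def valid_zip_code(field: str) -> bool:
    return re.fullmatch(r"\d{5}", field) is not None

# Web service: validate the XML payload -- well-formedness plus
# the expected structure, then the same field-level whitelist.
def valid_order_request(xml_payload: str) -> bool:
    try:
        root = ET.fromstring(xml_payload)
    except ET.ParseError:
        return False  # malformed XML is rejected outright
    return (root.tag == "order"
            and root.find("zip") is not None
            and valid_zip_code(root.findtext("zip", default="")))

print(valid_zip_code("90210"))                                 # True
print(valid_order_request("<order><zip>90210</zip></order>"))  # True
print(valid_order_request("<order><zip>abc</zip></order>"))    # False
```

Same goal, different technique: the context (HTML form vs. XML message) changes where validation happens and what "valid" even means.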

On one extreme, you don't want to invent a new technique for every context. Instead, you want to pay attention to the context and ask whether or not the technique was actually designed for what you're doing. Could you further refine or optimize the technique for your product line or context? Asking these questions sets you down the path of improved software engineering.

I see a lot of confusion over terms when it comes to threat modeling. The terms matter because they shape focus. For example, if you confuse threats with attacks, you've limited what you're looking for.

Asset. An asset is a resource of value. It varies by perspective. To your business, an asset might be the availability of information, or the information itself, such as customer data. It might be intangible, such as your company's reputation. To an attacker, an asset could be the ability to misuse your application for unauthorized access to data or privileged operations.

Threat. A threat is an undesired event: a potential occurrence, often best described as an effect that might damage or compromise an asset or objective. It may or may not be malicious in nature.

Vulnerability. A vulnerability is a weakness in some aspect or feature of a system that makes an exploit possible. Vulnerabilities can exist at the network, host, or application levels and include operational practices.

Attack (or exploit). An attack is an action taken that utilizes one or more vulnerabilities to realize a threat. This could be someone following through on a threat or exploiting a vulnerability.

Countermeasure. Countermeasures address vulnerabilities to reduce the probability of attacks or the impacts of threats. They do not directly address threats; instead, they address the factors that define the threats. Countermeasures range from improving application design, to improving your code, to improving an operational practice.

Rather than get caught up in the definitions, you can focus on intent:

Asset. What do you value? What do you prioritize? What do you not value?

Threat. What's a potential negative effect or outcome?

Vulnerability. Where's the weakness? How could a threat be realized?

Attack. How to take advantage of the weakness?

Countermeasure. How to plug a hole or reduce the damage?

An example putting this all together: my asset is my customer information. My application faces the threat of injection attacks. My application's lack of input validation is a vulnerability. SQL injection and cross-site scripting would be attacks. Countermeasures would be validating input and keeping user input out of the control channel.
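The SQL injection half of that example can be shown end to end. This is a minimal sketch using Python's built-in sqlite3 module with a made-up customers table; it contrasts the vulnerability (user input spliced into the SQL control channel) with the countermeasure (a parameterized query that keeps input in the data channel).

```python
import sqlite3

# Hypothetical asset: customer information in a small in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, email TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice', 'alice@example.com')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerability: user input concatenated into the SQL control channel.
unsafe_sql = f"SELECT email FROM customers WHERE name = '{user_input}'"
leaked = conn.execute(unsafe_sql).fetchall()   # the OR clause matches every row

# Countermeasure: a parameterized query keeps input in the data channel,
# so the payload is treated as a literal (non-matching) name.
safe = conn.execute(
    "SELECT email FROM customers WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # 1 0
```

The attack realizes the threat only through the vulnerability; once the countermeasure removes the vulnerability, the same payload is harmless data.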

There are a couple of interesting points here:

Assets tend to be very much a point of confusion. It's tough to put boundaries here. For example, show me which pages or components you don't want to protect. Do you really have a page or component you don't care about? Is it an asset or not? This is why we moved to identifying security objectives in our threat modeling approach; it was a lot more tangible. Using security objectives also allowed us to incorporate assets without pinning your threat modeling success on being able to identify your assets. However, assets do have their place. I think their best use is to identify priorities and values. Do you care more about your shed or your garage? Your garage or your house? OK, let's start with your house and prioritize there ...

Attacks tend to be the domain of subject matter experts. We don't expect typical practitioners to know the realm of attack possibilities. That's why we try to provide a picklist where possible.

Vulnerabilities are the most valuable fallout of your threat modeling exercise. While threats help you see what's within the realm of possibility and how to prioritize, vulnerabilities are a clear-cut action item. You can use a list of vulnerabilities to drive action. Given enough relevant info, a developer can analyze and address a vulnerability from their code's perspective. Testers can scope their work to testing that the developer addressed the vulnerability.
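One way vulnerabilities drive action is by translating directly into test cases. This is a hedged sketch with hypothetical names: `validate_username` stands in for a developer's fix to an "unvalidated username field" vulnerability, and the assertions are the kind of scoped checks a tester could run to confirm the fix holds.

```python
import re

# Hypothetical developer fix for the vulnerability
# "username field accepts unvalidated input": whitelist pattern.
def validate_username(value: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9_]{3,20}", value) is not None

# Tester's scope, derived straight from the vulnerability entry:
# known-bad inputs must be rejected, legitimate ones accepted.
bad_inputs = [
    "<script>alert(1)</script>",  # script injection attempt
    "a' OR '1'='1",               # SQL injection payload
    "",                           # empty input
    "x" * 21,                     # over the length limit
]
assert all(not validate_username(s) for s in bad_inputs)
assert validate_username("alice_01")
print("vulnerability checks passed")
```

The vulnerability entry scopes both sides of the work: the developer knows what to fix, and the tester knows exactly what to verify.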

Countermeasures could also be called mitigations. You mitigate a risk, though, not a threat. You can counter an attack. At the end of the day, we went with countermeasures because enough customers liked the idea of being empowered to defend their code against evil-doers as well as non-malicious threats. Put another way, countermeasures resonated with practitioners.

Threats are particularly interesting. You can slice and dice them many ways. You can also choose classes of threats. For example, you may view threats strictly as those with business impact. I think it helps to broaden the view, yet scope it to negative impact against the confidentiality, integrity, or availability (CIA) of your information system.

What's important in all this is that your security objectives are the ultimate scoping tool, and that by understanding the relationships between the terms, you produce more effective results when you threat model.