Thursday, March 26, 2009

Website security needs a strategy

Someone begins watching a basketball game and asks who is winning. You might helpfully answer, “Lakers up 76 to 64.” Imagine if instead you said, “The Lakers are shooting 60% from the field, have 12 rebounds, are 8 of 10 from the line, and the average height of the starting lineup is 6’7”.” Sure, these are important statistics, but they certainly do not answer the question. (Inspired by Richard Bejtlich.) The person listening would probably think you were trying to be funny, a jerk, or perhaps both. Yet this is how the Web security industry responds when businesses ask about the security of their websites. “We identified 21 security defects including eight Cross-Site Scripting and four SQL Injection, we are improving our SDL processes, and most of our programmers have been through security training.” Again, important metrics, but they still do not answer the most important question: how well defended is a website against getting hacked?

The Web security industry supposedly advocates a strategy based upon risk reduction, but predominantly practices defect reduction as the measuring stick. This is NOT the same thing and provides no assurance that a website is more secure against an attack of a certain capability. Then in the next breath, as Pete Lindstrom points out, we ironically consider those with the most identified/patched vulnerabilities as the least secure. Simultaneously the community engages in endless ideological debates about black box testing versus source code reviews, the value of SDL pitted against Web Application Firewalls, certification as opposed to field experience, and vast collections of “best practices” suggested as appropriate for everyone in every case. Confusion and frustration are a reader’s takeaway. It must be understood that each component can be seen as a piece of the puzzle, if only done so without losing sight of the bigger picture -- which is...

How to conduct e-commerce securely and remain consistent with business goals

To be successful companies need a plan -- a common sense approach to building an enterprise risk-based strategy. A system enabling solutions to be implemented in the time and place that maximizes return, demonstrates success, and by extension justifies the investment to the business in the language they understand. A strategy that perhaps begins by addressing the most common question, “Where do I start?” One simple answer is to locate, value, and prioritize an organization’s websites. Go further by assisting the business units in understanding the relevant issues such as “What do we really need to be concerned about first?” Describe the most likely threats (attackers), their capabilities, motives and which vulnerability classes are most likely to be targeted. Only when you know what you own, how much it’s worth, and what attack types must be defended against, according to business objectives, can security be applied in a risk-based fashion.
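As a minimal illustration of that first step, locating, valuing, and prioritizing websites could be sketched as a simple scoring exercise. This is only a sketch; the site names, business values, and threat likelihoods below are hypothetical, and a real program would derive them from the business units and threat analysis described above:

```python
# Hypothetical sketch: rank an organization's websites by exposure,
# where exposure = business value x likelihood of a relevant attack.
# All names and numbers are illustrative, not real data.

sites = [
    # (site, business value 1-10, likelihood of targeted attack 0-1)
    ("store.example.com",    9, 0.8),  # handles payments, high attacker interest
    ("blog.example.com",     3, 0.4),  # marketing content only
    ("intranet.example.com", 7, 0.2),  # valuable, but not Internet-facing
]

def exposure(value, likelihood):
    """Crude risk proxy: business value weighted by attack likelihood."""
    return value * likelihood

# Highest-exposure sites get security attention first
ranked = sorted(sites, key=lambda s: exposure(s[1], s[2]), reverse=True)
for site, value, likelihood in ranked:
    print(f"{site}: exposure {exposure(value, likelihood):.1f}")
```

Even a crude ranking like this lets security spending follow business value rather than whichever vulnerability report arrived most recently.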

The problem CIOs and CSOs are facing is that the pseudo Web security standards available are completely inadequate for accomplishing the task. I am not the first to have called out this need. Arian Evans has been talking about this at length, as have Rafal Los, Ryan Barnett, Boaz Gelbord, Michael Dahn, Rich Mogull / Adrian Lane, Wade Woolwine, Nick Coblenz, Richard Bejtlich, Gunnar Peterson, and likely many others. Many of the building blocks necessary for building a standard are scattered around the Internet, including secure software programs, testing guides, and top X lists. These tactical documents could potentially be leveraged into a higher-level framework and serve as the basis for a mature risk-based enterprise website security strategy. The OCTAVE Method, FAIR, and ISO 27001/2, among others, also contain well thought out and accepted concepts which we could use as a model.

It is imperative now more than ever that such a resource exists to satisfy a clearly unfulfilled need. CIOs and CSOs know there is a Web security problem. Now they seek guidance on how to develop a program that is flexible enough to meet their individual needs, and which can also demonstrate success in manageable increments. I’ve been in contact with several industry thought leaders and enterprise security managers who have expressed personal interest, even excitement, in building out such a system. It is time to start helping ourselves answer the question, “Who is winning the game?”

I feel very strongly about the importance of this effort and I'll be dedicating personal time to see the idea go forward. To move ahead quickly, the Web Application Security Consortium (WASC) and The SANS Institute are planning to initiate a joint open community project (to be named later). If you would like to get involved, please stay tuned for more details.

21 comments:

As something of an outsider/newcomer to the contemporary world of infosec, I have developed some fairly strong opinions about certain issues, like risk assessment and risk management.

If you listen closely to almost any presentation on risk, you will easily determine that the meaning of "risk" keeps moving - sometimes it involves a probability of occurrence and harm, sometimes it means a threat, sometimes an exposure of some sort, sometimes a vulnerability, sometimes just about anything you like.

When you start talking with people, thought leaders excepted, about how to assess risk, the approaches are all over the map - probably because the definitions are all over the map. If you bring up quantitative approaches to the measurement of risk, most infosec people seem either to glaze over, or else, to issue categorical statements about lack of data, difficulty of analysis, etc.

I agree with you that it's a huge problem and I don't know what the answer is.

I wonder if it's got something to do with the fact that so many infosec people came up through networking or cryptography, and just don't have the background to take other disciplines seriously.

By way of my qualifications to have any opinion at all, for whatever it's worth, I have spent 28 years in and around IT, and part of the last 17 of those years in the analysis of medical treatment outcomes, which is just another kind of risk assessment/management activity. I am also certified in the FAIR methodology, so I have a bit of bias in that direction.

You spend too much time watching ball games, and not enough time playing the actual game (or thinking about how others might play the game). Metrics like you described are important to the players of the game, and they are the ones who have to work together to actually win!

What we need is risk analysis measurements (i.e. operational metrics) that lead both UP and DOWN to strategic and tactical guidance, as well as tactical metrics that lead up to the operational.

Applications need to be asset tagged and managed as an inventory, yes, but from a strategic perspective NOTHING happens unless it's governed and funded.

Architectures, not "web sites" need to be assessed for risk (additionally they must meet compliance regulations, contracts, and standards -- which can be risks in and of themselves) -- and these architectures include important things like DATA. Runtime applications came from somewhere and even when built still consist of SOURCE CODE at one point in time.

P.S. I think you mean 27005 instead of 27001/27002, but it's not that great. Sure, FAIR is a pretty good risk analysis framework, as is OCTAVE. Not going to argue there, but realize that FAIR is for architectures and OCTAVE is for pre-built software, so you're talking about using risk analysis frameworks for simple solutions.

You are watching the game from the bleachers, wearing your team jersey, and shaking your Big Foam Hand -- but you don't know anything about the players or the game, nor do you really seem to care.

@Patrick, thanks for taking the time to comment. And you are exactly right, our definition and/or view of risk does sway and is subject to interpretation -- part of the industry's maturing, I guess. I think most organizations just have to pick a definition that suits them and go with it.

For myself, I like understanding the attacker: what they are capable of and motivated to do, and finally how my security stacks up against them. And then constantly asking the question, "Are the costs of doing X to stop them worth it to me?" THEN, should my defenses fail, what is my survivability going forward?

As you suspect, infosec is littered with technologists and engineers who haven't learned or don't know how to speak to the business in terms they understand. It's cultural, I think. Those who are able to bridge that gap will do very well for themselves.

I have read through OCTAVE, and did not really come away with the impression that it applied only to pre-built software.

With regard to FAIR, I spent three days last December in a classroom with Alex Hutton and Jack Jones, working through FAIR concepts and scenarios in some detail, and do not recollect hearing the term "architecture" a single time.

So help me here - what do you mean when you say that OCTAVE is for software and FAIR is for architecture?

And, @ Jeremiah -

I am not trying to pick on network people or applications coders or cryptographers or anyone else - I am really just trying to understand why there is a problem.

With regard to understanding the mindset and motivations of attackers - what a fascinating and difficult task this is.

Defects and vulnerabilities in hardware and software can often be discovered, remediated or exploited, and described with some precision.

The motivation of any given attacker, and the likelihood of any given attack at any given moment, is much harder to assess with any accuracy. All of the historical data in the world may or may not help to predict some kinds of attacks.

One of the things I really like about FAIR is that it is firmly grounded in probability estimates and Gaussian distributions. I don't want to start a war about black swans here - trying to predict unknown unknowns has a place, too - but so many problems in infosec risk seem to model well using this approach.

Just to cite the 2008 Verizon report - 83% of the attacks that Verizon documented required, per their definitions, no skill, low skill, or moderate skill. Only 17% of attacks required high skill. Or to cite another example from the same report - 90% of exploits were made against machines/environments where patches had been available for 6 months.

Numbers like these conform to the notion of normal distributions and ranges of attacker skill or exploit within a single standard deviation of the mean.

Sorry to go on and on - it's an area that really interests me.

I hope that your prediction about bridging the gap and doing well is true - it's been a while since I had any "paying" work :)

@Patrick, please get in touch with me directly via email (jeremiah -at- whitehatsec.com). Clearly you have given the (risk) matter a lot of thought, have passion for the space, and are an analytical thinker -- not to mention the business experience to back up your conclusions.

With regard to FAIR, I spent three days last December in a classroom with Alex Hutton and Jack Jones, working through FAIR concepts and scenarios in some detail, and do not recollect hearing the term "architecture" a single time

FAIR is risk analysis, which is an operational activity. It rolls up well to strategic activities such as security architecture. My favorite place to put the output from a risk analysis framework such as FAIR would be into Gunnar Peterson's Security Architecture Blueprint.

This isn't a FAIR activity, it's a strategic activity that would follow (or coincide) with a risk analysis. Where do you see FAIR feeding into?

Jeremiah, I could not agree more. I would also say we need to engage others in the community who are closer to the metrics we are looking to collect. I am speaking of the Altiris, BMC, Rational, etc. of the world. If you have a moment, pop over to a brief post I wrote yesterday morning (no doubt as you were editing the above post):

Jeremiah, one more thing. When I look at the large (and growing) mass known as risk management, I can't help but think of the CEO with fingers in his ears, unable to make heads or tails of the terminology or understand the nebulous concepts.

I also can't help but think how there is an easy way to grab his attention...

Show a businessman how to make a clear and informed decision on something he thinks is intangible or show him how much it costs, and you have his undivided attention. Make it easy, and they will come.

@Ron, I think we are both trying to tackle the very same problem. Everyone appears to be capturing more metrics of different types, a very good thing; the challenge is assigning meaning and effectiveness. You called this problem out directly in your post.

I read your post as a call for Risk Management solutions for websites.

Practicing Risk Management for web applications and/or web sites should not be different from other methods of Risk Management. There are different methods to perform it.

Personally, I like the following approach:

Risk management is a systematic and analytical process to consider the likelihood that a threat will endanger an asset or function and to identify actions to reduce the risk and mitigate the consequences of an attack.
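That definition lends itself to a simple quantitative sketch: estimate how often a loss event occurs and how much each one costs as distributions rather than point values, then simulate. The Gaussian estimates echo the FAIR-style discussion earlier in the thread, but this is not FAIR itself, and every number below is an invented assumption for illustration:

```python
# Minimal Monte Carlo sketch of "likelihood x consequence" risk estimation.
# Assumption: annual loss event frequency and per-event loss are modeled as
# Gaussian estimates; the figures are hypothetical, not real data.

import random

random.seed(1)  # deterministic for the example

def simulate_annual_loss(freq_mean, freq_sd, loss_mean, loss_sd, trials=10_000):
    """Return the mean simulated annualized loss across all trials."""
    total = 0.0
    for _ in range(trials):
        events = max(0.0, random.gauss(freq_mean, freq_sd))     # attacks per year
        per_event = max(0.0, random.gauss(loss_mean, loss_sd))  # dollars per attack
        total += events * per_event
    return total / trials

# e.g. roughly 2 successful attacks/year at roughly $50k each (hypothetical)
ale = simulate_annual_loss(freq_mean=2.0, freq_sd=0.5,
                           loss_mean=50_000, loss_sd=15_000)
print(f"Estimated annualized loss exposure: ${ale:,.0f}")
```

A number like this is what makes the "Are the costs of doing X to stop them worth it?" question answerable: a control costing more than the exposure it removes is a hard sell to the business.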

I believe that we (our industry) have the different tools and knowledge to perform those assessments (sure, we still need to continue our research efforts).

We also have some agreements in place how to rank the threats and criticality.

Some of us do not agree on how to identify the actions to reduce the risk and mitigate the consequences of an attack (Jeremiah, did you really use the W#$ word :-)), and I do not think that this is a problem, since actions should be taken based on the availability of resources.

@Sharon, hey there! That's exactly what it was. I agree, Risk Management strategies are out there, but for whatever reason they haven't been operationally applied to website security yet. I think (others too) it's time to change that.

Your bullet point model isn't all that different from what I have in my head, and I'm sure with consultation with others in the field something very useful could be fleshed out.

Time to measure the effect of certain controls on the output of security readiness.

@Jeremiah, I'm reminded of a concept you alluded to: What would have happened if the industrialists had added pollution to their model of expansion at the turn of the last century? What if the web security community did the same with information? That appears to be the problem you are trying to solve here.

I see the focus on risk management in web security as a very important part of the strategy (and grassroots movement) you have outlined above. As for the other parts of the strategy, metrics are another part of the puzzle... leading of course to the determination of results.

You asked, "What is my next step?" I have few resources or contacts in this space, and I created the outline in a bubble. This is why I contacted you. However, these would be my next steps (if I had resources or contacts):

1) Fill in gaps in the framework and create a rough draft XML Schema for collection of data generated during the web application security management cycle.
2) Discuss the current/future plans for data collection with compliance/configuration management vendors.
3) Draw similarities between the models of data collection (taking COBIT, ITIL, et al. into consideration).
4) Modify the rough draft XML Schema.
5) Provide the draft XML Schema to the web security community and vendors for comment/discussion.

Some questions I have for you:

1) What existing frameworks do you see as affecting the metric model?
2) Do you think risk management is solidified enough to use the metric methodology I've outlined?
3) Do you have any critical critique or thoughts concerning the structure of the framework I've outlined?

Regardless of where this goes, I agree with you that we are all trying to get to the same place. For me, it is starting here.

I believe that the most effective information and technology operations risk management today happens because of the joint efforts of serious information security professionals and leaders (formal and informal) across the various organizations that make up modern corporations in most fields today. Sure, execution of the day-to-day information and application security operations are still critical. But are they more noise without leadership and "connectivity" with the rest of business operations? Depending on the given corporate culture, this is less or more process-driven.

* Sometimes it is strictly a matter of personal relationships (a risk-elevating situation).
* In other situations, project processes link these communities for long enough to work out understandings and plans that may often facilitate effectively dealing with risks.
* Some organizations have broad and deep formalization of their organizational relationships, and the processes and information flows to maintain a shared understanding of threats, risks, controls & mitigations, current state, etc.

I believe that the first two situations above dominate, and that the third is an exception. As a result, whatever we do to support creation of a "risk-based enterprise website security strategy," or to find a new broad description of what about information security is valuable, it needs to be useful in those organizations that depend heavily on cross-domain relationships between serious professionals to prioritize risk management investments.

Get a new model for "selling" information security as an enabler, or a new enterprise website security strategy into their hands, and I believe that you will begin to get traction.

I have also found, at least in the corporate world, that many "risk assessment/management" programs look to gather metrics that are nearly impossible to come up with. Such metrics generally stem from questions like, "I know how many viruses/spam messages/intrusion attempts/etc. we stopped today, but how many did we miss?" I have heard many a groan from my team members when trying to phrase an answer to these types of questions. It also appears that these numbers are then used to form some type of ratio where a lower number missed equates to a better security posture, and therefore a more secure infrastructure.

Judging from the thoughtfulness of your post, clearly you have put in more detailed thought on a metrics framework than I have. What I've seen is a big missing piece in operational decision making data from managers over website security. I'm trying not to have preconceived notions about what should or shouldn't go into such a model.

To answer your questions, the best I can...

1) I am not well versed with existing frameworks, but if something good can be leveraged, I'm all for it.

2) It could certainly use more solidification, that's for sure. Yet I think we have enough to go forward and produce something of value in the website security space. The bar is low, because essentially there is nothing.

3) None of particular value, except to say such a model would need to be customizable to suit organizations large and small.

From a clash of opinions comes a spark of truth.

@Matt, thanks for the comment and for linking to your blog post. I'll give it a read today. It is amazing that so many people are witnessing the same challenge. The more I discuss the subject with people, the more it looks to come down to an "Operational Guide to Website Security." Maybe we can start there, with some foundation in risk.

About Me

Jeremiah Grossman's career spans nearly 20 years; he has lived a literal lifetime in computer security and become one of the industry's biggest names. He has received a number of industry awards and been publicly thanked by Microsoft, Mozilla, Google, Facebook, and many others for his security research. Jeremiah has written hundreds of articles and white papers. As an industry veteran, he has been featured in hundreds of media outlets around the world. Jeremiah has been a guest speaker on six continents at hundreds of events, including many top universities. All of this came after Jeremiah served as an information security officer at Yahoo!