OWASP - Michael Coates

Shared posts

I have a thing for over-the-top cryptography headlines -- mostly because I enjoy watching steam come out of researchers' ears when their work gets totally misrepresented. And although I've seen quite a few good ones, last week WIRED managed a doozy.

In contrast to the headline, which is quite bad, the article is actually pretty decent. Still, the discussion around it has definitely led to some confusion. As a result, many now think an amazing breakthrough has taken place -- one that will finally make software secure. They're probably right about the first part. They may be disappointed about the rest.

The truth, as usual, is complicated. There is, indeed, something very neat going on with the new obfuscation results. They're just not likely to make software 'unhackable' anytime soon. They might, however, radically expand what we can do with cryptography. Someday. When they're ready. And in ways we don't fully understand yet.

But before I go into all that, it's probably helpful to give some background.

Program obfuscation

The WIRED article deals with the subject of 'program obfuscation', a term that software developers and cryptographers have long been interested in. The motivation here is pretty simple: find a way to give people programs they can run -- without letting them figure out how those programs work.

Note that the last part necessarily covers a lot of ground. In principle it includes aspects ranging from the nature of specific secret algorithms used -- which may be proprietary and thus worth money -- to secret information like passwords and cryptographic keys that might be hardcoded into the program.

If you're like me, you probably wrote a program like this at some point in your life. You may have eventually realized how ineffective it would be against a smart attacker who could dump the program code. This is because most programs (and even compiled binaries) are pretty easy to read. An attacker could just look at the program to recover the secret password.
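The idea is easy to sketch: a hypothetical routine with the secret sitting right in the source (names here are invented for illustration):

```python
SECRET_PASSWORD = "hunter2"  # hard-coded secret, visible to anyone who reads the code

def super_secret_password_protected_stuff(passwd: str) -> bool:
    # "Protects" its secret only from people who never look at the source
    return passwd == SECRET_PASSWORD
```

Dumping the binary (or just reading the script) reveals SECRET_PASSWORD immediately, no cleverness required.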

Program obfuscation is motivated by the idea that many useful programs would benefit if we could somehow 'stop' people from doing this, while still letting them possess and run the code on their own computers.

In real world software systems, 'obfuscation' usually refers to a collection of ad-hoc techniques that turn nice, sensible programs into a morass of GOTOs and spaghetti code. Sometimes important constants are chopped up and distributed around the code. Some portions of the code may even be encrypted -- though only temporarily, since decryption keys must ship with the program so it can actually be run. Malware authors and DRM folks love this kind of obfuscation.

While these techniques are nice, they're not really 'unhackable' or even secure against reverse-engineering in the strong sense we'd like. Given enough time, brains and tools, you can get past most common software obfuscation techniques. If you have, say, Charlie Miller or Dion Blazakis on your side, you probably do it quickly. This isn't just because those guys are smart (though they are), it's because the techniques are not mathematically rigorous. Most practical obfuscation amounts to roadblocks -- designed to slow reverse-engineers down and make them give up.

So what does it mean to securely 'obfuscate' a program?

The poor quality of existing software obfuscation set cryptographers up with a neat problem. Specifically, they asked: could we formulate a strong definition of program obfuscation that would improve on, to put it politely, the crap people were actually using? Given such an obfuscator, I could hand you my obfuscated program to run while provably protecting all partial information -- except for the legitimate inputs and outputs.

If this could be done, it would have an incredible number of applications. Stock traders could obfuscate their proprietary trading algorithms and send them to the cloud or to end-customers. The resulting programs would still work -- in the sense that they produced the right results -- but customers would never learn anything about 'how they worked'. The internals of the program would be secret sauce.

Unfortunately, when the first researchers started looking at this problem, they ran into a very serious obstacle: nobody had any idea what it meant to securely 'obfuscate' a program in the first place. Do think about it for a second before you roll your eyes. What do you think it means to obfuscate a program? You're probably thinking something like 'people shouldn't learn stuff' about the program. But can you explain what stuff? And does 'stuff' depend on the program? What about programs where you can efficiently learn the whole program just by sending it inputs and seeing the results? What does it mean to obfuscate those?

Clearly, before progress could begin on solving the problem, cryptographers needed to sit down and figure out what they were trying to do. And indeed, several cryptographers immediately began to do exactly this.

Black-box cryptographic obfuscation

The first definitions cryptographers came up with addressed a very powerful type of obfuscation called 'virtual black box obfuscation'. Roughly speaking, it starts from the following intuitive thought experiment.

Imagine you have a program P, as well as some code obfuscation technique you've developed. It's important that the obfuscation be efficient, meaning it doesn't slow the program down too much. To determine whether the obfuscation is successful, you can conduct the following experiment.

Give Alice a copy of the obfuscated program code Obf(P), in a form that she can look at and run on her own computer.

Take the original (unobfuscated) program P, and seal it in a special computer located inside an armored 'black box'. Let a user Sam interact with the program by sending it inputs and receiving outputs. But don't let him access the code.

Roughly speaking, what we want from secure obfuscation is that Alice should learn 'no more' information after seeing the obfuscated program code, than could some corresponding user Sam who interacts with the same program locked up in the black box.

What's nice here is that we have the beginnings of an intuitive definition. In some sense the obfuscation should render the program itself basically unintelligible -- Alice should not get any more information from seeing the obfuscated program than Sam could get simply by interacting with its input/output interface. If Sam can 'learn' how the program works just by talking to it, even that's ok. What's not ok is for Alice to learn more than Sam.

The problem with this intuition is, of course, that it's hard to formalize. One thing you may have noticed is that the user Alice who views the obfuscated program truly does learn something more than the user Sam, who only interacts with it via the black box. When the experiment is over, the user who got the obfuscated program still has a copy of it. The user who interacted with the black box does not.

To give a practical example, let's say this program P is a certificate signing program that has a digital signing key hard-coded inside of it -- say, Trustwave's -- that will happily attach valid digital signatures on certificates (CSRs) of the user's devising. Both Alice and Sam can formulate certificate signing requests and send them into the program (either the obfuscated copy or the version in the 'black box') and get valid, signed certificates out the other end.

But here's the thing: when the experiment is over and we shut down Sam's access to the black box, he won't be able to sign any more certificates. On the other hand, Alice, who actually learned the obfuscated program Obf(P), will still have the program! This means she can keep signing certificates forevermore. Knowing a program that can sign arbitrary certificates is a pretty darn significant 'piece' of information for Alice to have!

Worse, there's no way the 'black box' user Sam can ever learn a similar piece of information -- at least not if the signature scheme is any good. If Sam could ask the black box for a bunch of signatures, then somehow learn a program that could make more signatures, this would imply a fundamental weakness in the digital signature scheme. This cuts against a basic property we expect of secure signatures.

Barak et al. proposed a clever way to get around this problem -- rather than outputting an arbitrary piece of information, Alice and Sam would be restricted to outputting a single bit at the end of their experiment (i.e., computing a predicate function). This helps to avoid the problems above and seems generally like the "right" definition.

An impossibility result

Having proposed this nice definition, Barak et al. went on to do an irritating thing that cryptographers sometimes do: they proved that even their definition doesn't work. Specifically, they showed that there exist programs that simply can't be obfuscated under this definition.

The reason is a little wonky, but I'm going to try to give the flavor for it below.

Imagine that you have two programs A and B, where A is similar to our password program above. That is, it contains a hard-coded, cryptographic secret which we'll denote by password. When you run A(x), the program checks whether (x == password) and if so, it outputs a second cryptographically strong password, which we'll cleverly denote by password_two. If you run A on any other input, it doesn't output anything.

Now imagine the second program B works similarly to the first one. It contains both password and password_two hardcoded within it. A major difference is that B doesn't take a string as input. It takes another computer program. You feed a program in as input to B, which then:

Executes the given program on input password to get a result r.

If (r == password_two), it outputs a secret bit.

It should be apparent to you that if you run the program B on the program A -- that is, you compute B(A) -- it will always output the secret bit.* It doesn't even matter if either B or A has been obfuscated, since obfuscation doesn't change the behavior of the programs.
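As a toy Python sketch (the string secrets here are placeholders; in the real argument they are strong random values):

```python
PASSWORD = "placeholder-secret-one"      # stands in for a strong random secret
PASSWORD_TWO = "placeholder-secret-two"  # second hard-coded secret
SECRET_BIT = 1

def A(x):
    # Outputs password_two only on the single correct input
    return PASSWORD_TWO if x == PASSWORD else None

def B(prog):
    # Takes *a program* as input, runs it on password, checks the result
    return SECRET_BIT if prog(PASSWORD) == PASSWORD_TWO else None

# B(A) reveals the secret bit no matter how A and B are obfuscated,
# since obfuscation preserves input/output behavior.
```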

At the same time, Barak et al. pointed out that this trick only works if you actually have code for the program A. If I give you access to a black box that contains the programs A and B, and will simply let you query them on chosen inputs, you're screwed. As long as password and password_two are cryptographically strong, you won't be able to get A to output anything in any reasonable amount of time, and hence won't be able to get B to output the secret bit.

What this means is that Alice, who gets the obfuscated copies of both programs, will always learn the secret bit. But if Sam is a reasonable user (that is, he can't run long enough to brute-force the passwords) he won't be able to learn the secret bit. Fundamentally, the obfuscated pair of programs always gives more information than black-box access to them.

It remains only to show that the two programs can be combined into one program that still can't be obfuscated. And this is what Barak et al. did, completing their work and showing that general programs can't be obfuscated under their definition.

Now you can be forgiven for looking askance at this. You might, for example, point out that the example above only shows that it's hard to obfuscate a specific and mildly ridiculous program. Or that this program is some kind of weird exception. But in general it's not. There are many other useful programs that also can't be obfuscated, particularly when you try to use them in real systems. These points were quickly made by Barak et al., and later in other practical settings by Goldwasser and Kalai, among others.

You might also think that obfuscation is plain impossible. But here cryptography strikes again -- it's never quite that simple.

We can obfuscate some things!

Before we completely write off black box obfuscation, let's take a moment to go back to the password checking program I showed at the beginning of this post.

This function is an example of a 'point function': that is, a function that returns false on most inputs, but returns true at exactly one point. As you can see, there is exactly one point (the correct password) that makes this routine happy.

Point functions are interesting for two simple reasons: the first is that we use them all the time in real systems -- password checking programs being the obvious example. The second is that it turns out we can obfuscate them, at least under some strong assumptions. Even better, we can do it in a way that should be pretty familiar to real system designers.

Let H be a secure hash function -- we'll get back to what that means in a second -- and consider the following obfuscated password checking routine:

// Check a password and return 'true' if it's correct, 'false' otherwise
// But do it all *obfuscated* and stuff
//
bool ObfuscatedSuperSecretPasswordProtectedStuff(string passwd) {
  static string HARDCODED_SALT          = ...; // a random salt value
  static string HARDCODED_PASSWORD_HASH = ...; // H(salt + the correct password)

  if (H(HARDCODED_SALT + passwd) == HARDCODED_PASSWORD_HASH) {
    return true;
  }
  return false;
}

Note that our 'obfuscated' program no longer stores the plaintext of the password. Instead we store only its hash (and a salt), and wrap it inside a program that simply compares this hash to a hash of the user's input. This should be familiar, since it's the way you should be storing password files today.**
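The same construction translates directly into runnable form. In this hypothetical Python version, the 'obfuscated' checker embeds only the salt and digest; the plaintext password exists only while the checker is being built:

```python
import hashlib
import hmac
import os

def obfuscate_point_function(password: bytes):
    """Return a checker embedding only a salt and H(salt || password)."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password).digest()

    def check(guess: bytes) -> bool:
        # Constant-time compare of H(salt || guess) against the stored digest
        return hmac.compare_digest(hashlib.sha256(salt + guess).digest(), digest)

    return check

check = obfuscate_point_function(b"correct horse battery staple")
```

If the password is drawn from a large enough space, the salt and digest by themselves tell an attacker essentially nothing.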

A number of cryptographers looked at formulations like this and showed the following: if the password is very hard to guess (for example, it's secret and drawn at random from an exponentially-sized space of passwords) and the hash function is 'strong' enough, then the hash-checking program counts as a secure 'obfuscation' of the basic password comparison program above it.

The intuition for this is fairly simple: imagine Alice and Sam don't know the password. If the password is strong, i.e., it's drawn from a large enough space, then their probability of ever getting the program to output 'true' is negligible. If we assume an ideal hash function -- in the simplest case, a random oracle -- then the hard-coded hash value that Alice learns is basically just a useless random string, and hides all partial information about the real password. This means, in practice, that any general question Alice can answer at the end of the experiment, Sam can answer with about the same probability.***

Finding better definitions

More than a decade after the definition was formulated, there are basically two kinds of results about 'strong' virtual-black-box obfuscation. The first set shows that it's impossible to do it for general programs, and moreover, that many of the interesting functions we want to obfuscate (like some signatures and pseudorandom functions) can't be obfuscated in this powerful sense.

The second class of results shows that we can black-box obfuscate certain functions, but only very limited ones like, say, point functions and re-encryption. These results are neat, but they're hardly going to set the world on fire.

What cryptographers have been trying to achieve since then is a different -- and necessarily weaker -- definition that could capture interesting things we wanted from obfuscating general programs, but without all the nasty impossibility results.

And that, finally, brings us to the advances described in the WIRED article.

Indistinguishability obfuscation

In shooting down their own proposed definitions, Barak et al. also left some breadcrumbs towards a type of obfuscation that might actually work for general programs. They called their definition "indistinguishability obfuscation" (IO) and roughly speaking it says the following:

Imagine we have two programs C1, C2 -- which we'll describe as similarly-sized circuits -- that compute the same function. More concretely, let's say they have exactly the same input/output behavior, although they may be implemented very differently inside. The definition of indistinguishability obfuscation states that it should be possible to obfuscate the two circuits C1, C2 such that no efficient algorithm will be able to tell the difference between Obf(C1) and Obf(C2).
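A toy pair of 'circuits' makes the requirement concrete (trivial examples, chosen only to have identical input/output behavior):

```python
def C1(x: int) -> int:
    return 2 * x        # one implementation

def C2(x: int) -> int:
    return x + x        # a very different-looking circuit, same function

# The two agree on every input, so indistinguishability obfuscation demands
# that Obf(C1) and Obf(C2) be computationally indistinguishable.
same = all(C1(x) == C2(x) for x in range(-1000, 1000))
```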

While this idea was proposed years ago, nobody actually knew how to build any such thing, and it was left as one of those 'open problems' that cryptographers love to tear their hair out over. This remained the case until just last year, when a group of authors from UCLA, UT Austin and IBM Research proposed a 'candidate construction' for building such obfuscators, based on the new area of multilinear map-based cryptography.

Another interesting variant of this notion is called extractability obfuscation (EO), which implies not only that you can't distinguish between Obf(C1) and Obf(C2), but moreover, that if you could distinguish the two, then you could necessarily find an input value on which both C1 and C2 would produce different outputs. Moreover, other work indicates IO and EO give essentially the 'best possible' obfuscation you can provide for general programs.

The question you're probably asking is: so what? What can we do with indistinguishability obfuscation?

And this is where the WIRED article differs substantially from the reality. The truth is that IO will probably bring us major cryptographic and software advances -- it's already been shown to bring about succinct functional encryption for all circuits, as well as advances in deniable encryption. Moreover it can be used to build known forms of encryption such as public-key encryption based on new techniques that differ from what we use today -- for example, by obfuscating 'symmetric' primitives like pseudorandom functions.****

And if this doesn't seem quite as exciting as the WIRED article would imply, that's because we're still learning what the possibilities are. The new techniques could, for example, allow us to build exciting new types of security systems -- or else show us radical new ways to build systems that we already have. Even the latter is very important, in case advances in our field result in 'breaks' to our existing constructions.

What IO and EO will probably not do is make programs unhackable, since that implies something a whole lot stronger than either of these techniques currently provide. In fact, it's not clear what the heck you'd need to make programs unhackable.

And the reason for that is quite simple: even obfuscated software can still suck.

Notes:

Thanks to Susan Hohenberger and Zooko for help with this post.

* The bit in the Barak et al. proof doesn't have to be secret.

** With a pretty damn big caveat, which is that in real life (as opposed to cryptographic examples) users pick terrible passwords, which makes your password hashes vulnerable to dictionary attacks. We have a variety of countermeasures to slow down these attacks, including salt and computationally intensive password hashing functions.

*** God I hate footnotes. But it's worth unpacking this. Imagine that Alice learns 'some information' from seeing an average program containing the hash of a strong password. If the password is strong, then Alice can only guess the right input password with negligible probability. Then Sam can 'use' Alice to get the same information as follows. He formulates a 'fake' program that contains a completely bogus password hash and mails it to Alice. Whatever information she learns from his program, he takes and outputs as the information he 'learned'. It remains to show that if the password hash is strong (a random oracle), Alice isn't going to be able to learn more from a random hash than she would from the hash of a strong password.

**** The basic idea here is that symmetric encryption (like AES, say) can't be used for public key encryption, since anyone who learns your encryption key also learns your decryption key. But if you can obfuscate an encryption program that contains your symmetric encryption key, then the new program becomes like a public key. You can hand it out to people to encrypt with.

The XML data (for version 1.03) is an extract of all the information included on the playing cards in the source word processor document. Going forward I intend to maintain both versions in parallel.

I am hoping the XML version will allow people to consume the data in other documents, applications and systems, or help them create their own printable versions more easily. Like everything else in the project this is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.

As a demonstration of using the XML file, the Cornucopia project now has a Twitter account (@OWASPCornucopia), which tweets the attack text from a pseudo-randomly selected card twice daily. For example, the sequence of three (this time) tweets from a couple of hours ago today:

The card for Friday morning (GMT+0) is the Nine of Cryptography, which reads "Andy can bypass random number generation, random GUID...

...generation, hashing and encryption functions because they have been self-built and/or are weak"

Currently the card is selected from the whole pack each time, but this could (should?) be changed to randomly select a card from the deck until all cards have been dealt. The account's profile photo is updated to match the card for an hour, before it reverts to a more generic image. The tweets might just about be helpful as an application security awareness resource — perhaps as "appsec requirement of the day".

A trivial use, but it was fun doing some coding. And working on this helped me come up with a solution for another problem I have been thinking about.

"For an organization to really mature around application security, they need to be building security into their software from day one." -- Jim Manico
Jim Manico started the OWASP podcast series in 2008. In that time, he has recorded close to 100 interviews to keep the community updated on the latest project developments within OWASP. As Jim reaches his 100th episode, he reminisces about how the series was started, what his original vision was, and what he's going to do now that he has passed the reins over and moved on to other projects. We start with a question about the origins of the project and how it grew.
"It's easy to talk about the 'purity' of software development, but managing a fleet of already insecure apps is an equally difficult problem." -- Jim Manico
About Jim Manico
Jim Manico was elected as an OWASP Global Board Member as of January 1, 2013. He has been an active member of OWASP since 2008. He is the VP of Security Architecture at WhiteHat Security.
Jim's main passion at OWASP is supporting projects that help developers write secure code.

"There are a lot of security flaws in websites like Facebook and WordPress applications. Most of those flaws are because the developers first create the application and then consider the security." -- Abbas Naderi
PHP is one of the most used programming languages for the web. The problem with PHP has always been that it's easy to get started programming with PHP, but that's also one of its biggest flaws when considering application security. Abbas Naderi leads the OWASP PHP Security Project, which is a sample framework to demonstrate proper usage of the tools and libraries, as well as providing guidelines for new PHP projects. In this segment of OWASP 24/7, I talk with Abbas about the PHPSEC project as well as one of his other projects, RBAC.
About Abbas Naderi
Abbas Naderi Afooshteh is a renowned security expert in the Middle East. He has ranked first in many national and global CTFs and has been in the field for more than 8 years. He is the current Iran Chapter Leader at OWASP, and his 5 years of activity in OWASP have resulted in many projects, such as the OWASP RBAC Project, OWASP PHP Security Project and OWASP WebGoatPHP Project. He has participated in many other projects such as Cheat Sheets and ESAPI.
Abbas studied software engineering and information technology for his BS and MS and is now going to CMU to study Information Security for an MS+PhD. He spends many hours daily leading OWASP projects and mentoring new enthusiasts who join projects, as well as shaping bright ideas into OWASP projects. More can be found at https://abiusx.com/cv

Four years ago we announced SPDY, an experimental protocol designed to make the web faster. It has matured quickly since then: it’s been adopted by Chrome, Opera, Firefox and Internet Explorer, dozens of server and middleware vendors, and many large sites. SPDY also became the foundation of the HTTP/2 protocol developed by the IETF, and is continuing to serve as an experimental ground for prototyping and testing new features for HTTP/2.

When we set out on this journey the objective was to make the web faster and we wanted to share our latest results. Of course, as with every aspect of performance, the numbers vary for each site, based on how it is constructed, the number of downloaded assets and dozens of other criteria. That said, we’ve observed significant performance improvements in latency—as measured by time from first request byte to onload event in the browser—across all of Google’s major web applications.

                                    Google News   Google Sites   Google Drive   Google Maps
Median                                  -43%          -27%           -23%          -24%
5th percentile (fast connections)       -32%          -30%           -15%          -20%
95th percentile (slow connections)      -44%          -33%           -36%          -28%

The above results were measured for Chrome version 29 and compare HTTPS vs. SPDY for each application across millions of real user sessions with various connectivity profiles. SPDY delivers significant latency savings for users with fast connections, at the median, and for the long tail of users with high round-trip times.

In parallel with our contributions to the HTTP/2 standard, we continue to prototype further SPDY performance improvements through smarter compression, flow control, and prioritization. Our hope is that each of these will deliver better and faster SPDY and HTTP/2 protocols. We aren’t done yet—there are still many opportunities and ideas to explore to make the web faster!

Ethical Hacking Tips

Manipulating Numeric Parameters

Numeric parameters are control variables that drive almost every web application. As ethical hackers, we find that modifying these numbers is one of the most basic but also most important tasks we perform. During a vulnerability assessment, parameters are modified thousands of times in different ways to test the security controls of a web application. Over the years, one technique has remained the most common way to bypass the access restrictions of an application: incrementing or decrementing numbers.
One of the most common places we see parameters we wish to modify is in simple GET requests:
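A request of this kind (the endpoint here is hypothetical) might look like:

```http
GET /cart.php?item=1234&quantity=2 HTTP/1.1
Host: shop.example.com
```

Here `quantity` (or `item`) is the numeric value under our control.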

Because this type of testing is such a repetitive task, we will talk for a moment about efficiency. While we normally use a proxy to intercept, modify, and send parameters to the server, we can modify these faster and more easily than that. Because we encounter examples of this on almost every assessment, saving one or two seconds per test case is a valuable benefit. For this we turn to a small Firefox utility called URL Flipper. With this small add-on we can use the Ctrl-Shift-Up or Ctrl-Shift-Down hotkeys to quickly "flip" through many parameters as well as view the output rendered in the browser.

As shown above, we used this to quickly flip through 10 or 20 "quantity" values. While this method is useful for relatively small increments, it won't serve us well for all test cases.

Incrementing numbers may often require us to search a far greater span of numbers than is feasible by this method. For very large groups of numbers, where we may be searching for account numbers, profile IDs, etc., we can use a more powerful automated assistant. Burp Intruder provides one of the most powerful scripted search-and-replace functions available to ethical hackers.

Here we use Burp Intruder to specify a position (the 'quantity' parameter), and we will then set a type of payload. Below shows the payload configuration where we specify 'Numbers' as the payload type, and configure the range of values we wish to search.

This is an extremely powerful way for us to search a larger set of parameters. But as we scan a larger range, making sense of the results becomes increasingly difficult. We don't have the time to manually examine 200 or even 2000 results, so we need a way to differentiate them. Sorting by response length is an easy way to look for pages that might be different from the others. While this example does not show us a result set where we can sort for anything of value, sorting will be required in most real-life testing. Burp also has a filter where we can search by string, regex, or response code.
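The enumerate-and-diff idea is easy to script as well. This sketch (endpoint and data are invented) mimics a 'Numbers' payload and then ranks responses by length so outliers float to the top:

```python
def number_payload_urls(base: str, lo: int, hi: int) -> list:
    """One request URL per value, like Intruder's 'Numbers' payload."""
    return [base.format(n) for n in range(lo, hi + 1)]

def rank_by_length(responses: dict) -> list:
    """Sort (id, body) pairs by body length, longest first."""
    return sorted(responses.items(), key=lambda kv: len(kv[1]), reverse=True)

urls = number_payload_urls(
    "https://shop.example.com/cart.php?item=1234&quantity={}", 1, 20)

# Toy responses: one ID returns something different from the rest
responses = {n: "Access denied" for n in range(1, 21)}
responses[17] = "<html>...another user's full order history...</html>"
ranked = rank_by_length(responses)
```

In real testing the bodies come back from the server, but the triage step is the same: the anomalous response sorts first.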

There are many situations where application arithmetic can be abused to bypass business logic. One classic financial example is using fractions of cents. Imagine we have a function that transfers money from one account to another; how can we be sure the application properly handles all decimal values? What if an application ignored the values past the second decimal, but the database still honored the transaction? To test this we will replay the function several hundred times and compare the amounts in both accounts to what they were previously. Below shows an example of a request where such an attack might be feasible:
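The arithmetic mismatch is easy to sketch; the rounding behavior below is an assumption about a hypothetical front end, not any particular product:

```python
from decimal import ROUND_DOWN, Decimal

transfer = Decimal("0.009")   # a sub-cent transfer amount

# Hypothetical front end that drops digits past the second decimal place:
debited = transfer.quantize(Decimal("0.01"), rounding=ROUND_DOWN)

# Hypothetical backend that honors the full-precision value:
credited = transfer

# The source account loses nothing, yet the destination still gains;
# replayed repeatedly, the credits accumulate while the debits stay at zero.
```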

We can feed this again to Intruder to replay this. Instead of ‘numbers’ as a payload we will use ‘null payloads’:

These techniques can be applied to real attacks on the vast majority of web applications. And now that we have the technical methods covered, we can focus again on the logical attacks. When reviewing an application, we must constantly be aware of the function we are performing, and what control variables we possess. Are we viewing an order? Do we control an order ID parameter? Are we transferring money? Do we control the source account ID? This is what we must determine for every secure function within an application.

Another way we can sometimes subvert application arithmetic is by attempting to overflow stored integer values. We must remember that not all systems treat integers the same. A front-end Java application may handle the number 2,147,483,648 without any problem, while this same number may overflow a 32-bit signed integer on a backend system; the value may now be represented as a negative number.
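Python integers don't overflow, but we can reinterpret the same bits the way a hypothetical 32-bit signed backend would, using the standard struct module:

```python
import struct

n = 2_147_483_648   # 2**31: fine as a Java long (or a Python int)

# Pack as an unsigned 32-bit value, then unpack the same bytes as signed
wrapped, = struct.unpack("<i", struct.pack("<I", n & 0xFFFFFFFF))

# A backend storing the value in a signed 32-bit integer now sees -2147483648
```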

In summary, below are four key areas to look for when the user controls numeric values used by the application:

1. Sequential identifiers (order IDs, account numbers, profile IDs) that can be incremented or decremented to reach other users' data.
2. Large numeric ranges that can be enumerated with automated tools such as Burp Intruder.
3. Decimal and fractional values, where front-end rounding may disagree with back-end arithmetic.
4. Integer boundaries, where a value that is valid on one system overflows on another.

While this is not a complete checklist for testing these parameters, it is something testers should keep in mind. Every application is unique and has a highly subjective threat model. In ethical hacking assessments, performing a thorough analysis of the functionality and then developing custom attacks must always be the first priority.

Sarah Baso is the Executive Director of OWASP. Her day-to-day responsibilities include managing a membership of over 43,000 people in 100+ countries. What does it take to run an organization this size, and how do you prepare for the future without getting bogged down in the details?
About Sarah Baso
Sarah is based in San Francisco, California, USA and has been the Executive Director of the OWASP Foundation since April 2013. In this role, she supervises the paid OWASP staff in addition to administering all programs and operations of the OWASP Foundation, reporting to the OWASP Board of Directors.

As the Projects Manager for all projects at OWASP (the Open Web Application Security Project), Samantha Groves has deep visibility into the 140 or so projects currently on the boards at OWASP. We start our discussion with what her typical day looks like and then move into how OWASP is changing and the different models for project frameworks.
About Samantha Groves
Samantha Groves is the Project Manager at OWASP. Samantha has led many projects in her career, some of which include website development, brand development, sustainability and socio-behavioural research projects, competitor analysis, event organisation and management, volunteer engagement projects, staff recruitment and training, and marketing department organisation and strategy implementation projects for a variety of commercial and not-for-profit organisations. She is eager to begin her work at OWASP and help the organisation reach its project completion goals.
Samantha earned her MBA in International Management with a concentration in sustainability from Royal Holloway, University of London. She earned her Bachelor's degree majoring in Multimedia from The University of Advancing Technology in Mesa, Arizona, and she earned her Associate's degree from Scottsdale Community College in Scottsdale, Arizona. Additionally, Samantha recently attained her Prince2 (Foundation) project management certification.

On today's segment, we're going to take a different approach from our normal format. I was at the AppSec USA Conference in New York City last week and was asked to chair a panel for the game show "Wait, wait... don't pwn me!". This is the full recording of the session. As you listen, keep in mind, every situation described within the game is true. Let's start first with the introductions of Chris Eng, Josh Corman and Space Rogue.

Kate Hartmann is Operations Director of OWASP. She is responsible for creating and maintaining the platform for the OWASP organization. Kate has a unique perspective on how virtual meetings are becoming an important tool for the global community. We start our discussion with Kate talking about her typical day at OWASP... which begins with a full pot of coffee to get her jumpstarted.
About Kate Hartmann
Kate joined the OWASP Foundation May 2008. Her work within the OWASP Foundation includes supervising and facilitating the completion of operationally critical tasks. She provides direction to the operational team by mapping out cross-committee objectives and identifying opportunities that promote the Foundation's short term and long term strategic goals.
Kate has a B.A. in English and History from VA Tech in Blacksburg, VA. Prior to joining the OWASP Foundation, she worked with Government funding sources in the Healthcare Industry.

Every time a major password breach occurs, the compromised email addresses and passwords are available for hackers or criminal enterprises to download and analyze. Unfortunately, the breached companies often improperly protect their passwords, and as a result it is easy for hackers to obtain the original password for each user. Attackers will collect and store these compromised credentials and then use this information to take over the user's account anywhere else on the web where the user has reused the username and password.

Account Take Over is Distributed and Automated via Botnets

Armed with millions of email addresses and passwords from the breached website, attackers use these credentials to programmatically attempt to log in to websites all over the web. This activity is not conducted by a single individual sitting at their computer and manually entering usernames and passwords. Instead, criminal enterprises will leverage scripts, automation, and botnets to distribute the attack across many computers all around the world. This automation allows the attacker to cover their tracks by initiating the login attempts from real machines all over the world.

This type of attack is known as credential stuffing; it is also called account takeover.

A real world example - How Facebook Is Protecting Their Users

Facebook was not compromised in any of these recent attacks; however, as a large target and an organization that is acutely aware of the risk of third-party breaches, their security group took immediate action. Facebook mined the compromised data from the Adobe breach to identify Facebook accounts that were potentially at risk, and enabled additional security controls for any account within the Adobe breach that used the same password on Facebook.

What You Can Do - Comparing Compromised Passwords with Your Web Application's User Info

Here's how to check whether the password information within a data breach may put your users at risk. Note: this may not be realistic for every organization to perform due to the technical requirements and resources needed.

Obtain the compromised user data - Download a data dump of the compromised information. This may take some searching but the information is available online.

Determine the passwords associated with user email addresses - This step is straight password cracking. The work required will depend on the original method used by the website to protect their passwords. Unfortunately, in many cases the passwords are poorly protected with either encryption or a weak hash such as MD5. The current best practice for password storage is bcrypt or PBKDF2. Read here to find out how Sophos analyzed the Adobe breach.

Test Your User Passwords - Next we need to compare the compromised data with your web application's usernames and passwords. Important: this step does not require you to view the passwords of your users. Instead, we'll simulate the login process in your application to validate whether the compromised password from the breached website matches the user in your web application. Here are the steps:

Compare the usernames within the breached data (from step 1) with usernames in your web application. Note any matches. These are the accounts we want to test in your application.

Work with your development team to identify the authentication routine for your web application. This will include a step where the password provided by the user is hashed and then compared against your data store of usernames and hashed passwords.

Build a script to perform the hash and database comparison. The purpose of using a script is to avoid having to manually interact with your website UX for each test.

Take the list of impacted usernames (from step 3.1) and their actual passwords (from step 2) and run them through the script (from step 3.3). If a login is successful then we've identified a reused password that is at risk.

Protect your users - For any matches in step 3.4 you'll want to immediately take action to protect the account. This can include locking the account, forcing a password reset, or whatever actions are typically taken by your organization in the event of account takeover.
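The scripted check in steps 3.3 and 3.4 can be sketched as follows, assuming (purely for illustration) that the application stores PBKDF2-HMAC-SHA256 hashes; the usernames, salts and credential data here are all invented:

```python
import hashlib
import hmac

def app_hash(password: str, salt: bytes) -> bytes:
    """Stand-in for the application's own authentication routine (step 3.2)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical application store: username -> (salt, stored hash).
app_store = {
    "alice@example.com": (b"salt-a", app_hash("hunter2", b"salt-a")),
    "carol@example.com": (b"salt-c", app_hash("s3cret!", b"salt-c")),
}

# Cracked credentials recovered from the breach dump (steps 1-2).
breached = {"alice@example.com": "hunter2", "bob@example.com": "letmein"}

# Step 3.4: replay each breached credential through the app's hash routine.
at_risk = []
for user, cracked_password in breached.items():
    if user in app_store:
        salt, stored = app_store[user]
        if hmac.compare_digest(app_hash(cracked_password, salt), stored):
            at_risk.append(user)

print(at_risk)  # alice reused her breached password
```

In a real run, `app_store` would be a read-only query against your user database, and any match would feed directly into the account-protection step below.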

What You Can Do - Securely Store Your Passwords

Ensure you protect password data in your application by using an appropriate hashing algorithm. Approaches such as encryption, MD5 hashing or any sort of homemade manipulation are not sufficient. Instead you should use scrypt, bcrypt or PBKDF2. More information on password storage can be found at the OWASP Password Storage Cheat Sheet.
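A minimal stdlib-only sketch of this guidance, using PBKDF2-HMAC-SHA256 via `hashlib.pbkdf2_hmac` with a random per-user salt and a constant-time comparison; the iteration count is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both alongside the user record."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute with the stored salt; compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong horse", salt, digest))    # False
```

The per-user salt defeats precomputed rainbow tables, and the high iteration count is what makes the cracking step described above expensive for an attacker.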

In this segment of OWASP 24/7, I talk with Kelly Santalucia about what it takes to grow OWASP, how she's working with the outreach foundation, the outreach program for kids, the diversification of the membership... things that are helping the community grow. We also talk about what OWASP will look like in the future as virtual chapter meetings become an integral part of the platform. I began by asking Kelly what her job responsibilities are with OWASP.

A new book about application security for Chief Information Security Officers (CISOs) has been announced by OWASP.

The Application Security Guide For CISOs seeks to help CISOs manage application security programs according to their own roles, responsibilities, perspectives and needs. The book examines CISOs' responsibilities from the perspectives of governance, compliance and risk. The primary author, Marco Morana, is a leader in application security within the financial services sector, but the book itself is sector, region, technology and vendor neutral. Along with fellow contributors Tobias Gondrom, Eoin Keary, Andy Lewis and Stephanie Tan, I had the pleasure of contributing to Marco's content, reviewing the text and producing the print version of the book.

The 100-page document provides assistance with justifying investment in application security, managing application security risks, building, maintaining and improving an application security programme, and the types of metrics necessary to manage application security risks and application investments. There is a standalone introduction and executive summary. The core of the book is then collected into four parts:

I : Reasons for Investing in Application Security

II : Criteria for Managing Application Security Risks

III : Application Security Program

IV : Metrics For Managing Risks & Application Security Investments

There are also additional appendices with information about calculating the value of data and cost of an incident, as well as a cross-reference between the CISO guide and other OWASP guides, projects and other content.

Whenever you roll out a new security architecture, collaboration with the architecture and development teams is fundamental to success. Push back from those teams can come in all sorts of ways; they may think the security team is overreaching. Development teams justifiably worry that the security requirements will swamp the budget and make them blow their timeline.

I was on a project where we did a review with the tech leads, and the comment at the end was "I am surprised it was so boring." I said, "I take that as a compliment." Security architecture shouldn't be about making people's heads spin or trying to be a security rockstar. Just because 80% of the industry does it, doesn't mean it's the most effective choice. Make things simple to understand, simple to build, simple to integrate. Boring is good. Better yet, your designs may actually make it into a real, live production system.

On a related note, an interesting company that excels in risk management is Markel (a so called baby Berkshire), and on Markel's Q3 earnings call, Tom Gayner picked up on a similar theme (and managed to work in a Princess Bride reference to boot):

"Prior to this call, I was speaking with one of our long term shareholders about the conference call process. He told me that he has owned the stock for about 20 years and called us boring. He said that he really couldn’t imagine us saying anything in the call that would change his mind about Markel and his long term ownership of the stock. I thank him for his honesty and actually I agreed with him.

Our number one goal is actually to still be here 20 years from now and delivering a report just as boring as this one. I suspect what he told me was true for our loyal and long-term owners to provide us with the capital we need to run this business. I also suspect that it’s true for shorter term followers of the stock that usually issue a sell recommendation immediately following this call.

As the character Inigo Montoya said in The Princess Bride, "You keep using that word. I do not think it means what you think it does." I will leave it to those of you who have access to the long-term chart of Markel to decide which unchanging point of view you wish to embrace. The force that propelled the 27-year line on the chart up and to the right cannot be found within the cells of the spreadsheet.

Boring works for me when it comes to talking about our financial results. We shouldn’t be that excited. I am all in favor of grinding it out along the same lines that we have through our 27 years as a public company. We’ve looked after the capital that you entrusted to us and we produced wonderful returns for the owners of this company.

Roughly speaking, the longer you own Markel the more money you can make. And by the way, while it may look and sound boring, I can promise you that we’re having a lot of fun doing this. There is not a day that goes by when I don’t hear laughter at this office. In addition, there are some days when we are simply stunned by what happens. I promise you that we are not bored."

There's one more Markel story worth sharing, and that is when Tom Gayner noticed that whenever the phone rang in Steve Markel's office, Steve would wince. Tom asked why, and Steve said, "there's no such thing in insurance as a good incoming call."

So it is with security, strive for boring.

I went to the Cloud Identity Summit in Napa this year. Just like every year, there were great talks that showed new ways to solve old problems. One of my favorites was from Amazon on their cloud identity and security work. It was an incredibly boring talk, actually. Watching Amazon's IAM progress can be like watching grass grow.

Too true. So why was the Amazon talk one of my favorites? It was talking about a real system, deployed, at very large scale, that normal users can build, deploy and run. That counts for much, much more. It's substance. Plow horse, not show horse.

Josh Corman's recent tweet is true for sure:

The reverse of that is also true - the difference between doing lots of little things better (checklists) versus el grande silver bullet "solution."

I was talking with someone who spends time on visualization, and I mentioned how helpful better metrics on control efficacy would be - measuring their resilience in different scenarios. The response I got was "that's not sexy" and that it was better to focus on threats and graphics. Lots of people in the industry think that. I do not see it that way - that's not engineering, finding and building margins of safety; it's a fashion show, tailored for security conferences.

Security doesn't need new protocols as much as it needs far better integration of the ones we already have. Integration is a slog, but the long-run payoff of having a more resilient system is worth it. Stay boring, my friends.

OWASP is a worldwide nonprofit organization with a mission of making application security visible for all. In short, we're trying to make the world a better place by providing free security resources and communities.

If OWASP has helped you or your organization please consider supporting our nonprofit. Here are a few ways to help:

If we reach 100 supporters, Thunderclap will send a single message via each supporter's chosen option (Facebook, Twitter, etc.). That's it, and we can potentially reach 50k+ people. However, if we don't reach the minimum number of supporters we get nothing.

https://www.thunderclap.it/projects/6403-hackers-hit-time-square-nyc

Attend OWASP AppSecUSA. The event is next week in NYC and will be the most concentrated group of application security professionals in the world. There is an amazing lineup of speakers and events.

Support your local OWASP chapter. We have chapters in over 100 countries around the world. Find your local OWASP chapter here.

I used to think we had security problems, and that we'd then figure out how to integrate the security solution.

Actually, the security basics are long figured out; it's the integration that's killing us. We don't have a security problem with integration requirements. We have an integration problem with security requirements.

The why is pretty easy to understand.

Security is mainly an isolated department. Not really ops, not really arch and definitely not dev

Security is audit focused

Security is swimming in "products"

These problems are a direct result of how most security organizations, and the industry as a whole, function. The combined effect leads to security's integration problem.

Because information security is separate, it does not get to collaborate in the key architecture, design, and deployment decisions in an effective way. At best, most groups function like a QA team that runs black box tests for vulns. It's an important activity, but if that is the apex then you are left with Security as QA: a point-in-time activity.

Audit is of course a great example of an important but limited, point-in-time activity. It's not integrated along process, technology or organizational lines.

And so how does the security industry resolve its tactical role? With a panoply of products, of course. Sure, the security product market has come a very long way over the last decade, but the key questions with these tools are not so much about their efficacy, but rather: will they integrate with my process and my architecture? Sure, static analysis can generate findings, but how do I parse through them? How do I associate countermeasures? And by the way, who does this?

Sure, a WAF can catch some bugs, but will it work with my session manager? Sure, the next great identity protocol can give me more fine-grained access control, but how do I get it to work on iOS? And on it goes.

To resolve these issues, I would suggest we go back to the root of the problem and tackle them head on.

Security is mainly an isolated department. Not really ops, not really arch and definitely not dev

Security is audit focused

Audit will always have a role to play in security; there has to be an objective scorecard and someone who sets the hurdles. This function will remain, but it needs to be put in the right context to make it useful. The outputs of audit should over time lead to better design patterns that are based in a long-term mindset for evolving the security architecture. Not auditor-driven fire drills - "ok quick, we gotta go deal with privileged accounts because even though we have had this problem for 20 years and so has everyone else, the auditor flagged it. Quick, I need a seven-figure budget authority - Aux Barricades!"

Security is swimming in "products"

The audit focus leads directly to Infosec's product problem. The vendors are nothing if not observant. They know what's on the auditor's checklist this year. They will build products (or at least marketing materials) that will solve those problems, or at least enough of them to get the auditor to cross off the check box. Besides the obvious cost and resource problem here, there's a quality problem. Audit isn't architecture; it's more a fashion show with problems du jour and acceptable modes of solution.

Again, audit is not going away and neither are products. But just like audit, products need to be put in the proper context. That context is not what your audit checklist says; it's your business process, your users, your customers, your threat model, and your system architecture.

Reorienting means spending as much time in product selection and architecture on the APIs, the integration points, and the message flows. When I do Web services security design work, one pattern that almost always appears is the security pipeline pattern. Why? It's boring; it's where the security enforcement happens. But that's why it's important! It's where the security enforcement happens. Not the token or protocol pattern, but whether it is a filter, a container, an agent, or a standalone server. That's as critical as any other decision.

Why did Active Directory win? Sure, they have smart programmers. So does IBM, so did Sun (rest in peace), so did lots of others. AD has connectivity and integration to users, to sessions, to servers and so on. Lots of people tried to get Kerberos to work. The differentiator wasn't Kerberos; Microsoft did the hard, crucial integration parts.

Security Token Services (STS) are a subtly powerful use case. Give me a Kerberos ticket, I give you a SAML token. Give me an OAuth token, I give you a SAML token. Support many different clients and keep the back end simple. Integration wins, not slamming pizza boxes into racks and telling scary threat stories.

POCs should never be run in isolation. They have to be run against as production-like a system as possible. Every interface and API needs to be combed over to see exactly how it integrates with your system. What system has session management? Where are the users logging on? Do they push or pull policies? What tokens are supported? Are agents required? The list of architecture questions is long, but it's a path that must be walked. Security projects and security architectures succeed or fail not on the control or solution quality, but on how well the security system integrates with the system as a whole.

Below are the setup instructions to configure a virtual security training lab that runs within an isolated virtual machine. Using this lab you can perform hands-on security testing that leverages a variety of prominent application security flaws, including those mentioned in the OWASP Top 10.

In this segment, I talk with Tom Brennan, the organizer of AppSecUSA 2013 in New York City. The conversation centers around what's going on in New York, why Tom took on the project and what makes AppSec conferences special.
About Tom Brennan
Tom Brennan has been a volunteer with the OWASP Foundation since 2004, when he founded the New Jersey Chapter after serving on the Board of Directors for the FBI InfraGard program in New Jersey. The NJ OWASP Chapter later merged with the New York City Chapter in 2006.
Tom was appointed to the Global Board of Directors in 2007 by his peers and was re-elected by the membership in 2012 for another two year term.
During his leadership of the OWASP Foundation he has led many global and local initiatives, including governance, fundraising via conferences and membership, and business marketing.

I used to think that we had security problems to solve, and that the role of a security architect was to identify threats and to design, implement, and integrate controls. I was wrong.

We don't have security problems. Not really. Our protocol life spans are measured in decades. SSL, Kerberos, SDSI and SPKI have all been around for decades. The problem is not the protocols.

The hard part isn't mapping threats to controls either. True, the industry as a whole isn't great at this, but it can be learned and it's getting better.

No, the problem is integration.

I used to think we had security problems, and that we'd then figure out how to integrate the security solution.

Actually, the security basics are long figured out; it's the integration that's killing us. We don't have a security problem with integration requirements. We have an integration problem with security requirements.

When you go to RSA or any security conference, the show floor is awash in products. They all do something or other, but can you get them to work in your production system? And will you get the same quality as they show in the cooked-up demos? Probably not. Why not? It works fine in the demos; aren't those real examples? Sure, but they are not your examples. We can think of a solution, but will it work with Google? Does WebSphere support it? What about the keystore? Can we bind it to OTA? And on it goes.

It's not just solutions; it's threats, vulns and countermeasures too. Go right down the OWASP list.

SQL Injection? Need to integrate input validation into front end handlers and into the data model. Need to integrate prepared statements, parameterized queries, and stored procs into the backend DAO.
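As an illustration of the backend half of that integration, here is a parameterized query using Python's built-in sqlite3 module; the table and the injection payload are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# User-controlled input carrying a classic injection payload.
attacker_input = "alice' OR '1'='1"

# Parameterized query: the payload is bound as data, never parsed as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)  # [] -- no match, the injection is inert
```

Had the query been built by string concatenation, the same payload would have returned every row; the placeholder binding is exactly the DAO-level integration the post is talking about.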

CSRF? Need to integrate a nonce into every request. Need to convince the team that spent the last decade keeping state out of the middle tier that we should put it back in. Need to find a way to reboot a server even with the middle-tier state.
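One common way to square the nonce requirement with a stateless middle tier is to derive the token from the session identifier with an HMAC rather than storing it. A sketch under that assumption; the key handling and session IDs are hypothetical:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side key; in practice, managed config

def issue_token(session_id: str) -> str:
    """Derive a per-session CSRF token -- no middle-tier state required."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_token(session_id: str, token: str) -> bool:
    """Recompute and compare in constant time on each incoming request."""
    return hmac.compare_digest(issue_token(session_id), token)

token = issue_token("session-123")
print(check_token("session-123", token))  # True
print(check_token("session-456", token))  # False -- token bound to session
```

Because the token is recomputable from the session ID and the key, any server behind the load balancer can validate it, which addresses the reboot concern above.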

Elevation of privilege? Need to insert a security pipeline that extracts the attributes from the signed token and integrates them with the internal authZ code.
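A toy version of such a pipeline step might look like the following, using a simple HMAC-signed token in place of any real token format; the key, token layout and attribute names are invented for illustration:

```python
import base64
import hashlib
import hmac
import json

KEY = b"shared-pipeline-key"  # hypothetical secret shared with the issuer

def parse_signed_token(token: str) -> dict:
    """Verify the signature first, then release attributes to authZ code."""
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(KEY, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad signature -- reject before authZ runs")
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Issuing side (e.g. the STS), shown here only to make the demo run:
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "alice", "role": "auditor"}).encode()
).decode()
token = payload + "." + hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()

attrs = parse_signed_token(token)
print(attrs["role"])  # verified attributes flow on to the internal authZ check
```

The key point is the ordering: the pipeline verifies before it parses, so unverified attacker-controlled attributes never reach the authorization decision.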

The mapping from problem to solution is figured out, and so are the protocol choices; what is left is the hard part - integrating with real systems. The security industry has not really tackled this problem at scale, and that's where we need to go next: security integration patterns - what's necessary for security to collaborate within the system, factoring in first mile and last mile integration, communication protocols, persistence, session state, and interfaces.

"Schiphol established itself as the world’s most admired and eagerly studied airport. It grew rapidly and became a central hub of Holland’s economy and transport system. In the late 1980s, to resolve the problems of ever-growing congestion, architects Jan Benthem and Mels Crouwel were given the job of enlarging and improving Schiphol.

The partners have been dubbed ‘the Houdinis of the Polder’ by critic Art Oxnaar for their ability to solve complex architectural problems. They have designed high-prestige buildings such as the Anne Frank Museum and are now integrating a new subway line and bus station into Amsterdam’s Central Station, a project likely to change the face of the city…instead of using separate buildings (or parts of buildings) for separate functions (arrivals, departures, shopping, etc.), the architects insisted on just one sleek grey-white steel and concrete building, in which everything was integrated…’Normally, everything is split up and problems are solved separately,’ says Jan Benthem, the airport’s chief architect since 1985. ‘That makes individual problems easy to solve, but the connections between problems become very complicated and something simple ends up in a real mess. If you integrate it in the first place, that turns out to be the most simple solution.’

Schiphol’s integrated structure allows huge volumes of freight and passengers to circulate at high speed and with remarkable precision. The simplicity and flexibility of its basic grid design (the grid is even visible on the airport’s floor tiles) means different elements in the building can be switched around constantly to meet ever-changing needs. The complex and huge flows of people and cargo are shifting constantly. Even small changes in one area will ripple consequences through the entire system. For example, if fewer passengers use one ‘finger’ of the site, the customs desks, shops, or bus station all have to be modified. The key to solving these problems is a mixture of quick thinking and careful preparations. ‘You must have a plan, but you also have to be ready to change it at the last minute or to make a decisive, sudden completely unexpected movement to arrive at the place you want.’ A rigid approach would be doomed. ‘You must never say: “I’ve done my work in advance and nothing will keep me from my path.” We don’t plan the track; but we plan where we want to be. We have several tracks in mind and we are always ready to change track or re-group or have a new solution or be able to react at the last moment. We tried to make Schiphol so flexible that you can always change course. You need a simple system where, if something goes wrong, you always have a second, third, or fourth solution at hand. For example, we always insist that the buildings have strong floors. When you build an area you must always expect that it will be used for something else. It starts out as a waiting area, but maybe they want to build shops there. Maybe they want a bank, too, which has a heavy safe in it. When the traffic flow changes, it becomes perhaps a baggage-handling area with heavy machines or a big hole in the floor…no grand visions, but clever solutions’

Schiphol, meanwhile, is grappling with barely less complex questions of identity and scale. As it grew from seventeen million passengers in 1989 to thirty-eight million a decade later, Schiphol mutated into a small city, with burgeoning numbers of offices and cafes and a shopping centre. The airport has become so large – and its attendant congestion and pollution so irksome – that plans were even floated (before being rejected as too expensive) to relocate it to an artificial island twenty miles into the North Sea connected to land by a high-speed rail-link. The culture is changing too. ‘The whole airport environment has changed in the last ten years. From a functional machine for traffic it has become much more of an environment for spending time and money in,’ says Benthem. ‘Airports of the past were places where you basically didn’t want to be, just a space to pass. It’s nice to make spaces where people enjoy themselves and like to be.’

The authority at Schiphol has come up with a radical new problem. ‘They always said to us: “We want sober and functional.” Now the new manager thinks the time for sober and functional is over. He says he likes the cosy atmosphere of the airport at Christmas. It’s very nice at Christmas. We have lights everywhere, trees every ten metres. The manager said: “We need to have the airport like Christmas all year round.” Well, Holland is an entirely artificial country. You want Christmas all year round? We can fix that, no problem.’

The airport has fundamentally altered, though. It makes most of its income from shopping rather than anything related to flying. That means Benthem must juggle totally contradictory imperatives. ‘In an airport you want the best flows, the most obvious route from one point to another. In a shopping centre, you want people to get lost. You let them in and never let them go out again! So you have to combine that in the airport where it is changing from a machine for traffic into a machine to generate money. So you have to have a fine balance between finding your way and losing your way! You have to realize the problem and make the best of both worlds. What is the shortest way from your car to the airport but that makes it impossible to miss the shops? That’s the clever solution.’

Solutions, solutions. Problems, problems.

In football, Johan Cruyff says, ‘Simple play is also the most beautiful. How often do you see a pass of forty meters when twenty meters is enough? Or a one-two in the penalty area when there are seven people around you and a simple wide pass around the seven would be a solution? The solution that seems the simplest is in fact the most difficult one.’ Benthem takes the idea a stage further: ‘I think it is very Dutch to look for a simple solution. And the biggest thrill in our work is to find an even simpler solution. That is what we like. In the end the most satisfying solution is the one where you have cleared everything away and there is no solution at all anymore but, at the same time, the problem has been solved. That’s the nicest way of doing it.’”

Mark O'Neill and I just published Top Ten Security Considerations for Internet of Things. It was a lot of fun to work on, on a personal and professional level. I have been a big fan of Mark's work for a long time. I only got to work with him once before, when we did a full day of Web Services security at the OWASP AppSec conference; we had a great lineup of speakers including Mark and Brad Hill.

Turns out that Web services play a leading role in the Internet of Things, and the space is so full of amazing use cases right now. It's great to be able to tackle the security issues and work toward more secure applications than the industry managed for the original Internet.

We have learned a lot in 20 years on the web, and now is the time to apply it to IoT. It's not that simple, though, because the nature of IoT systems throws some new curveballs into consideration. The paper is available here, and below is a summary of the top ten.

The Internet of Things (IoT) is an industry megatrend that holds the promise to open up new ways of doing business and communicating. The core difference between the Internet of Things and previous computing revolutions is that the human user is usually the catalyst: we go to our PC to do some work, we go to the Web to do some research or check mail. In the Internet of Things, the thing talks back.

Context matters in security, and that goes double for IoT. The IoT use cases that have appeared so far tend to be very domain specific, and that trend looks set to continue. We will examine scenarios in the Automotive and Utilities industries to look at the unique security considerations in those IoT environments.

So let’s “state the problem” by listing the Top Ten Security Considerations for the Internet of Things:

1. Protocol Proliferation

This is the Web, but not as we’ve known it. The myriad of IoT protocols makes the security architect’s job vastly more complicated than web app security, which deals primarily with HTTP.

2. Initiation

IoT has many different ways to initiate the protocol dance: active clients, passive clients, client initiation, and server initiation.

3. Keys

Many current IoT devices rely on hard-coded access keys, leaving them vulnerable to brute force, spoofing, and other attacks.

4. Names

The IT industry has become reasonably good at identifying human users, with Active Directory, LDAP, and application user databases. But objects? Not so much. Consistent naming is the key to defining and enforcing policy.

5. Constrained Devices

Despite the challenges in IoT, we do have many security protocols to choose from; however, deployments are limited by the processing power on the device side.

6. Time

Many IoT technologies, such as NFC (Near Field Communication) on smartphones and similar devices, have no concept of time. This may not seem like a security issue, until you realize that virtually all authentication protocols use time as a primary defense mechanism.

7. Usability

Will existing protocols like Kerberos, X.509, Federation, OAuth, SAML, and others be up to the challenge of securing machine-to-machine communications when there is no user present to initiate?

8. Patching

Vulnerabilities will be found in IoT systems, but how will they be patched? IoT systems require management systems for patching and versioning.

9. Stunt Hackers

Hacking IoT is a great way to generate headlines, so there will be an endless flow of security research; the more interesting the device, the greater the attacker interest.

10. Ugly failure modes

IoT apps are real things: when they fail, so does your power, your supply chain, your fleet tracking, and so on. Worse, as we discussed in #7 (Usability), retry and restart may be somewhere between difficult and impossible.

Sarah Baso is the Executive Director of OWASP. Her day-to-day responsibilities include managing a membership of over 43,000 people in 100+ countries. What does it take to run an organization this size, and how do you prepare for the future without getting bogged down in the details?
About Sarah Baso
Sarah is based in San Francisco, California, USA and has been the Executive Director of the OWASP Foundation since April 2013. In this role, she supervises the paid OWASP staff in addition to administering all programs and operations of the OWASP Foundation, reporting to the OWASP Board of Directors.

Have you ever wondered about your application’s attack surface? What URLs will respond to requests? What HTTP methods will they respond to? And what parameters can be passed in? You probably think you know what is exposed, but do you really?

Why is this something you should even care about? I’d suggest a couple of reasons:

Your attack surface is where bad guys can reach out and touch your application. More attack surface = more possible inputs to worry about. More attack surface means more input validation and probably more contextual encoding to avoid common problems like cross-site scripting (XSS) and SQL injection. Attack surface isn’t necessarily bad but it is something you need to be aware of.
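The contextual encoding mentioned above can be sketched in a few lines: every extra input is another value that must be encoded for the context it lands in (here, HTML body text) before it is rendered. This is a hand-rolled illustration only; a real application should use a vetted library such as the OWASP Java Encoder.

```java
// Minimal sketch of HTML-context output encoding. Illustrative only;
// prefer a maintained encoder library in production code.
public class HtmlEncode {
    static String forHtml(String input) {
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A script tag in user input is rendered inert as text.
        System.out.println(forHtml("<script>alert(1)</script>"));
        // prints: &lt;script&gt;alert(1)&lt;/script&gt;
    }
}
```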

Attack surface has a tendency to creep up on you. Things get added to solve a particular problem and then they are forgotten as developers move on to new user stories and new functionality. I once tested a web application where, if you passed in the parameter “d” to any page in the application, it would delete the order referenced by that parameter value. The “d” parameter never showed up in any HTML rendered by the (non-administrative) pages in the application, but the functionality had been cut and pasted into every single page regardless. So if a bad guy got lucky and tried to send in a “d” parameter…
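The copy-pasted “d” handler described above would have looked something like the following reconstruction (all names are hypothetical; the original application's code is not shown in this post). The point is that the dangerous logic ran on every page, with no authorization check, whether or not the page ever advertised the parameter.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical reconstruction of the cut-and-pasted "d" handler: any
// request carrying a "d" parameter deletes the referenced order.
public class HiddenDeleteHandler {
    static Map<Integer, String> orders = new HashMap<>();

    // Simulates the fragment pasted into every page of the application.
    static void handlePage(Map<String, String> requestParams) {
        String d = requestParams.get("d");
        if (d != null) {
            // No authorization check: any caller can delete any order.
            orders.remove(Integer.valueOf(d));
        }
    }

    public static void main(String[] args) {
        orders.put(42, "widgets");
        Map<String, String> params = new HashMap<>();
        params.put("d", "42");
        handlePage(params);
        System.out.println(orders.containsKey(42)); // prints: false
    }
}
```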

Knowing where you are exposed lets you better understand what your application looks like to potential adversaries and is essential if you want to do thorough security testing. The question is – what can you do to enumerate potential attack points?

We are currently finishing up the first phase of some research funded by the US Department of Homeland Security (DHS) under their Small Business Innovation Research (SBIR) program to develop Hybrid Analysis Mapping (HAM) capabilities for software assurance. You can read more about the scope of this research and some of the coverage it has received so far. The objective of the research is to map the results of static application scanners (like Fortify SCA) to the results of dynamic application scanners (like IBM Rational AppScan). We’ve been pretty successful at that, and the capability will be included in future versions of ThreadFix. In addition, an interesting – and unanticipated – result of this work is that we created a utility that can do a lightweight scan of an application’s source code and dump out an enumeration of the application’s attack surface.

The endpoint enumerator looks at the application's code base and tries to determine the language and framework in use. Once the framework has been determined, the tool develops an analysis model with the following information:

Relative URL in the application that should respond to requests

HTTP methods for each URL

Parameters each URL might access during page execution

It then dumps out a listing to STDOUT where you can review it, parse it, or do whatever you like. This can provide a great start for manual penetration testing activities by highlighting potentially interesting information.
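The analysis model described above boils down to a simple structure per endpoint. Here is a minimal sketch (not the actual ThreadFix code; the class and method names are illustrative) that renders one line in the same shape as the tool's STDOUT listing:

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the endpoint analysis model: relative URL, the
// HTTP methods it answers, and the parameters it may access.
public class Endpoint {
    final String url;
    final List<String> httpMethods;
    final List<String> parameters;

    Endpoint(String url, List<String> httpMethods, List<String> parameters) {
        this.url = url;
        this.httpMethods = httpMethods;
        this.parameters = parameters;
    }

    // One line of output, e.g. "[POST, GET],/login.jsp,[username, debug, password]"
    String dump() {
        return httpMethods.toString() + "," + url + "," + parameters.toString();
    }

    public static void main(String[] args) {
        Endpoint e = new Endpoint("/login.jsp",
                Arrays.asList("POST", "GET"),
                Arrays.asList("username", "debug", "password"));
        System.out.println(e.dump());
        // prints: [POST, GET],/login.jsp,[username, debug, password]
    }
}
```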

So how does this work for Java applications using JSP? Calculating the relative URLs for JSP is pretty simple: we just need to find the common code root, and the relative URLs are extrapolated from there. (Spring and other frameworks make it far less simple, and we will post more about that later.) To determine the HTTP methods, we currently just assume that JSPs will respond to GET and POST. (Making this more comprehensive will require a bit more analysis, which we currently do for the Spring framework.) Finally, determining the parameters that a JSP page will use requires some simple static analysis where we track down calls to request.getParameter(key). If “key” is a String literal, then that is the parameter name. If “key” is a String variable, then we do some data flow parsing to see if we can trace the variable back to its value. It isn’t perfect, but it works reasonably well.
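The easy case described above, where the key is a String literal, can be handled with a straightforward pattern match over the JSP source. The sketch below is illustrative only (it is not the ThreadFix implementation) and deliberately ignores the variable-key case, which needs the data flow tracing just mentioned:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative scan for request.getParameter("literal") calls in JSP
// source. Variable keys would require data flow analysis and are not
// handled here.
public class JspParamScanner {
    private static final Pattern GET_PARAM =
            Pattern.compile("request\\.getParameter\\(\\s*\"([^\"]+)\"\\s*\\)");

    static Set<String> scan(String jspSource) {
        Set<String> params = new LinkedHashSet<>();
        Matcher m = GET_PARAM.matcher(jspSource);
        while (m.find()) {
            params.add(m.group(1)); // the String-literal parameter name
        }
        return params;
    }

    public static void main(String[] args) {
        String jsp = "<% String q = request.getParameter(\"q\");\n"
                   + "   String dbg = request.getParameter(\"debug\"); %>";
        System.out.println(scan(jsp)); // prints: [q, debug]
    }
}
```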

Let’s take a look at the output of the tool when it is run against the Bodgeit Store. Bodgeit is an intentionally vulnerable application, written in Java using JSPs, that is intended as a training tool for penetration testers. Running the command:

java -jar endpoints.jar ~/git/bodgeit

results in the output:

[POST, GET],/about.jsp,[]
[POST, GET],/admin.jsp,[]
[POST, GET],/advanced.jsp,[q, debug]
[POST, GET],/basket.jsp,[update, productid, quantity, debug]
[POST, GET],/contact.jsp,[anticsrf, debug, comments]
[POST, GET],/footer.jsp,[]
[POST, GET],/header.jsp,[debug]
[POST, GET],/home.jsp,[debug]
[POST, GET],/init.jsp,[]
[POST, GET],/login.jsp,[username, debug, password]
[POST, GET],/logout.jsp,[]
[POST, GET],/password.jsp,[password1, password2]
[POST, GET],/product.jsp,[typeid, prodid, debug]
[POST, GET],/register.jsp,[password1, username, password2, debug]
[POST, GET],/score.jsp,[debug]
[POST, GET],/search.jsp,[q, debug]

Most of this is pretty basic and shouldn’t be terribly surprising. However, the “debug” parameter that appears in a number of pages is a cause for some further inspection. Also, that “admin.jsp” page might not be something that everyone should have access to. These are exactly the kinds of things that get placed into applications and then find their way into production to cause trouble down the road. Keeping an ongoing eye on your application’s attack surface can help you catch things like this early – before they make their way into production and before they result in problems.

So what’s missing? We’ve hit the major points, but there are some limitations:

The support is very language and framework dependent. Right now, we have support for Java/JSP and Java/Spring, and we’re working on others. Also, the code is all open source, and we’re structuring it so that it is easy for folks to get involved and add support for their favorite languages and frameworks. If you’re interested in helping, just drop me a line on Twitter or via email.

The attack surface model is limited. In the future, we’re looking at adding support for cookies and other HTTP headers as well as other input points such as environment variables, command-line arguments, inbound (but non-HTTP) network traffic, and so on. For right now though, URLs and their GET and POST parameters provide a solid starting point.

This is an area where other folks have done work as well. For more information, check out:

So – take some time and think about your application’s attack surface. What might be lurking about that you haven’t thought about in a while? Run the endpoints.jar utility through its paces and feel free to post any bugs or feature requests to our issue tracker. Bad guys are undoubtedly thinking about your application’s attack surface. Shouldn’t you be thinking about it as well?