Good news - Google is shipping OAuth 2.0 tools via Google Play. Wish this had happened years ago, when the Android platform shipped, but it's good it's happening now.

OAuth 2.0 is not perfect from a security perspective, but as Tim Bray says, this is Pretty Good Security meets Pretty Good Usability. Makes sense to me - we have to stop using passwords, and we have to do so in a way that won't have developers rioting in the streets and burning cars. But why be happy about shipping something that has a 70-page threat model in its wake? This dev comment from the blog announcement says it all: "After implementing my own authentication for my app, I really would have appreciated something like this!"

So yes, it's progress. Why did it take so long? Who knows. But here we are.

It's helpful to track evolution through a Crawl - Walk - Run maturity curve.

From where I sit, Crawl has been achieved with this release - a standard way to register your app, get a token, and use it - plus many future apps that do not rely on passwords. But what about walking and running?

Walking should be about not just using a standard protocol as an improvement over ad hoc access control, but also using the protocol safely. It's an access control protocol, after all; its failure modes are ugly and have consequences for users and platforms. A chainsaw is great for cutting timber, and it's also an excellent way to cut off your own limb(s). Use of a safer protocol is desirable, but guidance on safe use is required to get full value. This release is not quite there yet. OAuth tokens, like anything else, have vulnerabilities large and small, but in removing crypto and signature functions the implementation increases its reliance on TLS for security. Fair enough for many apps, but there is no way to discern this from the documentation, SDK, and APIs. The OAuth 2.0 protocol, by itself without TLS, is not good enough.

"The sign above the players' entrance to the field at Notre Dame reads 'Play Like a Champion Today.' I sometimes joke that the sign at Nebraska reads 'Remember Your Helmet.' Charlie and I are 'Remember Your Helmet' kind of guys. We like to keep it simple."- Warren Buffett

OAuth 2.0 should be shipped with a 'Remember TLS' reminder stapled to each and every release. Otherwise, numerous threats are in play. OAuth 2.0 with TLS meets the Pretty Good Security bar for many apps; without TLS, it's playing without a helmet.
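To make the 'Remember TLS' rule concrete, here is a minimal sketch - the TokenGuard class and its method names are my own invention, not part of any SDK - that refuses to attach a bearer token to anything but an https endpoint:

```java
import java.net.URI;

// Hypothetical guard: never let an OAuth bearer token travel over plain HTTP.
// The scheme check is the "Remember TLS" helmet; all names here are illustrative.
class TokenGuard {

    // Returns true only for https URLs -- the token should never be sent otherwise.
    static boolean isTlsEndpoint(String url) {
        try {
            return "https".equalsIgnoreCase(URI.create(url).getScheme());
        } catch (IllegalArgumentException e) {
            return false; // malformed URL: treat as unsafe
        }
    }

    // Builds the Authorization header value, refusing non-TLS endpoints outright.
    static String bearerHeaderFor(String url, String token) {
        if (!isTlsEndpoint(url)) {
            throw new IllegalStateException("Refusing to send bearer token without TLS: " + url);
        }
        return "Bearer " + token;
    }
}
```

A check like this fails loudly at development time instead of silently leaking tokens in production.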

Further, both client and server side developers have some work to do to avoid shooting themselves in the foot with the protocol. For example, the client developer may not realize the sensitive nature of the token and how best to protect its storage. The server side developer deals with a myriad of concerns like session management, linking the token to access control, replay, and others that in most if not all cases mirror the issues in webapp security. Here we face two challenges: developers are not trained up on security protocols and so miss a lot of the subtleties and nuance in deploying them, and infosec blithely assuming a silver bullet - this all singing, all dancing protocol solves my problem - is all too common. I am not saying Google is fomenting either of these, but I see them in the trenches every single day. I would prefer to see Google include a short and sweet Security Checklist to make sure people remember their helmets. They do not have to reinvent the whole Threat Model, but guidelines for safe use would get this a long way towards Walking in my view.
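As a minimal sketch of the server side bookkeeping - the class and fields are illustrative assumptions, not a prescribed API - the server has to bind each token to a subject and an expiry and check both on every use:

```java
// Illustrative server-side token record: the server must tie the token to an
// identity and an expiry and reject stale tokens -- the protocol itself does
// not do this bookkeeping for you. Names here are assumptions for the sketch.
class AccessToken {
    final String subject;        // who the token was issued to
    final long expiresAtMillis;  // hard expiry, checked on every request

    AccessToken(String subject, long expiresAtMillis) {
        this.subject = subject;
        this.expiresAtMillis = expiresAtMillis;
    }

    // A token is only as good as the checks behind it: expired tokens grant nothing.
    boolean isValidAt(long nowMillis) {
        return nowMillis < expiresAtMillis;
    }
}
```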

The worst security posture is not being insecure - all systems have vulnerabilities. The worst security posture is to assume you are secure when in fact you are not. Here the current implementation is lacking, and tailored guidance and/or checklists from the client and server side developers' perspectives - what the protocol is doing and what it is not doing - would be very useful. I know this just shipped, but this gap should be closed soon. As a group, developers across the globe have had zero training in secure coding. When I go in to train a dev team on secure coding, even those with decades of programming experience, I am likely teaching them their first day of secure coding. You cannot expect them, even good developers, to know all the right things to do and to pick up on the subtleties at work in implementing security protocols. I am all for finding the balance of Pretty Good Security and Pretty Good Usability - that's a worthy goal - but the dots need to be connected. There's a world of difference between https://sites.google.com and http://myappisowned.com. Google's Android team should help close these gaps out and clearly state what can and should be done to foster safe use of OAuth 2.0.

Implementing security protocols is a new proposition for most developers. They were never trained, but back in the day it never mattered: the container or server did it for them and the threat was not high. Neither of these is the case any more. This stuff matters. We could easily do a "how to break Android" class and get the security people all fired up to attend, but what would that really solve? We need to start building better stuff, and we need developers in the game to make progress. This is Why We Train. OAuth 2.0 and TLS can improve the security of most mobile apps; implemented wrong, they can also make it worse. There are design and implementation things to consider in going from Crawling to Walking, but developers need to know what they are to make it happen - we tackle these on Day One of the Mobile AppSec Triathlon.

**Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.**

In the last two posts, we explored what goes into building an Android Security Toolkit - tools that developers can apply to minimize the number of vulnerabilities in their Android app and, because no app is perfect, to lessen the impact of those that remain.

So far we focused on access control, which helps to establish the "rules of the game": authentication and authorization control who is allowed to use the app and what they are allowed to do. If you read the Android Security documentation, access control concepts dominate, but this is only part of the security story. Access control enforces the rules for customers, employees, and users who are effectively trying to get work done; however, access control does little to mitigate threats from people deliberately trying to break the system.

It pays dividends to learn and apply access control services, because a vulnerability here will cascade across the system and be available to attackers as well, but it pays to go further than just access control in your mobile security design and development. I usually describe this situation as follows - I would bet a lot of money that I can beat both Garry Kasparov and Michael Jordan in a game. The way I would do this, of course, is to play Kasparov at basketball and Jordan at chess.

This is what attackers do - they change the rules of the game or change the game entirely. So while access control gives us the According to Hoyle security rules that the app would like to play under, the attacker makes no such assumption; the asserted rules are the beginning of the game, not the end.

All security is built on assumptions; when these fail, so does the access control model. For example, as we discussed in the last post, Android access control policies are enforced in the kernel, so the assumption is that the kernel hasn't been directly or indirectly subverted.

So if an app cannot be secured by access control alone, what's an Android developer to do? The requirements for access control are fairly straightforward on first pass - who is allowed to use the app and what are they allowed to do? Sure, it gets more complex from there, but the start and even endgame are fairly clear.

What's the starting point (much less the endgame) in defensive coding? Threat models like STRIDE make an excellent starting point for finding requirements: identify the key threats in the system and what countermeasures can be used to deal with them. STRIDE recommends, and I concur, that data flow analysis is a practical way to begin modeling your application to discover where threats and vulnerabilities lie.
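The classic STRIDE threat-to-countermeasure pairings can be captured as a simple lookup - a sketch; the countermeasure wording here is one common phrasing, and the data flow analysis then tells you where each pairing applies:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// STRIDE pairs each threat category with a canonical countermeasure family.
// A data-flow diagram of the app tells you where each pairing is needed.
class Stride {
    static Map<String, String> countermeasures() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("Spoofing", "Authentication");
        m.put("Tampering", "Integrity checks");
        m.put("Repudiation", "Audit logging");
        m.put("Information Disclosure", "Encryption");
        m.put("Denial of Service", "Availability measures");
        m.put("Elevation of Privilege", "Authorization");
        return m;
    }
}
```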

The mindset of the Defensive Coder is fundamentally different from the access control mindset. The Defensive Coder assumes compromise attempts, and possible success, at each layer in the stack. This includes standard techniques such as input validation, output encoding, audit logging, integrity checking, and hardening Service interfaces, applied to local data storage, query and update interfaces, and interaction with Intents and Broadcasts - not just publishing these resources for use, but factoring in how they may be misused. How is the app resilient to attempts to crash it, an attacker impersonating a legitimate user, a malicious app with backdoors running on the device, or attempts to steal or update data?
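As one small example of that mindset - the identifier rule here is illustrative, and real validation rules depend on what the input is for - whitelist input validation accepts only what is known good rather than trying to blacklist what is bad:

```java
import java.util.regex.Pattern;

// Defensive-coding sketch: validate untrusted input against a whitelist
// pattern instead of enumerating bad characters. The rule is illustrative.
class InputGuard {
    // Accept only short alphanumeric/underscore identifiers; reject all else.
    private static final Pattern SAFE_ID = Pattern.compile("^[A-Za-z0-9_]{1,32}$");

    static boolean isSafeIdentifier(String input) {
        return input != null && SAFE_ID.matcher(input).matches();
    }
}
```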

The Threat Model cannot answer all these questions completely, but it does lead the development effort in the right direction: finding ways to build margins of safety into the app.


In the last post, we started building out an Android Security Toolkit - things every Android developer should know about security. Access control is fundamental to application security. In my perfect world, when a developer learns a new language, they first learn Hello World; the next thing a developer learns should be how to implement who are you and what can you do in the language - authentication and authorization. The AndroidManifest.xml file describes the access control policy that forms the application boundary, but where is this boundary enforced and what services does it provide?

The access control chain consists of:

1. Defining access control policy

2. Enforcing access control policy

3. Managing access control policy

The AndroidManifest.xml defines the permissions that the application requires, such as:
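A representative snippet - which permissions appear depends entirely on the app - looks like this:

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
```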

The user is able to confirm or deny installation (but not change permissions) based on the AndroidManifest.xml file; this defines step 1 above. The policy is distributed with the application, so policy management is under the control of the distribution point, such as the AppMarket. This leaves step 2, enforcing access control policy.

Android apps run in the Dalvik VM; however, IPC is not managed in the VM. Instead, it's managed further down the stack in the Binder IPC driver, which resides in the Linux kernel. I'm not sure, but I suspect the reason is that there are a number of permissions that require lower level access.

The Binder maps the permission and either the caller's identity or a Binder reference to verify access privileges. From a design standpoint, permission boundaries can be defined and enforced at different layers in the app, including Content Providers, Services, Activities, and Broadcast Receivers.
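As a sketch of what a boundary at the Service layer looks like declaratively - the permission name and service class here are hypothetical - the manifest can define a custom permission and require it of any caller:

```xml
<!-- Declare a custom permission (names are illustrative) -->
<permission android:name="com.example.app.permission.USE_SYNC"
            android:protectionLevel="signature" />

<!-- Only callers holding that permission may start or bind to this Service -->
<service android:name=".SyncService"
         android:permission="com.example.app.permission.USE_SYNC" />
```

The enforcement itself still happens down in the platform when the call crosses the process boundary; the manifest just states the rule.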

Access control is the beginning of thinking about security, but it's not the endgame. The next step in building an Android security toolkit is defensive coding - how to deal with cases like code injection that are designed to subvert the access control scheme.


Ken van Wyk asks mobile developers - what's in your bag of tricks? From a security perspective, Ken lists a number of critical things developers need to protect their app, their data, and their users; these include protecting secrets in transit and at rest, server connections, authentication, authorization, input validation, and output encoding.

These are all fundamental to building a secure mobile app. Over the next few posts, I will address the core security issues from an Android standpoint and what security tools should be in every Android developer's toolkit.

First, with regard to security for Android I think there are three key areas:

Identity and Access Control - provisioning and policy for how the system is supposed to work for authorized users

Defensive Coding - techniques for dealing with malicious users

Enablement - getting the app wired up to work in a real world deployment

So onwards to policy for Identity and Access Control; a good place to start is with AndroidManifest.xml.

There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton

AndroidManifest.xml provides the authoritative source for the package name and unique identifier for the application; this effectively bootstraps the app's activities, intents, intent filters, services, broadcast receivers, and content providers. These show the external interfaces available for the application.
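A minimal manifest skeleton - the package name and activity are illustrative - shows how the package identifier and a component declaration bootstrap the app's external interface:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.app">
    <application>
        <!-- Each declared component is a potential external interface -->
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```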

The next step is assigning permissions. Android takes a bold stance by publishing the permissions that the app requests before it's installed. This has the positive effect of letting the user know what they are permitting, but at the same time the user cannot change or limit the app. If they want to play Angry Birds (and who doesn't?), they choose to install Angry Birds with the permissions set by the developer, or they choose to live an Angry Birds-free existence. So the overall effect is to inform the user but not let the user choose granular permissions (this last has the positive effect of not turning the average user into a system administrator for a tiny Linux box).

AndroidManifest.xml contains the requests for access to system resources such as the Internet, WiFi, SMS, phone, storage, and others:

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

The first step for app developers here is to request only the least set of privileges necessary for your app to get the job done. Saltzer and Schroeder first defined the principle of Least Privilege:

Every program and every user of the system should operate using the least set of privileges necessary to complete the job. Primarily, this principle limits the damage that can result from an accident or error. It also reduces the number of potential interactions among privileged programs to the minimum for correct operation, so that unintentional, unwanted, or improper uses of privilege are less likely to occur. Thus, if a question arises related to misuse of a privilege, the number of programs that must be audited is minimized. Put another way, if a mechanism can provide "firewalls," the principle of least privilege provides a rationale for where to install the firewalls. The military security rule of "need-to-know" is an example of this principle.

Notice the two facets of this principle. The first is the conservative assumption to limit the damage of accident and error. This margin of safety approach should be near and dear to every engineer's heart. The second part of the principle is simplicity - if it's not needed, turn it off, or in this case do not publish or request access to it.

From a security point of view, the AndroidManifest file helps to reduce your application's attack surface. If you don't need SMS or Internet or WiFi, don't ask for it.

Android has a pretty interesting approach to access control, ranging from user involvement to declarative permissions to capabilities, and we will dig deeper into this in the next post.


Over on the Mobile App Sec blog, Ken van Wyk asks what is in your Mobile App Security toolkit? I had planned to write a post responding to that, but saw the tweet below from two of my favorite people in the industry and thought I would expand on this:

The first part mostly makes sense. Training developers is not an instantaneous fix, to be sure. In my training for developers, we look at concrete ways for developers and security people to improve the overall security of their apps. The ways to do this vary; some are short term design/dev fixes (improving input validation, for example) and some are longer term (swapping out access control schemes). There is some latency from the time you train developers until the time you realize all the benefits in your production builds. However, unless you roll code at a glacial pace, I do not believe it takes 18 months for training to pay off. It should happen way faster.

The second part of the tweet boils down to the old adage - "what if you train them and they leave?" The counter argument to this is simple and serious - "what if you don't train them and they stay?" Believe me, I have seen plenty of the latter, and lack of clue does not age well.

So while I agree with the spirit (but not timetable) of the first part of the tweet, I definitely disagree with the second part of the tweet. We need more training, better educated developers and security people, not less.

Specifically, we need hands on security engineering skills - the basic principles of security are not rocket science; the challenge is all in how you apply them in the real world.

Despite increasing budgets, the security industry has not solved many problems in the last decade, but one thing the industry absolutely excels at is - conferences!

900 - NINE HUNDRED - infosec conferences! This is not a record to be proud of. Granted, there are a handful of very good conferences, but the security industry's conference problem is that the industry as a whole is geared to talking, not doing. We've all seen the conference hamster wheel - oh, big problems; oh, solutions that seem hard; when is beer? You get on the plane home with the same problems (or more) than you left with. Repeat.

Many years ago, I was working on a project at a large company with thousands of developers, and they wanted to tackle software security. The company put its top architect on the project - a software guy, not a security guy. We met early in the project. He was very talented, one of the better architects I have worked with, and, as is the case with all such people, very curious; he really wanted to learn. He asked me - how do I get up to speed on security matters? I told him to read Michael Howard's books, Gary McGraw's books, and Ross Anderson's books. I came back a month or two later; to his credit he had plowed through them all - they were piled up behind him. He looked at me seriously and asked - "I see where the problems are, but what do I do about them?"

The what do I do question has haunted me ever since. We got down and worked on a plan for this company, but the industry as a whole glamorizes the oh so awful security problems at conferences and leaps over the what do I do part.

This is where training comes in. I am not naive enough to believe training is all we need to do, but I definitely believe that education for security people, architects, and developers has a major role to play in improving our collective situation. We need better tools and technologies; advances in vulnerability assessment tools and identity and access management have helped a lot over the decade. We need better processes for applying them in real world systems - your SDL matters. But so do your people! Without basic training you won't know what tools to use and where, how to apply them, and what traps to avoid. This is why we train.

The first time I went to Black Hat, I was intrigued and impressed by the depth of FX's and other presentations, but I was also horrified. There was simply no one in the software world (at that time) talking about this stuff; it was clear the problems would just keep getting worse, and they did. But enumerating problems a decade plus later is not good enough; we need time, materials, resources, and people focused on what to do about them - how to fix. Out of 900 conferences, there is no "how to fix" conference that is the equivalent of Black Hat. If you plant ice, you're gonna harvest wind.

By the way, waiting to deal with problems is a proven way to fail, and there is nothing more permanent than a temporary solution. Ken and I started on Mobile because now is the chance - the initial mobile deployments for many enterprises - to get it right, with some forethought on security.

The last thing we need is more hand waving, blah blah, and PowerPoint at a conference on "the problem." We need to get busy engineering better stuff, and that is where training comes in. As the USMC says, the more you sweat in training, the less you bleed in battle. You might ask - with so many problems, can we really engineer our way out? Let me ask then - if we had 900 cons a year on how to build better stuff, would we be better off or worse?

Security always lags technology. In the early days of the Web, security was egregious. But this did not matter so much, because the early websites were brochureware. The security industry had time to catch up (though it is still behind) and learned over time how to deal with SQL Injection et al.

In Mobile, it's much worse. The security industry is behind the technology rate of change as always, and the developers are untrained, but the initial use cases for Mobile are not low risk brochureware; they are high risk mobile transactions, banking, and customer facing functionality. Security's window to act on building better Mobile App Sec for high risk use cases is not 3 years away - it's now.


Jim Bird and Jim Manico are working on a new addition to the OWASP Cheat Sheets family; they have a draft cheat sheet on Attack Surface Analysis in process. The Attack Surface helps you see where your system can be attacked. From the Cheat Sheet:

"Attack Surface Analysis helps you to:

identify what you need to review/test for security vulnerabilities

identify high risk areas of code that require defense-in-depth protection

identify when you’ve changed the attack surface and need to do some kind of threat assessment"

I would add a 4th - it helps you see where you can defend. I use the Attack Surface Model in combination with a Threat Model to identify and locate countermeasures. The Threat Model helps to identify and the Attack Surface model helps to locate. This point is important because while you can't do much to control the attacker, you can control your defensive posture.

Eoin Keary wondered if there were some special considerations for attack surface analysis on Mobile, and I think there are plenty. Mobile attack surface is one of the main areas that changes the nature of the threat and the field of choice for defenders.

I have no quibble with this high level model - in particular, there are logical extensions for security people to use at the OS and app layers - but at the same time the Hardware and Infrastructure layers need some elaboration to see why mobile is different. The hardware is in motion, the infrastructure layers include many byzantine protocols and formats such as GPS, NFC, and SMS, and hardware implementations vary greatly.

Standard use of the STRIDE Threat Model + Attack Surface shows how each threat is dealt with, so for apps that are using SSL you can see where it is mitigating threats across the attack surface:

| Threat | Countermeasure | Data | Method | Channel |
| --- | --- | --- | --- | --- |
| Spoofing | Authentication | | | |
| Tampering | Integrity Hash | | | |
| Repudiation | Audit Logging | | | |
| Information Disclosure | Encryption | | | SSL |
| Denial of Service | Availability | | | |
| Elevation of Privilege | Authorization, Hardened Interfaces | | | |

Note that the above should not be viewed as Checkbox Olympics, where six STRIDE threats times three attack surface parts always yield 18 countermeasures; this is basically never the case. But what it does do is show where and how countermeasures play in the stack and give you ideas on the most cost effective places to defend.

So our foundational Threat Model + Attack Surface view needs some extension to deal with Mobile, which could include new protocols like GPS, SMS, MMS, and NFC, and which will vary by hardware type. There are also new application distribution models through App Stores/Markets and updates. Finally, there are different assumptions to be made around physical access and the like through the Lost/Stolen scenarios. So a basic extension to the Threat Model + Attack Surface view could yield something like the below:

| Threat | Countermeasure | App Distro | GPS | SMS | MMS | NFC | Lost/Stolen |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Spoofing | AuthN | | | | | | |
| Tampering | Integrity | | | | | | |
| Repudiation | Audit Logging | | | | | | |
| Information Disclosure | Encryption | | | | | | |
| DoS | Availability | | | | | | |
| Elevation of Privilege | AuthZ, Hardened Interfaces | | | | | | |

Again, as above, it's not a case of Checkbox Olympics, and there are limitations in what can be done in any protocol, but using this combination helps to show where we can reasonably expect to place countermeasures. In addition, I think a big takeaway for most people is that when you start with the view that Tyler Shields' post showed, you assume that the variability is mostly in the top layer, but in Mobile you need to assume there is as much or more variability lower in the stack too.
