Are your apps giving one device a favourable security position over the other?

14 August 2015

I run a workshop titled “Hack Yourself First”, which I deliver privately for organisations and as part of various conferences. I wrote about it recently in relation to my upcoming US workshops next month and the ones I’ll be doing in London in Jan, but in short, it’s a couple of days of very hands-on exercises where we look at a heap of different aspects of security in a way that’s designed to really resonate with developers. I did a massively compressed version of this at the DDD Melbourne event on the weekend and there was an outcome which I thought was worth sharing. It’s both an interesting illustration of a risk and an exemplary demonstration of how the organisation involved dealt with it.

One of the exercises I do (and the one we focussed on at DDD) involves looking at how mobile apps communicate across the web. We look at how to intercept traffic (both HTTP and HTTPS) using Fiddler (we use Charles for the Mac folks) and identify common security anti-patterns in the way apps talk to APIs. Participants do this with their own devices and use their usual apps installed on the device; they just use them in the way they were intended to be used and simply watch the traffic. There’s often some pretty interesting stuff found and this session didn’t disappoint.

One of the guys opened up the realestate.com.au app which is one of our leading property rental and sales services down here in Australia. This is what it looks like:

You may notice this is on an Android – more on the significance of that shortly. Anyway, he puts in a username and password, hits the “Sign in” button and sees this in Fiddler:

He immediately spotted that the first request to /login/session was sent over HTTP or in other words, an unencrypted connection. Here’s the raw request itself:

So there you have it – credentials sent over an unencrypted connection! Actually, that’s what happens in the first request in that Fiddler trace above. That request results in a 301 “Moved Permanently” redirect which then causes the client to post the creds to the secure scheme in the third request (the second one is an HTTP CONNECT tunnel). The account is invalid, hence the 400 response, but the point is that the credentials went out across the wire insecurely in the first place.
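You can demonstrate why the redirect doesn’t help with a few lines of Python: by the time the server answers with a 301, the credentials have already crossed the wire in cleartext. This is a minimal local sketch – the host, port and credentials are obviously illustrative, not REA’s real API – but the request/redirect sequence mirrors what the Fiddler trace shows.

```python
# A local stand-in for an HTTP (not HTTPS) login endpoint that 301s to
# the secure scheme - exactly the pattern the Android app hit.
import http.server, threading, urllib.parse, http.client

captured = {}

class PlainHttpLogin(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        # Anyone on the network path sees exactly these bytes.
        captured["body"] = self.rfile.read(length).decode()
        self.send_response(301)  # redirect to HTTPS... but too late
        self.send_header("Location", "https://localhost/login/session")
        self.end_headers()

    def log_message(self, *args):  # silence the default request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), PlainHttpLogin)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "app": posts the creds over plain HTTP, just like the first request
# in the Fiddler trace.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
body = urllib.parse.urlencode({"username": "troy", "password": "hunt2"})
conn.request("POST", "/login/session", body,
             {"Content-Type": "application/x-www-form-urlencoded"})
resp = conn.getresponse()

print(resp.status)        # 301
print(captured["body"])   # username=troy&password=hunt2 - already leaked
server.shutdown()
```

The redirect comes back and the client dutifully retries securely, but the secret is already out there for any eavesdropper on the path.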

But here’s what I found most interesting and worthy of writing about – I then tried doing the same thing from my iPhone where the app looks like this:

Obviously it’s a different app running on iOS rather than on ‘droid but clearly it has the same intended purpose. Here’s what happens when you attempt to login from iOS:

The first request is for Omniture analytics (now part of Adobe) which is not unusual. It then creates the tunnel after which the interesting one is the request to /login/session which looks like this:

Just in case it’s not already obvious what the significance is in this request versus the Android one, let’s do a diff on the two:

By default, the iOS app goes out over HTTPS from the very beginning compared to the Android app that begins communicating insecurely and then redirects to the secure scheme. It’s exactly the same resource being requested, it’s just not secure by default with the Android app.

Let me quickly clarify why I’m able to see traffic sent over HTTPS in that iOS request: my iPhone was configured to use Fiddler as a proxy, which means all HTTP and HTTPS requests pass through it. Now under normal circumstances, a proxy like this (which is effectively a man in the middle) can’t see the contents of HTTPS traffic and indeed that’s precisely why we have SSL in the first place. However, if you install Fiddler’s root certificate on the device as explained in that link, Fiddler can then go backwards and forwards with the server using the legitimate certificate and also go backwards and forwards with the device using its own self-signed certificate, which the device now trusts because of the installed root certificate. The request above from iOS is secure, but that doesn’t mean I can’t intercept, read and manipulate it when I also control the device. That in itself is a poignant observation that often comes as news to developers.

Here’s why I find the Android app situation significant: it’s a dead easy mistake to make. There is literally one character missing in someone’s config file for the Android app and that makes the difference between having a strongly encrypted connection from the device to the server versus having zero encryption at all on the first request. The fact that the login endpoint simply redirects the insecure request then hides the misconfiguration, but it’s an understandable pattern because that’s what we normally do when we want to enforce HTTPS on a website – we 301 to the secure scheme if it’s accessed over HTTP.
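To make the “one character” point concrete, here’s what that slip looks like as a pair of hypothetical endpoint constants (the hostname is made up – the real config obviously lives inside REA’s Android app):

```python
# Hypothetical config constants - the entire bug is one missing "s".
LOGIN_URL_BROKEN = "http://api.example.com/login/session"   # cleartext first hop
LOGIN_URL_FIXED  = "https://api.example.com/login/session"  # encrypted from byte one

# The two strings differ by exactly one character.
diff = len(LOGIN_URL_FIXED) - len(LOGIN_URL_BROKEN)
print(diff)  # 1
```

That’s the whole gap between a strongly encrypted first request and an unencrypted one, which is why it’s so easy for this to sail through a code review.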

There are multiple things that can be done to greatly reduce the risk of something like this making it through into the wild:

Don’t support the insecure scheme on POST requests to sensitive resources. This one could easily have been averted on the server side simply by not redirecting the request. Identifying that a sensitive resource like a login should only ever be called securely, and failing immediately if it’s not, would have caused this problem to bubble up much sooner. Fail early – the developer will quickly work out why their request isn’t working during development.
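The “fail early” idea can be sketched as a simple server-side check. This assumes a framework that exposes the request’s scheme and path (the function and constant names here are my own, purely illustrative), and the key design choice is returning a hard failure rather than a 301 that quietly papers over the bug:

```python
# A sketch of failing early instead of redirecting: insecure hits on
# sensitive resources get a hard 403, not a helpful 301.
SENSITIVE_PATHS = {"/login/session"}  # resources that must never see HTTP

def check_transport(scheme: str, method: str, path: str) -> int:
    """Return an HTTP status for the transport check alone: 403 for
    insecure requests to sensitive resources, 200 to proceed normally."""
    if path in SENSITIVE_PATHS and scheme != "https":
        return 403  # hard failure surfaces immediately during development
    return 200      # transport is fine; continue to the real handler

print(check_transport("http", "POST", "/login/session"))   # 403
print(check_transport("https", "POST", "/login/session"))  # 200
```

With this in place, the misconfigured Android app would have broken loudly on the very first test login rather than silently redirecting in production.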

Lock down APIs to precisely the way you expect them to be called. References to APIs are coded into consuming apps so it’s not like you need to cater for the user typing in http://… or the browser defaulting to the HTTP scheme. In fact I’d take it a step further and even put those APIs on their own domain (or subdomain). You could then turn off the HTTP scheme for the entire thing and also get some other benefits in terms of how you might tune and scale the API independently of the companion website.

Always proxy your apps and look at the traffic. It took one guy a few minutes to identify this problem just by knowing the pattern to look for. It’s much easier for risks like this to be hidden behind rich client apps because there’s not the same visibility of requests as in the browser where you can easily see them in the URL or the dev tools.
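The pattern the workshop participant spotted by eye can even be automated. Here’s a toy scanner over a captured proxy session – the capture format is made up (real tools like Fiddler and Charles export far richer data), and the “sensitive” keywords are just a heuristic – but it shows how mechanical the check is once you know what to look for:

```python
# A toy flagger for proxy captures: sensitive-looking requests that went
# out over plain HTTP. "captured" is a list of (method, url) tuples.
import urllib.parse

SENSITIVE_HINTS = ("login", "password", "token", "session")

def flag_insecure(captured):
    findings = []
    for method, url in captured:
        parts = urllib.parse.urlsplit(url)
        # Plain HTTP plus a sensitive-looking path is the anti-pattern.
        if parts.scheme == "http" and any(
                hint in parts.path.lower() for hint in SENSITIVE_HINTS):
            findings.append((method, url))
    return findings

trace = [
    ("GET",  "https://m.example.com/listings"),
    ("POST", "http://services.example.com/login/session"),  # the anti-pattern
]
print(flag_insecure(trace))
```

A few minutes with a proxy and a check like this would have caught the Android app’s first insecure request long before it shipped.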

But I also want to give credit to the REA Group, which runs realestate.com.au. I emailed the details of this to a contact I have in their security team on Sunday morning and had a response seven minutes later. On a Sunday! The very next day they had the issue fixed and an updated app in the Google Play store. I also told them I was going to write this up once the issue was resolved and they were absolutely fine with it. An issue like this is easy to let slip into production; how it’s handled is the real test of the organisation, IMHO.

So check your apps, folks – most of the time when I run these workshops, people find something of a similar nature just because it’s so easy to make the mistake. Then again, it’s just as easy for you – or anyone else – to find those mistakes!