Software development and daily life in Seattle

breaking the private network oasis addiction with OAuth

Most reasonably sized (and larger) operations organizations have a pretty standard networking setup – or at least some close variation on the theme. Public IP addresses are scarce (ARIN makes sure of that), so most organizations front their services through software or hardware load balancers. From there, traffic goes back to a highly responsive “web tier” – the spewing of content and the CDN source systems. Those connect back to application tiers, and behind those are data persistence tiers (typically, your classic RDBMS). The only thing on the internet is often those load balancers. The networks are often segmented between app and data as well, often with firewalls, to “reduce potential intrusion”. It’s a good plan and pattern – it generally works. And while you own it all, you own your own network oasis of happiness.

Economic pressures being what they are, it is getting more cost-effective to own only the infrastructure you absolutely need and rent the rest. Shoot – if you’re small enough, owning any infrastructure at all may not make sense.

The problem comes when you want to start moving between those network oases of happiness. Like, oh say, to a cloud provider. The end result is that our services have all become addicted to secure, high-bandwidth, reliable access between tiers – that happy oasis. And that addiction isn’t healthy – at least not in a world of elastic scaling architectures. We’re getting used to breaking it when we integrate with external services – Facebook, Twitter, etc. The mashups are breaking the mold of how this has been done in the past. And we need to break our addiction!

Why am I asserting this? As I look at a large number of applications, they don’t fit a “cloud provider” very well – at least not when you start to get into the realm of dynamic or elastic scaling. Most providers have something akin to an “internal network” which we can leverage as consumers. Taken to its logical conclusion, we will find ourselves wanting to shift work from one “oasis of network happiness” to another. The solution today? Expand to fill all available resources and then buy some more at the same place.

That ain’t gunna cut it.

For us as consumers of infrastructure as a service, we want one provider to be as good and flexible as another. That means commoditization of the infrastructure (which I believe we’re starting to see now, although the infrastructure providers will fight it tooth and nail). And that means we need to be able to shift from one to the other at a moment’s notice.

Here’s an example:

You have enough resources for 100 units at provider A, and usage is running high enough that it looks like it’ll exceed that. You call up Mr. Infrastructure A, who reluctantly informs you: “Sorry, all sold out – it’ll take 6 weeks to add capacity”. You also happen to have a not-quite-as-good deal with provider B that costs a little more. All sounds good so far – your uber-cool software provisioning system smacks down a couple of VM images and spins them up… except it’s got a problem: connecting the stuff running at provider A to the new stuff you’ve just spun up.

And that’s assuming you’ve already solved all sorts of somewhat evil configuration problems: telling a component what it is and whom it should be talking to to get its work done. That’s a topic for a different post, though.

I don’t have THE answer, but I do have AN answer. Follow the mashup leaders: take the OAuth & REST pattern back into your office, workspace, colleagues, whatever. That’s what we’re doing after all – mashing up some services. It’s just that we normally think mashup means “with someone else’s stuff”.

OAuth and REST work together beautifully, and with some coordination they can answer the security question “Should I allow this external request to access these resources?” The OAuth 2.0 spec has a segment – section 1.4.4 (version 10) – that walks through the flow for this to happen. Routing over SSL does a pretty reasonable job of getting the data encrypted as well. The cost: running an authorization server.
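To make that concrete, here’s a minimal sketch of what that flow looks like between two of your own tiers: the caller trades its client credentials for an access token at the authorization server, then presents that token as a bearer credential on the ordinary REST call. The hostnames are hypothetical placeholders, and this only builds the two requests (Python standard library) – a real client would actually send them and parse the token out of the JSON response:

```python
import base64
import urllib.parse
import urllib.request

# Hypothetical endpoints -- substitute your own authorization server and app tier.
TOKEN_URL = "https://auth.internal.example.com/oauth/token"
API_URL = "https://app-tier.example.com/orders/42"


def build_token_request(client_id: str, client_secret: str) -> urllib.request.Request:
    """Build a client-credentials grant request: POST the grant type to the
    token endpoint, authenticating with the client id/secret over HTTP Basic."""
    body = urllib.parse.urlencode({"grant_type": "client_credentials"}).encode()
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return urllib.request.Request(
        TOKEN_URL,
        data=body,
        headers={
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )


def build_resource_request(access_token: str) -> urllib.request.Request:
    """Attach the issued token as a bearer credential on a plain REST call."""
    return urllib.request.Request(
        API_URL, headers={"Authorization": f"Bearer {access_token}"}
    )
```

In practice you’d send the first request with `urllib.request.urlopen` (over SSL, per the above), pull `access_token` out of the JSON response, and feed it to the second – the resource service then only has to ask the authorization server whether the token is good.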

(You can pull off the same trick with OAuth 1.0: the not-quite-standard-but-de-facto “2-legged OAuth” flow. The problem: not everyone – and not all the libraries – agrees on how to make that work seamlessly.)
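The 2-legged variant skips the authorization server entirely: each request is signed directly with the consumer key and secret and an empty token secret. The signing itself is where the library disagreements tend to live, so here’s a rough stdlib-only sketch of the HMAC-SHA1 signature (simplified from the OAuth 1.0 rules in RFC 5849 §3.4; a real client needs a random nonce and stricter parameter normalization):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def oauth1_sign(method, url, params, consumer_key, consumer_secret):
    """Two-legged OAuth 1.0: sign a request with only consumer credentials.
    Returns the oauth_* parameters to send alongside the request params."""
    oauth_params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": "static-nonce-for-demo",  # use a random nonce in practice
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    enc = lambda s: urllib.parse.quote(s, safe="~")  # percent-encode, OAuth style
    # Normalize: sort all parameters and join as k=v pairs with '&'.
    merged = {**params, **oauth_params}
    normalized = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(merged.items()))
    # Signature base string: METHOD & encoded-url & encoded-params.
    base = "&".join([method.upper(), enc(url), enc(normalized)])
    # Key is consumer secret + '&' + token secret; two-legged means empty token secret.
    key = f"{enc(consumer_secret)}&".encode()
    digest = hmac.new(key, base.encode(), hashlib.sha1).digest()
    return {**oauth_params, "oauth_signature": base64.b64encode(digest).decode()}
```

The seams show even in this sketch – where the oauth_* parameters go (header vs. query string) and how strictly parameters are normalized are exactly the points where implementations drift apart.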

In order to allow ourselves to start treating everything – infrastructure included – as software, with its speed of change, we need to be able to dynamically allocate resources. Once we have services that are all chatting across OAuth-authorized links (encrypted or not), we’ve removed a huge impediment to being able to elastically scale our services.