New submitter tramp writes
"The Register reports, 'Eran Hammer, who helped create the OAuth 1.0 spec, has been editing the evolving 2.0 spec for the last three years. He resigned from his role in June but only went public with his reasons in a blog post on Thursday. "At the end, I reached the conclusion that OAuth 2.0 is a bad protocol," Hammer writes. "WS-* bad. It is bad enough that I no longer want to be associated with it."' At the end of his post, he says, 'I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.'"

Oh please, you arrogant twats.
This web services sector is such a huge over-engineered mess of enterprisey consultant circle-jerking,

Talk about going off on a fucking tangent. Who the hell says I or anyone else is proud of the WS-* shit? Do you have to love a stupid acronym to know how to google it? It's not about whether WS-* is good or bad. It's about posters on a site whose motto is 'News for Nerds' needing 3rd parties to google acronyms for them.

I'm actually *proud* that I have no relationship with it.

In practice, it's one of the dumbest things out there.

Preaching to the choir, buddy. You ain't the first one to find its flaws. But don't let that get in the way of feeling intelligent by repeating what most people already know.

Did you actually look at the fucking results from what you googled? Or were you just in such a hurry to be an arrogant twat that you couldn't bother?

Yes, and the results right on top contain, among other things... tada... web services. Shit, let's forget about Google. How about Wikipedia, that oh-so-not-new and wonderful site that lists almost every type of shit, including... tada... an entry for WS-*.

So what's your gripe anyway: that people think WS-* is a good thing (in which case you're building a strawman, because no one is making that claim here, certainly not me), or that the Google results didn't spoon-feed you the precise answer of your liking?

It refers to the plethora of web-services specifications, most of which take a fairly complicated protocol (XML over HTTP) and add huge new layers of mind-boggling complexity.

You don't ever need WS-*, except when you find you do because you're dealing with the situations that the WS-* protocol stack was designed to deal with. When that happens, you'll reinvent it all. Badly. JSON isn't better than XML, nor is YAML; what they gain in succinctness and support for syntactic types, they lose at the semantic level. REST isn't better than SOAP, it's just different, and security specifications in the REST world are usually hilariously lame. Then there's the state of service description, where WSDL is the only spec that's ever really gained really wide traction. WS-* depresses me; I believe we should be able to do better, but the evidence of what happens in practice doesn't support that hunch.

REST is better than SOAP because it uses the features of the transport instead of ignoring them and duplicating them in an opaque fashion. SOAP is like having every function in your program take a single argument consisting of a mapping of arguments. Or a relational database schema with only three tables: objects, attributes, and values. In other words, SOAP is an implementation of the Inner Platform antipattern.
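To make the inner-platform point concrete, here's a sketch of the same hypothetical "fetch user 42" call both ways (the host, endpoint, and XML namespace are made up for illustration):

```python
# REST: the transport's own verb, URL, and status codes carry the meaning.
rest_request = (
    "GET /users/42 HTTP/1.1\r\n"
    "Host: api.example.com\r\n"
    "Accept: application/json\r\n\r\n"
)

# SOAP: one POST to one endpoint; the verb, the target, and the outcome are
# all re-encoded inside an opaque XML envelope -- the "inner platform".
soap_body = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetUser xmlns="http://example.com/userservice">
      <UserId>42</UserId>
    </GetUser>
  </soap:Body>
</soap:Envelope>"""
soap_request = (
    "POST /UserService HTTP/1.1\r\n"
    "Host: api.example.com\r\n"
    "Content-Type: text/xml; charset=utf-8\r\n"
    'SOAPAction: "http://example.com/userservice/GetUser"\r\n'
    f"Content-Length: {len(soap_body)}\r\n\r\n" + soap_body
)
```

The REST request is self-describing at the HTTP level; the SOAP request looks identical to HTTP (a POST to one URL) no matter what operation it carries.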

As a regretful author of several WS-* specs, after I got sucked into the vortex of IBM and MS when they passed too close to our academic lab, I felt exactly as Eran Hammer stated in his blog. He wrote, "There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts... It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career." I have used so many of those same phrases in reflecting on my experience with other veterans of that period!

And I'll tell you, XML and SOAP have no semantics either. They simply have a baroque shell game where well intentioned people confuse themselves with elaborate syntax. XML types and type derivation are syntactic shorthands for what amounts to regular expressions embedded in a recursive punctuation tree. There is absolutely no more meaning there than when someone does duck typing on a JSON object tree, particularly after the WS-* style "open extensibility" trick is added everywhere, allowing any combination of additional attributes or child elements to be composed into the trees via deployment-time and/or run-time decisions.

As a result, I am rather enjoying the current acceptance of REST and dynamically typed/duck typed development models. It is much more honest about the late-binding, wild west nature of the semantics involved in our everyday web services.

Ignore all concerns but scalability, and REST becomes far preferable to SOAP. The overhead of XML -- usually an order of magnitude in data size -- can be a huge, undesirable impact. That said, there's one aspect of SOAP that popular REST specs are missing: a definition language. With the help of WSDL, SOAP gained cross-platform client generation and type safety. REST protocols would do well to leverage this concept, at least for invocation parameter definitions. In most cases, REST result

Ignore all concerns but scalability, and REST becomes far preferable to SOAP.

You don't have to ignore any concerns. SOAP was always a bad idea, as there is nothing to be gained from it you cannot work out by the combination of the HTTP protocol with REST style access.

This was obvious even in the very earliest days of SOAP, when people were already noting that REST was so much more practical. I had to use it off and on with various internal IT projects, but it was always a bad deal, and just about every one eventually moved to a REST-style service so people could get work done.

That said, there's one aspect of SOAP that popular REST specs are missing: a definition language.

As you note, it's called JSON, and we've been using it for years. It doesn't "need to be in the spec" when everyone is doing it that way.

But even then, having a documented result schema would be a huge improvement

No, it's really not useful. It's overhead. It takes more effort to maintain such a formal interface than to have people simply consume JSON as they will. And often the parts of the system that are supposed to process those formal definitions fail. All around just a horrible block to getting things working the way you like.

SOAP got popular because Java and especially .NET promoted it as the way to write web services. So, like XML, it's another case of an overengineered design-by-committee solution becoming popular simply because using it was the path of least resistance, it being in the standard library. Most people using it that way don't actually have a clue how it works, and they certainly didn't pick it because of the way it's designed.

Very much so. It starts with simple things like the unclear difference between attributes and child elements, and goes all the way down to the unholy mess that is DTD. Don't even get me started on some of the associated tech like XML Schema.

Yes, I know SOAP is quite widespread. This is due to Java and C# making valiant efforts to build enough tooling around it to reduce the pain, or at least building a system where you have even odds of making a client that can communicate with a server...

But that does not change the fact that underneath it is a nightmare, things can still go wrong, and everyone's life becomes SO much easier when you go REST with JSON.

The real death of SOAP was the rise of mobile clients, which do NOT have the processing

No, it's really not useful. It's overhead. It takes more effort to maintain such a formal interface than to have people simply consume JSON as they will. And often the parts of the system that are supposed to process those formal definitions fail. All around just a horrible block to getting things working the way you like.

Couldn't disagree more. Frameworks and protocols are meant to make life easier. What I see with many implementations based on REST are frameworks that, through the lack of a published

As you note, it's called JSON, and we've been using it for years. It doesn't "need to be in the spec" when everyone is doing it that way.

FFS! JSON IS NOT A DATA DEFINITION LANGUAGE!!!

Just get a fucking clue. JSON is a syntax, nothing less, nothing more. It is up to the client to inspect the packet, and it has NO WAY to validate that the contents of the packet are indeed correct. Contrast this with an XSD, which would outline which elements could exist, which attributes they had, where they could appear, what they could contain, and even limit exactly how many could exist.

JSON provides none of that. Also, JavaScript, which is what JSON comes from, is a dynamic
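For illustration, here's roughly what every client ends up hand-rolling when all it has is a JSON packet and no schema (the "video" payload shape here is made up); an XSD or similar schema would declare these constraints once, server-side, and let any generic validator enforce them:

```python
import json

def validate_video(payload: str) -> dict:
    """Duck-typed, hand-written validation of a hypothetical JSON payload."""
    obj = json.loads(payload)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    if not isinstance(obj.get("title"), str):
        raise ValueError("'title' must be a string")
    if not isinstance(obj.get("duration"), int) or obj["duration"] < 0:
        raise ValueError("'duration' must be a non-negative integer")
    tags = obj.get("tags", [])
    if not (isinstance(tags, list) and all(isinstance(t, str) for t in tags)):
        raise ValueError("'tags' must be a list of strings")
    return obj

video = validate_video('{"title": "Demo", "duration": 90, "tags": ["a"]}')
```

Every client of the service repeats some version of this, and each one enforces slightly different rules -- which is exactly the gap a data definition language fills.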

What I was arguing against is NEEDING a data definition language. That has ALWAYS been needless overhead for any web service I have ever seen, and in fact you are limiting clients by mandating a single possible data type for a field when a client might want to treat something differently.

And having a Schema is NOT WASTEFUL -- it's a condom to prevent asswipes like you

You do that with REST over HTTP at least using media types and JSON Schema, which are starting to gain more popularity with API developers. I'd argue there's nothing those systems have over what REST+JSON can provide if used properly. The problem is that most things that claim to be RESTful aren't really. The community is starting to move away from using the term "REST" to describe things, especially application APIs, because it has those connotations attached to it (see: facebook, twitter, etc APIs, the

The problem with SOAP and WS-* stuff isn't XML. It's rather that it takes, IIRC, five levels of nesting of said XML to call a simple web service that takes an integer and returns another one. In other words, it's ridiculously overengineered for the simple and common cases, while supposedly covering some very complicated scenarios better - a claim that I cannot really verify, since I've never in my life seen a system architecture, even in the "enterprise", where that complexity was actually useful.

I'm gonna stop you right there. You should get a big slap in the face for saying REST and SOAP are on the same level!

You're right about that, they're not the same thing. They're fundamentally different ways of viewing an application on the web (one is about describing things beforehand, the other at runtime; one is about factoring verbs first, the other is nouns first). But from the perspective of the big picture, they're really not that different.

SOAP sucks big monkeyballs and REST doesn't, period.

That's what it seems like to you, but when you're working with applications that you're building on top of these webapps, SOAP works better. The tooling is better. The separatio

It doesn't have to be perfect - only "good enough".
Look at all the technologies we're currently using: The X Server, HTTP, and so on. None of it is perfect, but "good enough".

So instead of moaning, do something to improve it!

Improvement can only take place when things can be salvaged at a reasonable cost. When the architecture of things is bad enough to cross a certain point, it is best to start over. The software industry has plenty of live examples of this, accumulated over the last 30-40 years.

Nobody uses X Servers for what they were designed for (though I don't dislike the concept), and the only problem with HTTP is that people are abusing it for things it shouldn't be used for. By design, HTTP is a stateless pull protocol, and people are abusing it by forcing state, streaming, and push onto it for no good reason.

Lack of perfection is not the problem; the problem is high-level idiots with influence reinventing high-level wheels full of compromises because they don't know better and should never have b

Once a spec has spent too long trying to get from good enough to perfect, often by gluing on so many options, exceptions, and extensions that nearly anything can be said to comply but nothing can be said to implement it comprehensibly, there can be no good enough any more. The closest you can get is to carve a bunch of it away and call a cleaned up subset of it good enough.

The resulting specification is a designed-by-committee patchwork of compromises that serves mostly the enterprise. To be accurate, it doesn't actually give the enterprise all of what they asked for directly, but it does provide for practically unlimited extensibility. It is this extensibility and required flexibility that destroyed the protocol. With very little effort, pretty much anything can be called OAuth 2.0 compliant.

Sounds familiar. For anyone following the Smart Grid work, this is exactly why Smart Energy 2.0 is a fiasco. All of our major standards organizations (IEEE, ANSI, IETF, etc.) have been taken over by bureaucratic-minded industry and government consultants -- parasites that feed first on the drawn-out work within the standards organization that results in a "flexible" specification (meaning that it's not a specification at all), then feed on any group that tries to implement the standard because they'll need the "expert" insight in order to make the "flexible" damn thing work at all.

SIP is not merely nearly as bad; I would say that SIP is an abomination and that the well-thought-out, well-designed H.323 should have won the soft-phone protocol wars. But as usual, the Worse is Better [wikipedia.org] approach won...

To be fair, it's a hard problem. Let's take the analogous example of a word processor. Surely we can come up with something less bloated than Microsoft Word? Let's just get rid of all the arcane features that only 1 percent of the user base wants. That sounds good, until you find that entire industries (such as legal) run their business on Word and depend on those arcane features. Another user base (such as sci pubs) might need an entirely different subset of arcane features. Then there are those glo

Option 4:
- Focus on a specific use case and let others focus on other use cases, rather than trying to make one product that is a jack of all trades and a master of none.
There's no rule that says all problems must be solved with one piece of software.

All of our major standards organizations (IEEE, ANSI, IETF, etc.) have been taken over by bureaucratic-minded industry and government consultants

Sad but true. About a decade ago I was part of an IETF standards effort that was turning into crap fast. When someone finally decided to run an interop test on implementations, the conclusion was "this protocol does not work". The working group chair's comment on this was "we'll push it through as a standard anyway and then someone will have to figure out how to make it work". My (private) reaction to this was "The IETF has now become the ISO / OSI". In other words, it had become the very thing that it was

Good article; quite interesting to see the problems a community faces when going through standards processes.

Our standards making process is broken beyond repair. This outcome is the direct result of the nature of the IETF, and the particular personalities overseeing this work. To be clear, these are not bad or incompetent individuals. On the contrary – they are all very capable, bright, and otherwise pleasant. But most of them show up to serve their corporate overlords, and it’s practically impossible for the rest of us to compete. Bringing OAuth to the IETF was a huge mistake.

That is a worrisome situation. With the internet's openness being so much based on open standards, the idea that the corporate world is taking over standards bodies and sabotaging them to fulfill their own selfish interests is quite problematic, to say the least.

As for the actual concerns he is raising about OAuth 2.0, this one is particularly striking:

Bearer tokens - 2.0 got rid of all signatures and cryptography at the protocol level. Instead it relies solely on TLS. This means that 2.0 tokens are inherently less secure as specified. Any improvement in token security requires additional specifications and as the current proposals demonstrate, the group is solely focused on enterprise use cases.

The enterprise-use-cases problem is partly for structural reasons. The IETF process makes it most natural to participate if you're a representative of a company, because it is very long, requires many meetings (some of them in-person), and therefore is most feasible to participate in if someone is paying your salary and travel to spend 3 years standardizing a protocol. Sometimes academics participate as well, if it's a proposed standard that is very close to their interests, enough so that it makes sense to

No, it's how it should have been to begin with. Bearer tokens are now pure capabilities supporting arbitrary delegation patterns. This is exactly what you want for a standard authorization protocol.

Tying crypto to the authorization protocol is entirely redundant. For one thing, it immediately eliminates web browsers from being first-class participants in OAuth transactions. The bearer tokens + TLS makes browsers first-class, and is a pattern
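A sketch of that contrast (the token value, URL, and secret below are made up): an OAuth 2.0 bearer request is trivially simple because TLS carries all the security, while OAuth 1.0 required a per-request HMAC signature -- here heavily simplified, since the real base string also folds in the nonce, timestamp, and all request parameters:

```python
import base64
import hashlib
import hmac

# OAuth 2.0 bearer style: possession of the token is the whole credential;
# the request needs one header, and TLS must protect it in transit.
bearer_headers = {"Authorization": "Bearer hypothetical-token-value"}

# OAuth 1.0 style: each request carries an HMAC-SHA1 signature computed
# with a shared secret (simplified sketch of the signing step only).
def sign(method: str, url: str, secret: str) -> str:
    base = "&".join([method.upper(), url])
    digest = hmac.new(secret.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

signature = sign("get", "https://api.example.com/photos", "consumer-secret")
```

The bearer request is something a browser can emit natively; the signed request requires crypto code on every client, which is exactly the trade-off being argued about here.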

Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.

Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.

1.0 had some issues when you moved beyond web apps (JavaScript or mobile apps), but I am much more confident of its security.

Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.

TLS is not broken at all; using it properly can be difficult. This, along with the lack of redundant security mechanisms, is the reason Eran Hammer didn't like relying solely on TLS. If you think TLS is broken, you may be confusing it with the public-key infrastructure everyone uses for HTTPS. The problems with poorly run signing authorities are not fundamentally technological but administrative. Outside of accessing public HTTPS sites with a browser, you can take more control over the certificates and policies used for TLS authentication.

To be more exact, the key to using TLS well is controlling the code that decides whether a particular chain of certificates (the ones authorizing a connection) is actually trusted. HTTPS does this one particular way (a fairly large group of root CAs that can delegate to others, coupled with checking that a host is actually claiming to be able to act for the hostname that was actually requested), but it isn't the only way; having a list of X.509 certificates that you trust and denying all others is far mo
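As a sketch of that "take control of the trust decision" approach using Python's stdlib (the pinned-certificate path in the comment is hypothetical):

```python
import ssl

# A strict client-side TLS context: hostname checking on, certificate
# verification required, and a modern protocol floor.
ctx = ssl.create_default_context()
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# To trust ONLY your own CA (or one pinned certificate) instead of the
# public root bundle, load it explicitly; handshakes whose chains end
# anywhere else will then fail:
# ctx.load_verify_locations(cafile="/etc/myapp/pinned-ca.pem")
```

With a context like this, the application -- not the browser's root store -- decides which certificate chains are acceptable, which is the point being made above.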

Yeah yeah, I know: if you don't already know and can't be bothered to go looking, you must be a dribbling buffoon who should not dare even to use the internet, let alone visit the hallowed and sacred Slashdot. But:

OAuth is an open standard for authorization. It allows users to share their private resources (e.g. photos, videos, contact lists) stored on one site with another site without having to hand out their credentials, typically supplying username and password tokens instead. Each token grants access to a specific site (e.g., a video editing site) for specific resources (e.g., just videos from a specific album) and for a defined duration (e.g., the next 2 hours). This allows a user to grant a third party site access to their information stored with another service provider, without sharing their access permissions or the full extent of their data.

I tried to implement OAuth v1 on a mobile device. What a pain in the hole. And it all fell down once you had to get the user to fire up the browser to accept the request. There was no way (that I could figure out) to handle the callback, so instead it seems to have been implemented via a corporate server, thereby defeating the whole purpose of it. The easiest to work with was Dropbox.
I never got what extra level of security sorting the parameters provided; the signature would show up any tampering anyway, so it just means you gobble up memory unnecessarily.

I never got what extra level of security sorting the parameters provided; the signature would show up any tampering anyway, so it just means you gobble up memory unnecessarily.

Well it's good that someone else understood it and forced you to do it, then.

But in actual response to your answer: it allows the request signature to be calculated by the server you're sending the request to so that it can ensure that the parameters have not been tampered with.

The reason you had to sort the parameters etc etc was because OAuth 1.0 was designed to be implementable by a PHP script running under Apache on Dreamhost. Which meant you didn't get access to the HTTP Authorization header, and you didn't get access to the complete URL that was accessed. So we had to work out a way to canonicalize the URL to be signed from what we could guarantee you'd have: your hostname, your base URL path, and an unsorted bag of URL parameters. Believe me, we *wished* for a straightforward URL canonicalization standard we could reference. None existed. So we cussed a lot, bit the bullet, and wrote one that was as fast and simple as possible: sort the parameters and concatenate them.

Go yell at the implementors of Apache and of PHP. If we could have guaranteed that you'd have access to an unmangled Authorization: HTTP header, the OAuth 1.0 spec would have been 50% shorter and a hell of a lot easier to implement.
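The canonicalization described above ended up in the OAuth 1.0 RFC (5849, section 3.4.1). A compact sketch of the base-string construction and HMAC-SHA1 signing -- collecting the oauth_* parameters (nonce, timestamp, keys) is left to the caller:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def percent_encode(s: str) -> str:
    # RFC 5849 mandates RFC 3986 encoding with only these characters left bare.
    return quote(s, safe="-._~")

def signature_base_string(method, base_url, params):
    # Encode each name and value, sort the encoded pairs, join with '=' and
    # '&', then glue method, URL, and normalized parameters together.
    pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params)
    normalized = "&".join(f"{k}={v}" for k, v in pairs)
    return "&".join([method.upper(), percent_encode(base_url),
                     percent_encode(normalized)])

def hmac_sha1_signature(base_string, consumer_secret, token_secret=""):
    # The signing key is both secrets, each percent-encoded, joined by '&'.
    key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

base = signature_base_string("get", "http://example.com/r",
                             [("b", "2"), ("a", "1")])
```

The double encoding (each component is percent-encoded, then the whole normalized string is encoded again when joined) is exactly the step that trips up implementers, as the comments below note.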

Hi Mark, thanks for replying.
Do you not think it was a flaw to target a spec towards a specific language/architecture?
Another thing that really pissed me off was the complete lack of help testing my implementation. I'd have given up far sooner if it hadn't been for this site: http://term.ie/oauth/example/client.php [term.ie]

Do you not think it was a flaw to target a spec towards a specific language/architecture?

From the perspective of someone on the outside of the process, it was both a mistake and not a mistake. It was a mistake in that it causes too many compromises to be made. It was not a mistake in that it allowed a great many deployments to be made very rapidly. IMO, they should have compromised a bit less and pushed back at the Apache devs a bit harder to get them to support making the necessary information available.

Speaking of sorting parameters, there is at least one issue I still see in a lot of libraries. The spec says to encode things, then sort them. Many of the libs I've seen do it the other way around. Sorting first is the most obvious way to do it, but I guess the spec was trying to avoid issues with locale-specific collations by forcing everything to ASCII first. Most sites use plain alphanumeric parameter names, so people get away with doing it either way.

IIRC, you have to encode the key, encode the parameter, join them with '&' and encode again, and then sort them, generate the signature, and encode the signature key and the signature itself. Or something. Oh, and the encoding routine is urlencode plus some extra characters, so that has to be written from scratch too.
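A small demonstration of why the order of operations matters (assuming RFC 3986 percent-encoding): for plain ASCII parameter names the two orders happen to agree, but non-ASCII names flip, because the encoded form starts with '%', which sorts before most printable characters:

```python
from urllib.parse import quote

def enc(s: str) -> str:
    return quote(s, safe="-._~")  # RFC 3986-style percent-encoding

# 'z' sorts before 'é' as raw text (U+007A < U+00E9), but after encoding
# '%C3%A9' sorts before 'z' ('%' is 0x25, 'z' is 0x7A).
names = ["z", "é"]
sort_then_encode = [enc(n) for n in sorted(names)]   # ['z', '%C3%A9']
encode_then_sort = sorted(enc(n) for n in names)      # ['%C3%A9', 'z']
```

Two libraries disagreeing on this order will compute different signature base strings for the same request, and the signatures simply won't match.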

Then why didn't you? Last time I checked, Apache was open source, so you could have submitted your required changes. I'm not so sure about PHP, but maybe there is a way to add an extension that grabs the unmangled header from your newly customised Apache.

The problem is that, AIUI, the goal was to make things work on shitty webhosts. So working on an up-to-date Apache/PHP with the right settings is not enough; you have to work on whatever old version of Apache/PHP and whatever crummy config the webhost offers.

Sure, but if you don't fix things, they'll never get fixed. The OP just seemed too whiny about how things were difficult. Boohoo.

In a year or two all those old Apache webhosts would have been upgraded -- or TBH, if he'd made the patch and gotten it accepted, they would pretty much all have been upgraded in the next update release. And those that weren't would be really insecure anyway, due to other unpatched vulnerabilities. I think webhosts tend to update their servers reasonably regularly.

I’ve worked on related standards and I can identify with much of Eran’s frustration. Eran is a smart, dedicated, passionate person who has worked very hard to make OAuth work for everyone - not just those looking to profit from it. And OAuth is currently the best open-standard option for securing REST-based web services. I hope that when he thinks about OAuth, he thinks primarily about the huge contribution he has made, and not with regret.
The standardization process ultimately