As an author I don’t expect everyone to like or agree with everything I write. I’m always up for constructive criticism, as it helps me improve my writing, and when carried out with a positive mindset I think it helps everyone involved in the dialogue learn something.

Unfortunately, not all reviews are constructive, and those that aren’t can be hard to handle. I don’t know why some people have to resort to name calling and harsh words if they have a technical disagreement with you; perhaps those folks lack emotional fortitude and feel big when they hide behind their blogs or email, or maybe they don’t realize how foolish their own words make them look. If you’re an author faced with such a review, do you sink to the level of the reviewer and fire back at them with harsh words of your own to try to put them in their place? Probably not, since that just makes you, too, look like a jackass. Do you just ignore them? Sometimes that’s the way to go, as you don’t want to encourage your new-found stalker to continue stalking you and getting off on having succeeded at trolling for attention. (I use the term “stalker” here because there are definitely people out there who seem to just sit and wait for you to publish something, anything, so they can immediately “review” it to prove to the world just what an idiot you are.) But sometimes ignoring the review is wrong, too, as it might make it seem as though you have no answers for the reviewer’s criticisms.

Steve Jones just published such a gray-area review of my “Convenience Over Correctness” column. I don’t know him but he’s apparently a dyed-in-the-wool SOA fan, so it’s totally unsurprising that he disagrees with me. I started to go through his review paragraph by paragraph and respond to each point, but I found that it wasn’t very productive, mainly due to the personal insults and attacks he decided to throw in, oh, every fourth line or so. I’ll therefore just look at three of his criticisms in response, and leave it at that.

First, Steve chides me for pointing out what he deems to be obvious:

So far these problems [with RPC-based systems] have been detailed [in the column] as

Remote calls have more issues than local ones

Remote transaction processing is a bitch

There are no other issues raised and both of these points fall into the “well duh” school of pointing out the obvious.

But then he says:

I’ve built distributed systems and I’ve had to manage teams who delivered the architectures I created and I’ll say that

60% of the people didn’t understand the challenges and wouldn’t have understood Waldo

30% would have read it and got it wrong

6% Understand the challenges and can make a decent crack at it with minor problems

4% actually understand what it takes

These two sections seem contradictory. How can Steve fault me for “pointing out the obvious” when by his own estimate only 4% of my audience actually understands the issues?

I know for a fact from my columns and my conference presentations that there is a great desire for this sort of information, and that not everyone truly understands the hard issues of distributed computing, so at least Steve and I agree on that. My preference, though, is to help provide that information and raise awareness, whereas Steve’s seems to be to just assume everyone else is a “muppet,” thus enabling him and his other 4% friends to do all the heavy lifting and spoon-feed everyone else simple frameworks they might, just might mind you, be able to understand.

This is elitism, pure and simple, and it’s an expensive and non-scalable model. It puts the self-proclaimed 4% experts in control and wastes the vast skills and talents of the majority.

Coincidentally, my September/October column is going to touch on this. It’s already partially written and is due in a few days, and while it’s not at all a response to Steve’s review, it will explain in part why elitist systems simply cannot, and do not, last. I’m sure Steve will completely hate it.

Then there’s this:

What a load of crap. Seriously this is an unmitigated pile of tripe in what it means to write distributed systems. It makes two basic errors

That the architecture and design of a system is focused on a programming language

See number 1

Ignoring the foul language and such, how can anyone claim to be an expert in real-world distributed systems development like Steve does, yet apparently be unaware of the various Java and C# systems out there, for example, that use special meta-language annotations to export and expose language features directly as distributed system features? There are many out there who think you just throw some annotations on a class and it magically becomes distributed — they think only within the confines of their language, and magic frameworks provided by the 4% experts like Steve make all the distribution work under the covers. (Steve claims this approach is necessary because it’s all that the other 96% are capable of understanding, but IMO it’s really just one of the ways the big vendors and big consultants can continue to relieve uninformed enterprisey companies of their money.) Criticizing the column on this basis simply shows that Steve, a self-proclaimed expert, is unaware of the language-specific distribution frameworks out there, which is odd given that they’ve been proliferating for years.
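The “throw an annotation on a class and it magically becomes distributed” style described above can be sketched with a Python decorator standing in for the Java/C# meta-language annotations. Every name here (`remote`, `OrderService`) is hypothetical, and a real framework would of course do actual marshaling and networking; the point is how completely the remoteness disappears from the caller’s view:

```python
import functools

# Hypothetical "magic" decorator standing in for annotations like Java's
# @WebService: it pretends to export a plain local method as a remote
# operation while hiding every network concern from the caller.
def remote(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # A real framework would marshal the arguments, send them over the
        # wire, block on the response, and unmarshal it. None of the failure
        # modes (latency, partial failure, retries) appear in the caller's
        # code -- which is exactly the criticism above.
        return func(*args, **kwargs)  # local call masquerading as remote
    wrapper.exported = True  # marker a framework could scan for
    return wrapper

class OrderService:
    @remote
    def total(self, quantity, unit_price):
        return quantity * unit_price

svc = OrderService()
print(svc.total(3, 10))  # looks like a plain local method call: 30
```

The caller sees an ordinary method call; nothing in the language forces them to confront the distribution issues hiding under the decorator.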

The only other particular issue I’ll remark on in Steve’s review is his idea that he could quickly cobble together some code in his blog to “prove” that RESTful systems can fit in a programming language, and thus also suffer from the “convenience over correctness” problem. The column already states that people are trying to do just that, but that from what I’ve seen they invariably run into problems with various REST constraints such as the hypermedia constraint. If it were so easy, Steve, we’d already be overrun with RESTful language frameworks. Thankfully, though, we’re not, because the two simply don’t mix conveniently, which is why I’ll continue to stand by what I wrote.

For many years I lived in the same enterprisey SOA world as Steve, and so the only positive thing I got out of his review was a reminder of how glad I am to be away from it. I met a lot of bright people there, don’t get me wrong, but I miss neither the elitism nor the “that will neither work nor scale within the enterprise, you complete muppet!” attitude that was doled out when anyone dared make a suggestion that might actually improve things or threaten the control held by those elite 4%. Thankfully, though, that culture can’t last forever.

I guess it’s more a matter of your country/society deciding what’s considered foul and what’s not …

“prove” that RESTful systems can fit in a programming language, and thus also suffer from the “convenience over correctness” problem

He never said that … he said that it is possible to do something like hiding the network even if you are using HTTP/REST at the back .. that’s true .. whether or not such an argument is of any use, or whether it matters at all, is a different matter (see my comment there)

He never said that they suffer from the convenience-over-correctness problem. He said your article suffered from the convenience-over-correctness problem … in that your arguments are specious (that’s what he thinks)

what I’ve seen they invariably run into problems with various REST constraints such as the hypermedia constraint

can you elaborate ? I disagree with you .. I am pretty sure there is an idiot out there who will be able to do this …

Heck, I can tell you a very smart thing to do: use Linda, and I am sure you can make a RESTful language that hides the network while still following all the REST constraints (well, nearly all).

One point he does make is that it doesn’t matter .. Oracle/IBM/Microsoft are pushing WSDL/SOAP and you will have to use it if you want a job (unless you are really talented) …

One insight (if you can call it that) that I am getting is that WSDL/UDDI tried to do what HATEOAS tries to do in REST .. delay the binding of the application/client to be as late as possible .. Fielding’s argument is that the only way you can do this with *any* semblance of normalcy / complexity management for large-scale systems is via a uniform interface .. WSDL/UDDI didn’t do a uniform interface and that’s where they got killed .. the complexity .. (let’s face it .. no one gives a crap about the “against-the-grain-of-the-web” argument .. the only reason UDDI is losing is that it was too complex and a pain in the ass to use; I don’t mention WSDL here cos I can see many still use it today)

What I meant by elitism being expensive and non-scalable is that it forces businesses to use only the 4% elite to get anything real done. Any business guy worth their salt would immediately find and eliminate that bottleneck. Steve apparently likes to go on and on about “business SOA” and the non-technical side of SOA, so how he thinks this elitist model is at all tenable is way beyond me.

As for frameworks and the hypermedia constraint, I’ve known a few people who’ve been trying to tackle that one for a while now. Yes, I’m sure they’ll get there eventually, but I doubt that it’ll be worth it in the end.

And yes, Steve’s whole “cool vs. career” line is extremely lame. I feel embarrassed for him that he thinks it’s clever and keeps using it.

BTW, anonymous, I see over on Steve’s blog that you say you’ve been trying to convince me that REST is suitable only for very large-scale systems. Given that I use it every day for systems of various scales, from very large down to pretty small, and it works quite effectively for all, you won’t be convincing me of that argument. Have you actually tried it at various scales, and if so, what problems did you encounter that makes you believe this?

If it were so easy, Steve, we’d already be overrun with RESTful language frameworks. Thankfully, though, we’re not, because the two simply don’t mix conveniently, which is why I’ll continue to stand by what I wrote.
Would we? Could you give a specific example of something in REST that can’t be done via a generator or a programming language? Mainstream investment has yet to be misapplied to this area as it has been misapplied so many other times, so unless there is an NP-complete problem lurking in there somewhere, it’s going to happen, whether bad or not.

My point is not elitism; it is age-old Mythical Man Month and Peopleware stuff, and I completely agree that people should be educated, but they should not be misled. I’ve sat through a Doug Lea presentation and understood almost every third word; this doesn’t mean I should kid myself that I can write multi-threaded code like Doug.

A practical and business oriented approach is to think less about technical perfectionism and more about understanding how to blend and deliver with the various different skills that you have at your disposal. This makes the most of the talents and helps them develop and evolve over time which tends to make them learn more.

If you are saying that Peopleware and MMM are wrong and everyone is equal then fair enough, but somehow I doubt that.

Change in an enterprise is about people, process, and technology; just changing the latter and condemning all previous approaches as “failed” achieves the “success” that IT currently enjoys.

@Steve: generally I don’t have time for people who are as disrespectful as you are, but here’s a better idea: show me an existing programming language framework that fully and conveniently supports all of RESTful HTTP in a way that REST is hidden behind programming language constructs. I already gave the example of the hypermedia constraint, and no, your off-the-cuff code in the blog does not come anywhere close to addressing it in a complete and convenient manner. I’ve been watching some pretty smart people trying to build these frameworks for a couple of years now, and they’re not having an easy time with it. If you think it’s so easy, then go do it yourself and prove me wrong.

As for Peopleware etc., yes, Steve, it’s quite clear that you’ve sure read a lot into my writing that I didn’t actually put there. Were you aware that in the past I’ve held positions of chief engineer, senior architect, chief architect, vice-president of platform technologies, and head of product innovation, for example — do you honestly think I know nothing about the people side of the business? You apparently have a “business SOA” agenda, so you seem to lash out at any articles that cover only technology, ignorantly accusing them of chasing silver bullets and ignoring the people side of the equation, apparently just so you can lord yourself over their muppet authors. Have you ever read my column from a few years ago entitled “The Social Side of Services,” for example? Or gee, maybe I shouldn’t have mentioned that column, since you’ll now go off and just sort of half read it like you did the most recent one, and then write a new self-contradictory review just so you can inexplicably call me even more names.

You are absolutely correct that all people are not equally talented technically. However, if they are decent programmers and willing to learn there is no reason they couldn’t pick up the necessary skills to be excellent distributed system designers.

You say “A practical and business oriented approach is to think less about technical perfectionism and more about understanding how to blend and deliver with the various different skills that you have at your disposal. This makes the most of the talents and helps them develop and evolve over time which tends to make them learn more.”

This seems to be self-contradictory, though. If you are willing to bend your design to meet the lowest common denominator, how will the people at the bottom learn what is correct? Frankly I think it is much better to strive for the highest technical pedigree in your solution and to work with the less knowledgeable people so that they can grow as developers. This also necessitates a clear path to explain how things work and why they work. This is where REST shines. It is an excellent technical design, yet with simple enough constraints to make it easy to explain. With all the useless jargon of WSDL etc., it is no wonder that you expect only 4% of people to grok your system. A deep hard look at the SOA industry, that clearly Steve Vinoski has already taken, would show that this complexity is there to breed more complexity and syphon money out of enterprisy shops without the fortitude or technical aptitude to show the solutions for the waste that they are.

Being anonymous, I can be the bastard I always wanted to be. :D OK, frankly, I am not like you guys, well respected and adored by so many .. I am afraid that a prospective employer might look at my postings/comments and then kick me out on my ear. So I prefer being anonymous .. but I can assure you that my comments as anonymous are no different than those I would make in person (I would probably use better language … but if I can’t be politically incorrect on the internet then where have we come?)

>REST is suitable only for very large-scale systems

Dude, what you/most people do and talk about (on rest-discuss etc.) is not REST. Using clean URIs and the HTTP verbs correctly is not what makes REST so insanely powerful.

Clean URI design doesn’t achieve much. Even worse, it gives people a false sense of security that “I have followed RESTful design, so my app will scale.” Clean URI design is just plain common sense that is now resurgent with the many books brought forward under the REST banner. It’s not surprising that clean URI design is good; the KISS principle is much older and much more pervasive in computing than REST. But clean URI design is not the property that gives REST its power. That crown belongs to HATEOAS. Look at the web right now as you use it. If you were required to KNOW that Google searching is a GET query and ordering something on Amazon is a POST query, instead of being told by the server, then the application wouldn’t scale (even if you knew the URI you wanted thanks to clean URI design!). What makes the web so insanely powerful is that you can just go to google.com (a bootstrap URI which the TAG requires to be GETtable), see the form, type in what you want, and the google.com html page tells you where to do the query, what verb to use, etc. The fact that you can also do google.com/search?q=term is nice, but it’s not what you use, and it isn’t something you should hardcode into applications you develop!
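The google.com example above can be sketched minimally, assuming a toy in-memory “server” (all names and URIs here are made up). The client hardcodes only the bootstrap URI and learns the search form’s method, action URI, and field name from the representation the server returns, rather than baking `/search?q=...` into its own code:

```python
# Toy "server": maps a URI to the representation a GET would return.
# The bootstrap resource carries a search form, the way google.com's
# HTML page carries its search form.
RESPONSES = {
    "https://example.org/": {
        "forms": {
            "search": {"method": "GET",
                       "action": "https://example.org/find",
                       "field": "q"},
        }
    },
}

def fetch(uri):
    # Stands in for an HTTP GET plus parsing the representation.
    return RESPONSES[uri]

def search(term):
    home = fetch("https://example.org/")   # the ONLY URI the client knows
    form = home["forms"]["search"]         # server tells us everything else
    return (form["method"], form["action"], {form["field"]: term})

print(search("rest"))  # ('GET', 'https://example.org/find', {'q': 'rest'})
```

If the server later moves the search action to a different URI or field name, this client keeps working unchanged; a client that hardcoded the query URI would break.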

This is an insanely complex thing to do for SOA/machines, and I really don’t think it is worth the hassle for small-scale systems, as the advantages offered (caching, scalability, etc.) aren’t of much use there.

Once again, just doing PUTs/GETs/DELETEs doesn’t imply that your service is RESTful. Calling it RESTful just implies you don’t understand REST.

Think about it … in small-scale systems, how much did you REALLY use the HATEOAS constraint? If you did actually use it (the Ruby-Richardson-esque “connectedness” isn’t HATEOAS; for a good understanding of HATEOAS see Stu’s blog as well as Fielding’s rants on rest-discuss), was it worth the hassle and time? (Unless you planned for it to be used later on a large scale.)

Mainstream investment has yet to be misapplied to this area as it has been misapplied so many other times so unless there is an NP complete problem lurking in there somewhere its going to happen, whether bad or not.

Rofl … lmao … Maybe it will start with a “rather large systems integrator”?

(its a joke .. don’t take it to heart)

A practical and business oriented approach is to think less about technical perfectionism and more about understanding how to blend and deliver with the various different skills that you have at your disposal. This makes the most of the talents and helps them develop and evolve over time which tends to make them learn more.

I think what Vinoski is trying to say is: give up on RPC right NOW and use RESTful guidelines. Because of the complexity you might not be able to follow all of the REST constraints, but using the guidelines will help with evolvability later on. I agree with him there, but not with his argument that RPC in itself is totally wrong … as you said, “it’s not a binary thing”

fully and conveniently supports all of RESTful HTTP in a way that REST is hidden behind programming language constructs.

Maybe he should solve world poverty too while he is at it?

A deep hard look at the SOA industry, that clearly Steve Vinoski has already taken, would show that this complexity is there to breed more complexity and syphon money out of enterprisy shops without the fortitude or technical aptitude to show the solutions for the waste that they are.

Amen!

I think Tomayko said it best .. strive for simplicity, because distributed systems are hard enough in themselves .. no need to add WS-crap to make it harder.

@anonymous: fair enough. The reason I asked is that your comments are generally good, so I figured you might want credit for them, but I understand your reasons for anonymity. No worries.

BTW, there is a preview button, as long as you don’t sign in as anonymous. No, honest, there is. Trust me. ;-)

So what makes you think that my RESTful systems don’t follow HATEOAS? Aren’t you sort of preaching to the choir here? HATEOAS is precisely why I mentioned the hypermedia constraint in the column as one of the areas that RESTful language-focused frameworks tend to get wrong. HATEOAS is as beautiful on a small scale as it is on a large one. Not sure why you think it’s a hassle or insanely complex.

Steve, in the article you said “fundamentally flawed.” I have to admit that in retrospect I wish I’d altered the tone (but not the content); it annoyed me to see someone paint so black-and-white a picture in so gray an area.

Now, there is the obvious solution for hypermedia and REST, namely to use an OO language and map POST to object methods and links to other resources via references. From a “purist” perspective you could argue that this lacks dynamism and has issues with dynamic upgrades (hence the Dynamic Proxy suggestion). LISP, however, would be another language of choice for this, with its dynamic ability to process information, form new references, and convert flat data into rich data.
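The OO mapping suggested above might be sketched roughly like this, using Python as a stand-in for a Java dynamic proxy (all resource names and documents here are hypothetical): hypermedia links become references to further proxies, and POSTable actions become methods, so navigating the service looks like ordinary object traversal:

```python
# Toy resource store: each document has hypermedia links and POSTable
# actions. Stands in for representations fetched over HTTP.
RESOURCES = {
    "/orders/1": {"links": {"customer": "/customers/7"},
                  "actions": {"cancel": lambda: "order 1 cancelled"}},
    "/customers/7": {"links": {}, "actions": {}},
}

class ResourceProxy:
    """Wraps one resource; links become proxy references, actions methods."""

    def __init__(self, uri):
        self.uri = uri
        self._doc = RESOURCES[uri]  # stands in for GET + parse

    def __getattr__(self, name):
        # Only invoked for attributes not found through normal lookup.
        if name in self._doc["links"]:
            return ResourceProxy(self._doc["links"][name])  # follow link
        if name in self._doc["actions"]:
            return self._doc["actions"][name]  # expose action (POST) as method
        raise AttributeError(name)

order = ResourceProxy("/orders/1")
customer = order.customer      # follows a hypermedia link
print(customer.uri)            # /customers/7
print(order.cancel())          # order 1 cancelled
```

This is exactly the “purist” worry from the comment above: the traversal looks like plain object access, so nothing in the client reveals that each hop is really a network interaction driven by server-supplied links.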

Now, oddly enough, I’m not going to write a whole framework (there is a cartoon out there about that), but seriously, what is NP-complete about hypermedia? Please note that what I’m also saying is that some people/companies will do it _badly_, in fact probably worse than they do in RPC, because they won’t understand (or be worried about) the issues. It’s a solvable problem, but it’s also a problem that will be tooled, and some of those tools will be dreadful. Developers will use these tools.

Steve, having read your stuff for quite a few years and enjoyed quite a bit of it, I’m well aware that you know about the people issues and the like. My point was only that “elitism,” as you call it, is a fact of life.

I apologise for any offence caused, and you are right that my tone isn’t as constructive as it could be in the article. I don’t expect every article to always talk about business, that would be nuts, but presenting a given approach as “fundamentally flawed” and pushing the next approach is what causes IT to flit from one technology to another; never does it seem that something is “almost” right, it’s either rubbish or it’s great. Clearly I owe you a beer.

@Andrew: You said it all with the phrase “decent programmers”: yes, you are right that decent folks can grok distributed systems. I’m also not saying that an architecture should be lowest common denominator; I’m saying that it should take into account the _blend_ of skills that are available and enable people to succeed based on their abilities. One thing I always recommend is putting the talented people where they will have the most impact on the business and not wasting them on back-office pieces. So maybe those folks will use REST, and use it successfully, but the folks doing the HR process extension to SAP will probably be using BPEL and WS-* and still delivering successfully.

The reason I asked is that your comments are generally good, so I figured you might want credit for them,

:O thanks a lot for saying that .. coming from you it means a lot.

Btw, I thought you would be really incensed by my “boo-fucking-hoo” line … or is this some devious plot to get me to out my identity ? :D

So what makes you think that my RESTful systems don’t follow HATEOAS? Aren’t you sort of preaching to the choir here?

In a way, following HATEOAS means reducing the number of assumptions the client makes on its side. HATEOAS implies that everything is directed by the server via hypermedia. The more assumptions you make, the harder it becomes to code the client-side application. (The basic assumption of the uniform interface/safe GET is always there.)

Now, making assumptions isn’t necessarily a bad thing; in most systems which aren’t distributed it is the norm. My point is that as you increase the size of the system, the number of assumptions the client makes should gradually decrease. RPC lies somewhere between a system with no distribution and a Web application. The final decision is a tradeoff, and I agree with Steve Jones that this tradeoff involves a lot more than just technical merit. It might be sad/wrong but it’s true.

Can’t we all just get along? This is not a religion, which (maybe I am wrong) was the point of your article.

Words like ALWAYS, NEVER, and ONLY really don’t benefit software design or our progress as developers and architects.

Crazy metaphor: two construction workers show up at a job to work on a roofing project. One has a hammer, the other a nailgun. Do they start yelling at each other and swearing about which tool is best? No, they work side by side and get the job done.

fundamental: adjective: 1. forming a necessary base or core; of central importance. 2. affecting or relating to the essential nature of something or the crucial point about an issue. 3. so basic as to be hard to alter, resolve, or overcome.

flaw: noun: 1. a mark, fault, or other imperfection that mars a substance or object. 2. a fault or weakness in a person’s character. 3. a mistake or shortcoming in a plan, theory, or legal document that causes it to fail or reduces its effectiveness.

RPC is indeed, according to the definitions quoted above, fundamentally flawed. This point is not at all debatable. As I thought my column made quite clear, RFC 707, where RPC was first defined, points out some of its flaws, and numerous publications since then have detailed others. I don’t know for certain what your definition of “fundamentally flawed” is, but it definitely doesn’t mean “completely flawed” or “impossibly flawed” as you seem to think it does.

As for your NP-completeness argument, why would I choose to waste my time arguing over something so truly pointless? I prefer pragmatism, thanks. The bottom line is that RPC, by definition, exists to make distribution an extension of the programming language, while REST, by definition, does not. I thought Stu made this same point quite clearly. This point, just like the one in the previous paragraph, is certain.

Fielding defined REST last decade and published its definition in 2000. Eight years later, we still have no framework that completely covers all of REST and completely hides it conveniently within a programming language. That’s because, IMO and in my experience, REST proponents simply don’t think that way. Stu made this point as well. Those who might set out not understanding REST and trying to build such a framework never reach their goal, because they figure out REST along the way and ultimately realize they’re going down the wrong path — I’ve already seen this happen in a number of cases.

Steve, I completely agree with your points on RPC being fundamentally flawed. I’ve seen some nightmarish uses of RPC in the real world: fine-grained calls, 70-parameter methods, synchronously blocking applications that were inherently asynchronous, etc. I’ve seen surprisingly few implementations that attempted to use futures to batch requests and achieve some higher level of concurrency; my guess is that folk smarter than me have tried and failed. In other words, the only reason I can think of to use RPC is to integrate it into the language as you mention; it really serves no other valuable purpose, and it comes with a ton of downsides to boot.

What is interesting is that we’re at a point now where concurrency is a big deal. Intel just said, “prepare for hundreds/thousands of cores”. Procedural/iterative development as it stands today just doesn’t cut it and that’s why languages like Erlang are really getting a lot of traction. I wonder if these same folk defending RPC will defend Java soon enough? “Java is great for concurrency – it has a synchronized keyword and a threading API!” :) And when their team of 20 is writing 2m lines and debugging race conditions, and my 2 great Erlang guys are proof-reading their equivalent 2500 line implementation, will they defend Java still?

It’s funny how people take what they know as their identity and to move away from that is akin to losing themselves.

You make an important point, and it partly hits on what motivated me to write this recent series of columns in the first place. The times are indeed a-changin’, and we’re finding a variety of areas where there are much better ways of doing things than what the status quo offers. I mentioned a programming language renaissance in this posting, and it’s quite real. We’re also finding that the serious amount of accidental complexity associated with RPC-based approaches, SOAP, WS-*, and all that sort of stuff is simply not needed, and when we toss it away, it’s not just a technical win: it saves money for both short-term development and long-term maintenance, and it sometimes even opens new unforeseen business opportunities (serendipity). If you’re a person who’s stuck in the post-mainstream conservative/skeptic technology adoption realm like the SOA and WS-* guys, then there’s a chance you haven’t seen these changes yet, and also a good chance they’d frighten you if you did. But regardless, the changes will eventually reach even there, even if they keep their heads stuck in the sand (or elsewhere). Because of the way markets work, it’s inevitable.

The productivity gains to be won from these changes should not be underestimated. For example, people tied to their tired old imperative languages constantly give me grief for talking about Erlang — I wish I had a dollar for every time I’ve heard one of those folks whine about the syntax, when in reality that syntax is quite simple, rather elegant, and can be learned entirely within literally a day or two, even by one of Steve Jones’s muppets — but the productivity gains Erlang can provide are incredibly impressive. I see REST in the same vein; I’ve written way way more than my share of RPC-oriented and SOA-oriented code and infrastructure (as have you, Dion), so I believe I’m in an excellent position to judge the practicality of RESTful approaches against the others. Based on my experiences, and as I’ve written in several of my columns this year, I believe RESTful approaches are game-changers too, in part because of productivity gains but also in part because of the lower degree of coupling that can be achieved along several axes that can positively affect not only the resulting systems themselves but also what their development teams can produce and maintain.

I could be wrong but it seems like Steve Jones keeps wanting to separate the technical and the non-technical, but in reality you can’t. They’re intertwined, so as technologies change, you have to change along with them or you’ll get stranded in a spot of ever-increasing irrelevance. Sometimes the required changes are big, like the multicore systems Intel recently told us about, and those big technical changes require associated big technical changes in tools, languages, and architectures, as well as non-technical changes in teams, development processes, and even ways of thinking. There’s just no getting around it, so I’d rather change on my own schedule and terms than find myself painted into a corner, unable to change fast enough to keep up.

@Steve: “fundamental: so basic as to be hard to alter, resolve, or overcome.” I’m pretty sure that RPC doesn’t have flaws so basic that they are hard to alter, etc., etc.; people have still built working distributed systems with it, and done so “conveniently,” which would imply that it isn’t hard to overcome. Google seems to think that “fundamentally flawed” is a pretty big issue.

RPC has issues, of course; surely you aren’t claiming that REST has no issues?

It is a bit disingenuous to point to 8 years without a result as proof of the challenge. Firstly, REST has been a very, very minor development area for frameworks in that time, and secondly, the vendors who often create the worst frameworks have not committed themselves to the challenge.

@Dion: fine-grained calls are an issue that can occur with anything that goes across a network. It’s hard to see how REST encourages a coarse-grained approach.

@Steve: it’s beyond me how you can continue to argue over the definition and use of the phrase “fundamentally flawed.” None of my reviewers, including Doug Lea, had an issue with it. My editor is incredibly excellent, and he had no issue with it, because it’s precisely correct in this instance. In my previous comment I supplied the dictionary definitions for the words comprising the phrase, which clearly support how I used it, and yet you somehow managed to drop the whole definitions save for the one little bit that just barely supports your argument. Sigh. If it really bothers you that much, next time you read the piece, just place your thumb over the word “fundamentally.”

Every approach has flaws. To imply that I said REST has no flaws is ridiculous. The column, you’ll recall, is about the fact that RPC is an inappropriate abstraction for distributed systems, following on from the three previous columns. REST, as an abstraction, is not inappropriate, yet that doesn’t mean it’s perfect. Again, and I get tired of repeating this: even if you think the column is worthless, Stu already clearly explained all of this.

Perhaps you see REST as a minor, minor development, but that’s only because it’s not yet part of the technology adoption lifecycle curve you inhabit. For many of us, REST is far from minor, and it’s had some incredibly bright people using it for many years now. And I’ll say again (for the final time, because I’m tired of saying it) that REST is not a programming language abstraction, so trying to cram it underneath one à la RPC will not work very well. If someone wants to try to write such a framework, they’re obviously free to do so, but it’s very unlikely to be a winner.

@anonymous: you’re right to harp on HATEOAS, as it’s definitely where people tend to go wrong when building RESTful systems. Perhaps you should come out of hiding and submit an article on the topic to Stefan and InfoQ. Oh, and regarding your “cheap shot” comment, which I didn’t post: I disagree, given that the person in question uses the term frequently, and so I see it as a clear and consistent part of that person’s overall agenda.

@steve: Things are great. Just moved down to MA but still adjusting to the excess perspiration from the substantial heat increase. ;) Not complaining, though. :) How are things with you?

@Steve Jones
Fine-grained interaction is just one of the pitfalls of using RPC, but one that’s indeed very real, because the paradigm does nothing to prevent such an implementation. Consider a management application that queries a server for various statistics about its operation. A naive RPC approach might query the server separately for each name/value stat, one remote call per statistic.
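A hypothetical sketch of what such a naive fine-grained client might look like – the interface, stat names, and call pattern are all invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical RPC interface: each statistic costs one remote round trip.
interface ManagementService {
    long getStat(String name);
}

public class NaiveRpcClient {
    // Invented stat names, purely for illustration.
    static final String[] STATS = { "cpu.load", "mem.used", "disk.free", "net.rx", "net.tx" };

    // Nothing in the RPC paradigm discourages this shape: five stats, five calls.
    public static Map<String, Long> collect(ManagementService server) {
        Map<String, Long> result = new LinkedHashMap<>();
        for (String name : STATS) {
            result.put(name, server.getStat(name)); // one network round trip each
        }
        return result;
    }
}
```

Because each getStat looks like a cheap local call, the N round trips are invisible at the call site, which is exactly the trap.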

With a REST approach, one almost has to go out of their way to do something like that.

What I hear most companies say is that they don’t have a team of PhDs with uber skills – they have recent grads, outsourced labor, community college developers, and so forth, and they need to do things more easily, faster, and with lower after-sales service costs. Time to market, ROI, and so forth – the things we as developers dismiss as generic business bullplop – are still very real concerns to companies. The thing is, Java developers are cheap and abundant, and because RPC fits the Java programming model, it’s viewed as the simplest and safest approach. But I believe that view is mainly laziness from folks unwilling to deep-dive into a new way of doing things because the current way gets things done (eventually). Just as I won’t move to emacs because I’ve already learned vi, and I can’t play Guitar Hero because I play guitar in real life: I’m far too lazy to retrain the fingers or the brain.

There is a subset of applications for which RPC might indeed be the right approach, but more and more we’re realizing these are edge cases.

Imposing limits is often better than exposing everything. Convention over Configuration, or something like that.

Perhaps you should come out of hiding and submit an article on the topic to Stefan and InfoQ.

It is so much easier to comment on and offer advice to others who HAVE written something. :)

With a REST approach, one almost has to go out of their way to do something like that.

Why? Can you elaborate? I would think that in the example given there are certainly more requests taking place in the RESTful approach.

Clearly, if the number of requests is very small (say, 1) and latency is very high, you would use RPC. This is a reason to use RPC (or any other approach, just as long as you get the properties that you want). For example, see Steve Jones’s example here. It is a little overdone, but it makes the point that I am trying to make, viz.: lay down your requirements (say, the properties you are interested in), and then choose the style that best gives you those properties.

However, in real-world implementations, the RPC style makes composition of data easy because it maps to native types or composites of native types. With REST, the data is typically more expressive (a side effect of generic operations) and cannot as easily be mapped to a native type of the underlying language – perhaps a string? But manipulating it is still a pain. So rather than making a lot of finer-grained calls and then parsing the DOMs (or whatever the return might be), which becomes a chore, they revert to doing it once in a coarse-grained call:

res://process/hogs

This is not part of the theoretical REST definition, of course – nothing prevents someone from creating fine-grained calls in REST – but coarser granularity becomes a positive side effect of not mapping the remote calls directly to the programming language.
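For contrast, a minimal sketch of the coarse-grained side – one GET, one client-side parse – where the XML shape, element names, and values are all invented for illustration:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class CoarseGrainedClient {
    // Stands in for a single GET on res://process/hogs; the XML shape is invented.
    static String fetchHogs() {
        return "<hogs>"
             + "<process pid='101' cpu='72'/>"
             + "<process pid='202' cpu='21'/>"
             + "<process pid='303' cpu='4'/>"
             + "</hogs>";
    }

    // One round trip, then one parse of the whole representation.
    public static int countHogs() {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            fetchHogs().getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName("process").getLength();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Here the granularity decision lives in the resource design rather than in method signatures, which is the side effect described above.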

But what our shy Anonymous chap or chapette intuitively did after reading all the REST media was to break it down into fine-grained calls.

Only time will tell, but many people taking naive approaches to REST will start to turn what used to be server I/O into network traffic.

Whether this is controlled by clients making judicious calls to the resource, or by the server itself filtering “noisy” URIs and simply not fielding them, we’ll need to do it.

Otherwise, REST will get a bad reputation amongst the uninitiated. The good news is that even if this occurs and people get burned initially, the solution is fairly obvious once you think about it, although the time it takes to translate that thought into reality can vary.

I think it is worth talking about this risk, to make the new wave of REST adopters aware of it. Even Mr. Tilkov’s anti-patterns article on InfoQ emphasizes caching and not re-requesting resources, but it doesn’t mention the risk of over-granulation resulting in saturation.

But manipulating it is still a pain. So rather than making a lot of finer-grained calls and then parsing the DOMs (or whatever the return might be), which becomes a chore, they revert to doing it once in a coarse-grained call:

Wow! That must be the worst argument ever!

So tomorrow, when we get better toolkits for REST, would REST become wrong too?? Cos parsing the DOM won’t be a pain anymore …
As a side point, if the web service returns XML, you have to parse it .. how does it matter whether you are using WS-* or REST? By your argument, the people using WS-* would use the network the least, cos parsing WS-shit is a pain.

A quick hint on what someone might actually do:

/hogs/1 or /hogs.top3 cos you usually need only those numbers.

/processes/hogs would probably only return a list of links (to, say, /hogs/1 and /hogs/2), possibly with the % usage of each process.
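Under that suggestion, a GET on /processes/hogs might return something like the following; this representation is purely hypothetical:

```xml
<!-- invented sketch of GET /processes/hogs: a list of links,
     each pointing at a finer-grained resource -->
<hogs>
  <hog href="/hogs/1" cpu="72"/>
  <hog href="/hogs/2" cpu="21"/>
  <hog href="/hogs/3" cpu="4"/>
</hogs>
```

A client interested only in the worst offender follows the first link; one that wants everything already has the percentages in hand.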

Either you are talking through your HAT or you aren’t able to transfer what you mean into words ….

did after reading all the REST media was to break it down into fine-grained calls.

The good news is that even if this occurs and people get burned initially, the solution is fairly obvious once you think about it, although the time it takes to translate that thought into reality can vary.

What the hell are you talking about!?!?

If you actually read Tilkov’s anti-patterns … he just mentions caching, with the very apparent point that you don’t need to care about it … it should ideally all be handled by the network stack / squid server / whatever. The only thing I can imagine the client doing is checking the value that tells it how long the representation is fresh for .. but that could also be done by the stack.

mention the risk of over granulation resulting in saturation.

hehehe .. I am really scared of asking you … but what does this mean? “Over granulation” I might be able to wrap my head around, but what’s saturation??

I am going to quote Steve Jones, since my English is apparently too sexy for your mind:

“Fine grained interactions across a network are a bad thing(tm) and as REST encourages traversal from resource to resource (which can be pretty fine grained things) then it means that a greater degree of network traffic is likely, add in the GET first before POST and it increases further.

Now add in the “GET idempotent” thing and you can easily imagine people who will map the resource client side with each request hitting the server, after all there is no reason (beyond common sense) not to.

My point is that muppetism in REST will be worse than muppetism in RPC.”

Additionally, you assume everybody using RESTian architectures has a Squid Cache or a content delivery network like Akamai in the middle to field repeat requests. You also assume that because content is cached, it doesn’t have a network cost.

Thousands of clients making indiscriminate requests will potentially cause saturation of the network. If you don’t like the term “network saturation,” I’ll replace it with “cost” to make you feel better.

ooooo! wow!!! soooo sexy .. I am getting a !@@!# just thinking about it ….

Although you started off with an idiotic comment like the one above, you got better later .. the quote from Steve Jones corrects things a little …

thanks for that .. I guess you could say it the way you said it .. but people like me won’t understand it ..

you assume everybody using RESTian architectures has a Squid Cache

when did I say that ???

The whole point of putting stuff like this into the goddamn protocol is that it makes it uniform … and if it is so uniform, then you can ask the network stack to take care of it … the network stack is on your computer too … that is a major point of Fielding’s thesis: “the best performance of network-based architectures is by those that don’t use the network” (or words to that effect .. I really shouldn’t use the quotes).
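The uniformity being described is visible in plain HTTP: because caching directives live in the protocol itself, any intermediary (or the local stack) can honor them without knowing anything about the application. A sketch of a conditional GET, with hypothetical resource and validator values:

```
GET /hogs/1 HTTP/1.1
Host: example.org
If-None-Match: "v42"

HTTP/1.1 304 Not Modified
ETag: "v42"
Cache-Control: max-age=60
```

The 304 carries no body, so a repeat request that would otherwise re-fetch the representation costs only headers.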

I have no intention of discussing this further with you on Vinoski’s blog, especially since you have your skirt on. ;)

Dion wrote: “I wonder if these same folk defending RPC will defend Java soon enough? “Java is great for concurrency – it has a synchronized keyword and a threading API!” :) And when their team of 20 is writing 2m lines and debugging race conditions, and my 2 great Erlang guys are proof-reading their equivalent 2500 line implementation, will they defend Java still?”

Sorry to go off on a tangent defending Java, but I think you also missed that Java has a well-defined memory model to go along with the synchronized keyword and threading API. Furthermore, Java has been going strong for 15 years now, and any web-based Java application is multi-threaded and concurrent. This idea that the “multi-core revolution” is going to change anything is a complete and utter myth. People have been running Java (and other languages) on 2-, 4-, 8-, 16-, 32-, and 64-CPU machines for a long time now, very successfully. No, the REAL revolution will be the I/O one, when SSD drives become cheap, our I/O speeds go up 10, 100, 1000 times, and the line between RAM and persistent storage becomes very, very blurry.
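For what it’s worth, the combination of the synchronized keyword, the threading API, and the memory model’s visibility guarantee is easy to sketch; this minimal example is invented, not anything from the column:

```java
// Minimal sketch: synchronized gives both mutual exclusion and, via the
// Java Memory Model, visibility of writes across threads.
public class SafeCounter {
    private long count = 0;

    public synchronized void increment() { count++; } // unlock publishes the write
    public synchronized long get() { return count; }  // lock sees prior increments

    // Spin up several threads, each incrementing the shared counter.
    public static long run(int threads, int perThread) {
        SafeCounter c = new SafeCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return c.get();
    }
}
```

The unlock at the end of each synchronized method happens-before the next lock acquisition, so get() is guaranteed to observe every completed increment.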

@Steve:

Although there are no frameworks out there that hide hypermedia (links) in the language, the pieces are out there to make this a reality. For example, tools like Hibernate have all the metadata and hooks available to automatically insert links into a marshalled document. Tools like Flex have cool features like automatically versioned objects: when you send one across the wire as a DTO, you automatically get change-set information. If somebody starts combining the ideas in all these frameworks, we could get there.

Also, Steve, I wish you would curb your comments about annotations. I get your point that just slapping @WebService on a Java interface is lame, but one might think you are against annotations in general. For instance, I think the JAX-RS specification is doing some cool work on providing annotations for REST, and it shows some of the true power of annotations.
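As a rough sketch of what JAX-RS annotations look like (this assumes the javax.ws.rs API is on the classpath; the resource class, paths, and representations are invented and not runnable on their own):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Sketch only: annotations drive dispatch, so the class carries no RPC-style
// remote interface; paths and media types are declared, not mapped to stubs.
@Path("/hogs")
public class HogResource {

    @GET
    @Produces("application/xml")
    public String list() {
        return "<hogs>...</hogs>"; // representation of the whole collection
    }

    @GET
    @Path("{id}")
    @Produces("application/xml")
    public String get(@PathParam("id") int id) {
        return "<hog id='" + id + "'/>"; // one fine-grained resource
    }
}
```

Note that the annotations describe resources and representations rather than pretending a remote call is a local method, which is the distinction the column draws.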

As for the elitism: I agree with Steve J. that 10% of developers do 90% of the work. I’m currently lucky because open source provides a nice litmus test for incoming employees, and this ratio is much more even, but if I ever had to go back to a traditional company, I’d probably focus on hiring the very few and firing the 90%.