Three Levels of the REST Maturity Model

In his new article, Martin Fowler uses the three-level model of RESTful maturity developed by Leonard Richardson to explain web-style systems. Throughout his explanation, Fowler uses the example of a service for booking a doctor's appointment.

According to Fowler, the starting point for the maturity model is to use HTTP purely as a transport system for remote interactions. In this case there is a single service - an appointment service - which uses a single method call (POST in his example) with XML input/output to communicate specific requests and replies.

Finding available doctor’s appointments, in this case, will require a request:
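The request itself is not reproduced above. As a rough sketch of the Level 0 style (the endpoint name, the openSlotRequest element, and the doctor identifier are all illustrative assumptions, not necessarily Fowler's exact wording), every intent is tunnelled through one POST endpoint with an XML body:

```python
# Level 0 sketch: one endpoint, every intent tunnelled through POST with XML bodies.
# Endpoint and element names are illustrative, not Fowler's exact example.
import xml.etree.ElementTree as ET

def open_slot_request(doctor: str, date: str) -> bytes:
    """Build the XML body POSTed to the single appointment-service endpoint."""
    req = ET.Element("openSlotRequest", doctor=doctor, date=date)
    return ET.tostring(req)

# Every interaction, whatever its intent, goes to the same URI:
ENDPOINT = "/appointmentService"
body = open_slot_request("mjones", "2010-01-04")
print(ENDPOINT, body.decode())
```

The point is that the intent of the call lives entirely inside the XML document; HTTP itself only sees an opaque POST to one address.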

The transition to REST (Level 1) starts with resources. Rather than making all our requests to a single service endpoint, in REST we deal with individual resources. In the earlier example those are doctors and appointment slots.

Working with resources allows us to partition requests: every resource supports a certain set of functionality. It also simplifies requests - referencing a specific resource makes some of the request information implicit. According to Fowler:

To an object guy like me this is like the notion of object identity. Rather than calling some function in the ether and passing arguments, we call a method on one particular object providing arguments for the other information.

In this case an initial query will be to a given doctor resource (the doctor’s name is not in the request anymore, but in the resource definition):
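The query itself is omitted above. A minimal sketch of the Level 1 idea, with hypothetical URI templates, shows how the doctor's name moves out of the request body and into the resource path:

```python
# Level 1 sketch: each doctor and each slot is its own resource with its own URI.
# URI templates are illustrative assumptions.
def doctor_uri(doctor_id: str) -> str:
    # The doctor is identified by the resource path itself,
    # so the name no longer needs to appear in the request body.
    return f"/doctors/{doctor_id}"

def slot_uri(doctor_id: str, slot_id: str) -> str:
    # A slot is addressed relative to its doctor.
    return f"/doctors/{doctor_id}/slots/{slot_id}"

print(doctor_uri("mjones"))
print(slot_uri("mjones", "1234"))
```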

While Level 1 is aimed mostly at system decomposition, Level 2 is about the HTTP verbs used for calls - it aims to use the HTTP verbs as closely as possible to the way they are used in HTTP itself.

At Level 2, the use of GET for a [query] request... is crucial. HTTP defines GET as a safe operation, that is it doesn't make any significant changes to the state of anything. This allows us to invoke GETs safely any number of times in any order and get the same results each time. An important consequence of this is that it allows any participant in the routing of requests to use caching, which is a key element in making the web perform as well as it does. HTTP includes various measures to support caching, which can be used by all participants in the communication. By following the rules of HTTP we're able to take advantage of that capability.

... use of an HTTP response code to indicate [request outcome]... Rather than using a return code of 200 but including an error response, at level 2 we explicitly use some kind of error response like this. It's up to the protocol designer to decide what codes to use, but there should be a non-2xx response if an error crops up. Level 2 introduces using HTTP verbs and HTTP response codes... The key elements that are supported by the existence of the web are the strong separation between safe (eg GET) and non-safe operations, together with using status codes to help communicate the kinds of errors you run into.
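The quoted behavior can be sketched as a tiny dispatcher (the resource names and slot data here are hypothetical): GET stays safe and repeatable, while errors surface as non-2xx status codes rather than a 200 wrapping an error body:

```python
# Level 2 sketch: HTTP verbs carry intent, status codes carry the outcome.
# Resource names and slot data are illustrative.
slots = {"1234": {"booked": False}}

def handle(method: str, slot_id: str):
    """Return (status_code, body) for a request against a slot resource."""
    slot = slots.get(slot_id)
    if slot is None:
        return 404, "no such slot"             # error surfaces as a non-2xx code
    if method == "GET":
        return 200, slot                       # safe: repeatable and cacheable
    if method == "POST":
        if slot["booked"]:
            return 409, "slot already booked"  # conflict, not a 200 with an error body
        slot["booked"] = True
        return 201, "appointment created"
    return 405, "method not allowed"

print(handle("GET", "1234"))
print(handle("POST", "1234"))
print(handle("POST", "1234"))   # second booking attempt fails with 409
```

Because GET never mutates the slot table, any intermediary between client and server is free to cache its result; the mutating POST is the only call that must reach the origin server.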

He also notes that Level 2 exposes some inconsistencies:

REST advocates talk about using all the HTTP verbs. They also justify their approach by saying that REST is attempting to learn from the practical success of the web. But the world-wide web doesn't use PUT or DELETE much in practice. There are sensible reasons for using PUT and DELETE more, but the existence proof of the web isn't one of them.

Finally, Level 3 introduces what is often referred to as HATEOAS (Hypertext As The Engine Of Application State). Since every resource in REST has its own URI, instead of returning a resource ID and letting the consumer calculate the URI itself, a Level 3 response directly returns the URI of the resource, which can be used for the next operation:

The point of hypermedia controls is that they tell us what we can do next, and the URI of the resource we need to manipulate to do it. Rather than us having to know where to post our appointment request, the hypermedia controls in the response tell us how to do it.... One obvious benefit of hypermedia controls is that it allows the server to change its URI scheme without breaking clients... A further benefit is that it helps client developers explore the protocol. The links give client developers a hint as to what may be possible next.
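A minimal sketch of such a hypermedia response (the link relations and URI scheme are assumptions for illustration): the client follows the embedded links instead of computing URIs itself, so the server is free to change its URI scheme later:

```python
# Level 3 sketch: the response embeds hypermedia controls telling the client
# what it can do next. Link relations and URIs are illustrative.
def slot_response(doctor_id: str, slot_id: str) -> dict:
    base = f"/doctors/{doctor_id}/slots/{slot_id}"
    return {
        "slot": slot_id,
        "doctor": doctor_id,
        "links": [
            # the client follows these rather than constructing URIs itself
            {"rel": "book",   "uri": base},
            {"rel": "cancel", "uri": base},
        ],
    }

resp = slot_response("mjones", "1234")
for link in resp["links"]:
    print(link["rel"], "->", link["uri"])
```

Note that the "book" and "cancel" relations here share a URI and would differ only in HTTP verb - exactly the ambiguity one of the comments below raises.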

The model presented by Fowler defines three design techniques (levels) for REST services:

Level 1 tackles the question of handling complexity by using divide and conquer, breaking a large service endpoint down into multiple resources.

Level 2 introduces a standard set of verbs so that we handle similar situations in the same way, removing unnecessary variation.

Level 3 introduces discoverability, providing a way of making a protocol more self-documenting.

Level 3

Does attaining level 3 and adopting HATEOAS conflict with the desire to publish WADL or using WSDL 2.0 to describe REST services? I.e. should I expect my clients to discover additional services naturally by way of my other services?

Level 3 example

In the Martin Fowler's article, section about Level 3 there is an example where two hypermedia controls point to the same URI but in fact should use different HTTP verbs - would it be better to include verb/method as part of that control?! Since the same URI can do both it sounds a bit unsafe to do not provide such information, client application can by mistake perform different operation.

GET and caching

Since you cannot cache the result of free doctor's slots (as appointments are being made at random times), the use of GET with the reasoning of cacheability in the example doesn't seem to be an appropriate choice.

Re: Level 3 example

Ok, so what you are saying is that you should not return duplicate URLs, correct? I was thinking that since you return a link to a given resource, and that resource has defined operations, giving them as part of the link could be a good way of informing the client what to invoke and how. Otherwise the consumer could invoke PUT (update) instead of DELETE (cancel). Perhaps I am going too far with this... but I just wanted to check your opinion.

Re: GET and caching

Since you cannot cache the result of free doctor's slots (as appointments are being made at random times), the use of GET with the reasoning of cacheability in the example doesn't seem to be an appropriate choice.

Depends. You can control cacheability by using standard caching headers (i.e. "never cache" or "cache for 120 seconds"). Using Fowler's example, we could cache a randomly made appointment slot (say slot 1234) for 120 seconds, both in the user's browser and in an appliance internal to the good doctor's network (making the cache available not only to the user who first made the query but to other potential patients).
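The standard caching headers this comment refers to are HTTP's Cache-Control directives. A small sketch of the two cases mentioned ("never cache" and "cache for 120 seconds"), with header values following the HTTP caching specification:

```python
# Sketch of the standard Cache-Control response headers mentioned above.
never_cache = {"Cache-Control": "no-store"}                 # "never cache"
short_cache = {"Cache-Control": "public, max-age=120"}      # shared caches may keep it 120 s

print(never_cache["Cache-Control"])
print(short_cache["Cache-Control"])
```

With "public, max-age=120", both the browser and any shared intermediary (such as the appliance in the doctor's network) may serve the cached slot listing for up to two minutes without contacting the origin server.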

Whoever requests slot 1234 first gets it, with the other patients getting a 3xx code (e.g. 301 - Moved Permanently) or a 4xx code (e.g. 410 - Gone), depending on how we'd like to treat such a case, with the response containing the same message one would get when issuing a GET on the good doctor's available slots for that day.

This is pretty much the same situation one would get with X patients asking for available slots and then Y out of X concurrently requesting the same one. The situation, problem, and solution remain the same independently of caching.

However, with short-duration caching we provide a slight performance improvement. Plus, caching is something that is (or should be) very easy to implement. In fact, in many cases it can be done upfront at the HTTP server with no code changes to the application at all.

In the end, the decision to cache or not to cache the current state of a resource depends on the specifics of the application. It might be deemed (in the example case) better not to cache anything at all. But that does not necessarily invalidate the reasoning behind GETs as facilitators of caching.

There is a "necessary but not sufficient" relationship between the usage of GETs and the viability of caching. Edge cases in the requirements of a specific application indicate when not to use caching. But GETs draw a line in the sand on where caching can be applied, if at all.

Re: Level 3

While WADL may not conflict with HATEOAS, I still think searchable, human-readable documentation is 100 times more important than any definition language. WADL, if provided, should be an option, not a requirement; otherwise we're just heading down the WS-* path, IMO.

Re: GET and caching

It would seem that with a short cache period like 120 seconds there is quite a low cache hit ratio to be expected. And at the same time, when you do get the result from the cache, you stand a good chance of trying to book a slot that has already been booked by someone else, which is not a nice interface model, since the aim should be to minimise the failure rate of booking a slot.

As an illustration of GET and caching, the chosen example is not a really good choice. An example where GET with caching would make more sense could be: getting the doctors on duty at the hospital this week.

Need to be at Level 3 to be REST? My Levels.

Ok. This is a nice baby-step definition of the actual use of the web for APIs. Level 1 and Level 2 are Web API maturity levels. My understanding is that those are not REST yet, since they are missing some important REST constraints. Even Level 3, which includes the dreaded HATEOAS, is still missing some other considerations. Finally, all levels are implementation oriented. We have no business, organizational, or architectural considerations in the mix.

What about this set:

Level 1. Regular use of web technologies to achieve RPC or to expose an internal application to the web. May be REST by coincidence.

Level 2. Understanding of REST as a state-machine-driven architecture, suitable for transferring large hypermedia documents in a networked system. Designing applications and APIs under REST constraints. Meaning: REST because of REST.

Level 3. Understanding of REST constraints at the architecture level. Architecting the integration of internal systems and APIs. Deciding that not all applications may fit REST. REST as an IT strategy.

Level 4. Understanding REST's position in the business architecture. Planning enterprise architecture with REST as one option, not the Holy Grail. REST as a business strategy.

Re: Level 3

Hello Craig. Boris's and Bill's are good points.

First: in REST, the application flow is a state engine. You go from one state to the next following the events and operations available at each state. That is simple.

HATEOAS means you use hypermedia to define that engine, using links as the operations you follow to get to the next state.

WSDL is an XML document. Service consumption is clearly a state sequence: first, looking for services; next, reading service definitions; then consuming a service by sending a message; then waiting for a response (optional); then processing the response (received as a message).

For HATEOAS, you need a document with links (hypermedia) to show you the way to the next state. Think of WSDL as that document. See? Those are not contradictory things. Still, you can use WSDL in a non-REST way. But that is another story.