Chapter 17: REST Primer

REST, which stands for Representational State Transfer, is not a protocol. Born out of a chapter of the Ph.D. dissertation of Roy Fielding, one of the original architects of the Web, it is more a description of how the HTTP protocol was meant to be used. REST has been given a lot of attention recently, especially in the Rails community, which has thrown its support behind REST with ActiveResource. Rails developers can be expected to, by and large, use ActiveResource, simply because it is there. But there are a number of issues that application developers should be aware of before jumping on the bandwagon.

In this chapter, we’ll introduce “textbook” REST. We’ll then contrast this style of REST with what most people mean when they say REST or RESTful. Then we’ll go over some of the issues you should be aware of when choosing to create REST interfaces. The first concern is with the way ActiveResource encourages you to create services based on database tables; this problem is avoidable but becoming endemic. The next concern is with integration; because REST is a convention – and one no one agrees upon yet – integration with external parties can be a challenge compared with the relative ease of XML-RPC services.

REST Basics

To understand the problems REST faces, and the problems you may face if you adopt REST for your service architecture, first we must go back to the theory of REST and its original goals. Only then can we understand the challenges faced in creating REST services today, and come up with the creative solutions to meet those challenges.

Resources and Verbs

Unlike XML-RPC, in which the basic unit is a procedure that acts on data maintained on the server, REST is about resources. In REST, a resource might be a web page with the uniform resource locator (URL) like http://foo.com/doc.html. With the resource in place, the next aspect of REST is verbs that act upon the resources. The HTTP specification defines four verbs that can be performed on a URL. They are depicted in Figure 15-1. The first is PUT, which allows the caller to store a web page at the location specified by the URL. The second is GET, which allows the web page, or resource, to be retrieved later. The third is POST, which is somewhat open-ended, but in general allows the resource to be updated in some way, perhaps with a new version. The fourth and final HTTP verb, DELETE, instructs the server to discard the web page.

In REST, the universe of resources is limitless. The universe of verbs, on the other hand, is fixed. The REST principles require you to think of your problems in terms of data elements, and how you might transition the state of each element one by one in order to accomplish some task. A by-product of the restrictive verb set is that actions must take place somewhere other than on the server. In general, the actions take place on the client. To increment a counter stored on a server, you first GET the counter value. Then you locally increment its value. Finally, you POST back the new value to the server at the counter’s resource URL.

Figure 15-1. REST and the four HTTP verbs
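The counter round trip can be sketched with Ruby’s stdlib Net::HTTP. The counter URL and its plain-text representation are assumptions for illustration, not a real service:

```ruby
require 'net/http'
require 'uri'

# Hypothetical counter resource (the URL is an assumption for illustration).
COUNTER_URI = URI('http://example.com/counters/hits')

# The client-side step: compute the next state from the GET response body.
def next_counter_value(body)
  (Integer(body.strip) + 1).to_s
end

# The full REST round trip: GET the current value, increment it locally,
# then POST the new representation back to the same resource URL.
def increment_counter(uri = COUNTER_URI)
  current = Net::HTTP.get(uri)                     # GET the resource
  Net::HTTP.post(uri, next_counter_value(current), # POST the new state back
                 'Content-Type' => 'text/plain')
end
```

Note that the state transition itself (the increment) happens on the client; the server only stores and serves representations.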

Why is this a good thing? In part, the verb set and its anticipated uses are a historical matter. When the Web was born, it was not about commerce, nor were there many complex procedural transactions. The Web was largely used for exchanging information between different government and academic bodies. When commerce applications began to appear, those applications weren’t like the ones seen today, such as eBay, Orbitz, or even Google’s advertising market. Rather, the commerce available in the early days of the Web was often nothing more than a web page, possibly with a few images, and a phone number to call to make an actual transaction with a human. The Web was a collection of content embodied in HTML documents.

Mosaic, one of the first web browsers, adhered to the set of four HTTP verbs. When you browsed a web page, you could actually edit the text of the page directly in your browser. If you had the appropriate permissions, you could “save” the page, generating a POST, which stored the newly edited web page on the server. If you think of the Web as a participatory marketplace of ideas, as it was certainly in the eyes of its creators, this interface, plus some basic permissioning, was all that was needed. No complicated HTML forms were necessary for editing or uploading new content. Talk about a content management system!

Sadly, the “REST-ness” of browsers was soon lost as the Web became, for a long time, more of a spectator sport, where websites were “published” and browsers “watched.” As a result, today’s breed of browsers supports only half of the original HTTP specification’s set of verbs: POST and GET. By convention, we now use GET when no server-side state change is expected (like viewing information about a movie), and POST when new information is to be recorded somewhere (like when placing an order) or information is to change in some other way.

Hardware Is Part of the Application

Because there is nothing more to using REST than using the HTTP specification itself, hardware that understands HTTP can participate in the server architecture transparently. For example, a caching proxy that understands the HTTP “Expires” header can distribute a web page to clients for as long as that page is still considered fresh, reducing load on the back-end server. Figure 15-2 illustrates this behavior. A document must first be generated by the server and sent through the caching proxy, but then the same document can be sent directly from the caching proxy for each subsequent request.

Figure 15-2. REST with a caching proxy

It’s REST’s property of many endpoint URLs, one per resource, that facilitates caching via an intermediary piece of hardware, because each URL represents only a single piece of data. Contrast that with an XML-RPC interface, where a single endpoint URL defines the entire service, and the methods and arguments—such as getMovie(5)—are passed along as parameters of a POST request. In the case of XML-RPC, you can’t use a dumb piece of hardware like a caching proxy to speed up your application. On the other hand, do you really want to?

The “free” caching behavior of REST is great if you’re serving up lots of static content, but not so good if your data or its availability changes over time. The trade-off here is that the server has no way of expiring the document before the originally set expiry. Even if the document becomes invalid, the caching proxy continues to serve it until the natural expiration time passes. In the traditional SOA world, a server-side cache would be shared among a number of application servers (just like Memcache) and the application can flush items from the cache whenever it makes sense to do so. This type of scheme is described in Chapter 19.
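The decision the proxy makes can be sketched in a few lines of stdlib Ruby: a cached copy may be served as long as the response’s Expires timestamp is still in the future. This is a sketch of the freshness check only; a real proxy also honors Cache-Control and other headers.

```ruby
require 'time'

# Returns true if a cached response with the given Expires header may still
# be served without consulting the origin server. Once this returns false,
# the proxy must fetch a fresh copy -- and until then, the origin server has
# no way to force the cached copy out.
def fresh?(expires_header, now = Time.now)
  now < Time.httpdate(expires_header)
end
```

The one-way nature of this check is exactly the trade-off described above: the origin server sets the expiry once, and cannot revoke it early.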

Mapping REST to SOA

With a basic understanding of the underpinnings of REST, we are now ready to discuss REST in the context of a service-oriented architecture. A good place to start is with a cautionary note from Roy Fielding himself, who wrote the following in his dissertation:

The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.

REST is great for the types of large-grained content users are accustomed to seeing on the Web: HTML web pages, PDF documents, images, etc. In fact, you can’t help but use REST when you request these documents; users of the Web use REST every day, whenever they request web pages. What REST isn’t great for is the context in which it has recently gotten so much attention, namely, mapping REST to database rows. Indeed, this is how ActiveResource, the Ruby on Rails implementation of REST, is being marketed: as an easy way to add a web-service interface atop ActiveRecord CRUD.

Mapping to CRUD

Although it is not generally desirable to do so, the four main HTTP verbs can be mapped to CRUD, as shown in Table 15-1. A create maps to an HTTP PUT, which translates to a SQL insert command. A read maps to an HTTP GET, which translates to a SQL select command. An update maps to an HTTP POST, which in turn translates to a SQL update command. Finally, a delete maps to an HTTP DELETE, which translates to a SQL delete statement.

Table 15-1. Mapping CRUD to REST and SQL

CRUD      REST      SQL
Create    PUT       insert
Read      GET       select
Update    POST      update
Delete    DELETE    delete

It’s tempting to map a REST interface directly atop each database table, all the more so since Rails provides generators to automatically create code that does just that. What’s missing from REST is the ability to modify more than one record at a time. Although sometimes you may only be working with a single row in a database, more often you need to update a number of rows. For example, when placing an order for movie tickets, you may need to insert a row for the order, plus individual rows for each ticket line item in the order. It is still possible to accomplish this with CRUD-mapped REST. To do so, treat each row in each table as its own resource. The tradeoff is performance. For an order of n tickets, you need to make n + 1 requests to your REST-based service for all the inserts. More caution from Fielding:

The disadvantage is that [REST] may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context. In addition, placing the application state on the client-side reduces the server’s control over consistent application behavior, since the application becomes dependent on the correct implementation of semantics across multiple client versions.
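To make the per-interaction overhead concrete, here is a sketch of the request sequence a row-per-resource design forces on the client when placing the ticket order described above. The paths are illustrative, not a real API; the point is only the count: n tickets cost n + 1 requests.

```ruby
# Build the sequence of HTTP requests needed to place an order of n tickets
# when every database row is exposed as its own REST resource.
def order_request_sequence(order_id, ticket_ids)
  requests = [[:put, "/orders/#{order_id}"]]  # insert the order row
  ticket_ids.each do |ticket_id|
    # one additional request per line-item row
    requests << [:put, "/orders/#{order_id}/tickets/#{ticket_id}"]
  end
  requests
end
```

Each of those requests carries its own headers and connection overhead, and, as discussed next, nothing ties them together into a single transaction.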

Worse, you have no transaction support. If an insert or update fails, there is no easy way to roll back the SQL statements that were already committed one by one in earlier REST actions. With pure REST, application logic that belongs in the back-end—where a relational database provides a great many benefits for data integrity—is suddenly moved to the client. A multi-step business process that sensibly can be abstracted with a single method must be implemented step by step on the client. In these cases, a resource-based approach can become extremely fragile. Fielding talked about this, too:

…information needs to be moved from the location where it is stored to the location where it will be used by, in most cases, a human reader. This is unlike many other distributed processing paradigms, where it is possible, and usually more efficient, to move the “processing agent” (e.g., mobile code, stored procedure, search expression, etc.) to the data rather than move the data to the processor.

As with our XML-RPC services, we need to repeat the epiphany that ActiveRecord classes are database configuration files, and they generally do not map to the structure or size of objects we would want to work with within our application. Once we make this leap, the criticisms above disappear. Placing an order in our XML-RPC service required creating a number of records on the service side, but it only required a single XML-RPC request. This allowed all of the SQL insert statements to be wrapped in a transaction. The same would be true of a RESTful interface if the grain of the objects was large enough. In fact, it should be the same grain as the Logical::Order class we defined in Chapter 17.

Essentially, this greatly reduces the differences between REST and XML-RPC. The ActiveRecord models are the same. The logical models are the same. The difference is whether the object you’re operating on and the operation you want to perform on it are encoded in a single token – a method name such as get_movie or place_order – or split between the method identifier – the URL – and the HTTP verb. Viewed at this level, it becomes a question of syntax. Even the controllers (ActionController for RESTful services, and service models for XML-RPC services) could be essentially identical.
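The syntactic difference can be laid out side by side. The method name, URL, and payload here are illustrative, not from a real API:

```ruby
order = { movie_id: 5, seats: 2 }

# XML-RPC style: object and operation are encoded together in one token
# (the method name), always POSTed to a single endpoint URL.
xmlrpc_request = {
  url:    '/rpc',
  verb:   :post,
  method: 'place_order',
  params: [order]
}

# RESTful style: the object is named by the URL, and the operation is
# carried by the HTTP verb.
rest_request = {
  url:  '/orders',
  verb: :post,
  body: order
}
```

Both requests describe the same logical operation on the same logical model; only the encoding differs.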

Different Clients, One Interface

The second benefit of REST’s single-URL-per-resource approach is that machine clients, as well as human clients using a web browser, can access a REST service. Although there are JavaScript XML-RPC implementations, you need to write a JavaScript application that consumes the service before you can use it in your web browser. With REST, you can point your web browser directly at a resource URL to access it. This is facilitated by the Accept header: a machine client may specify that it accepts XML responses only, while a browser client would specify that it accepts XHTML.

Unfortunately, REST contends with two problems here. The first is that different clients have differing levels of support for REST (Figure 15-3). As already noted, browsers support only POST and GET. So a REST service intended to serve different types of clients must be “dumbed down” for the lowest common denominator, the browser. This is ActiveResource’s approach.

Figure 15-3. Common clients have differing levels of support for REST verbs

Second, there is no single convention for how clients specify a return type, either. While the Accept header is how return formats should be specified, Rails has taken a different tack: you specify which return type you want by appending an extension to the URL. A browser client requests a resource with .html appended, while a machine client would append .xml.
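The two conventions can be sketched in plain Ruby. This is an illustration of the idea only, not actual Rails routing internals:

```ruby
require 'uri'

# Rails convention: the desired format rides on the URL as an extension,
# e.g. /movies/5.xml versus /movies/5.html.
def format_from_url(path, default = 'html')
  ext = File.extname(URI(path).path).delete_prefix('.')
  ext.empty? ? default : ext
end

# HTTP convention: the desired format comes from the Accept header.
# (Real content negotiation is more elaborate; this checks one media type.)
def format_from_accept(accept_header, default = 'html')
  accept_header.to_s.include?('application/xml') ? 'xml' : default
end
```

A machine client and a browser hitting the same resource thus get different representations, by either mechanism.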

The REST ideal is to have a single, uniform interface for browser and machine clients, but it pays to be pragmatic as the designer of a service-oriented architecture or public-facing web service. There is not really a benefit to tightly coupling the HTML web pages associated with a user interface and its human-oriented workflow with an API intended for consumption by programmers utilizing your service. One interface is for manipulating the resources that underlie your application. The other is for creating a user experience.

Although in your first iteration of your website and service design, you may be able to construct an API that satisfies both sets of customers—and hopefully without much sacrifice to either—in your second iteration, you may not be so lucky. When your company’s product team comes up with a completely new perspective on how information is to be delivered to visitors to your site, what do you do with the machine side of the API that thousands of people have come to depend on? Do you force those clients, whose applications were operating perfectly well based on the old machine API, and independent of your user interface, to conform to a new API simply because your user interface has changed? Or do you start supporting what essentially amounts to two API sets anyway, one for humans and their browsers and one for machine clients?

Rather than whittle at an API until it works for both humans and machines, it is often sensible to write for each separately from the beginning. When you design this way, you don’t have to worry about breaking backward-compatibility for your machine clients when you change your user interface. Also, if you forego pursuing the purist ideal of one interface for multiple client types, you can be truer to the original ideas of REST where they are attainable. You can design a machine API that uses all four HTTP verbs where they are appropriate, and your browser-based “API”—a.k.a. website—can evolve as necessary to suit your ever-changing application and user needs. Remember that when you write an HTTP interface, you are writing a REST service, even if that service is not well-suited for machine clients.

One notable exception is a JavaScript client. JavaScript clients operate within your web browser, and they can make AJAX requests back to your web service. The standard way that Rails interfaces with AJAX requests is with .rjs templates that render chunks of HTML to be placed in an existing page – either prepended to, appended to, or replacing an existing element. Even though this seems RESTful, because small pieces of data are being requested rather than entire web pages, it really is not. The application server is still very tightly coupled to the HTML user experience, and is unlikely to be useful as a generic interface for other machine clients.

HTTP+POX

In much of this chapter, I’ve talked about the numerous challenges REST faces in gaining adoption in the enterprise world for service-oriented architecture applications. These challenges begin with the strictness of the four verbs and the requirement that resources be transferred to the client for piecemeal processing. Further challenges ensue with purist REST due to the lack of support for the full four-verb set in browsers. Finally, the lack of established conventions for resource URLs and for how one specifies content types (ActiveResource does not comply) can make REST appear somewhat unpalatable.

However, outside of the Rails world, a variation on strict REST is gaining traction. This variation doesn’t discount the real need to deal with process-oriented applications simply because they don’t map to GET, POST, PUT, and DELETE. In fact, with this variation, you can accomplish anything you could do with XML-RPC, but forgo the added layer of indirection inherent in XML-RPC layered over HTTP. This variation is known as HTTP+POX, where POX stands for “plain old XML.”

In HTTP+POX, the REST convention I’ve spoken about throughout this chapter is used where a resource-based approach makes sense. Notably, everything possible with ActiveResource is in this category. But for other problems, where a process-oriented approach is required—whether to ensure the server can wrap a procedure within a database transaction, or to accomplish a task without first moving all of the data to the client for processing—the POX side of the convention takes control.

What is POX, in this context? It is simply a method, accessible via a URL, which takes parameters and returns a result in XML format. It’s like the page defined in the action parameter of a web form, but in this context, the parameters passed in can be complete data structures encoded in XML. In short, it is the same sort of server-side action we’ve been developing for years, with the addition of complex data as parameters. The “plain old XML” part of HTTP+POX is a way of bringing the procedural actions hidden behind an endpoint URI in XML-RPC back down to the lower-level HTTP layer.

Usually, when people say RESTful, this is what they mean.
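Here is a POX exchange in miniature, using stdlib REXML. The method name and fields are illustrative assumptions; the point is that a complex parameter travels as plain XML in the request body, and a plain XML document comes back:

```ruby
require 'rexml/document'

# Client side: encode a complex parameter -- here, an order -- as plain old
# XML, to be POSTed to a verb-style URL such as /orders/place.
def place_order_xml(movie_id, seats)
  "<place_order><movie_id>#{movie_id}</movie_id>" \
    "<seats>#{seats}</seats></place_order>"
end

# Client side: pull the result out of the server's plain XML response.
def order_id_from_response(xml)
  REXML::Document.new(xml).elements['confirmation/order_id'].text
end
```

Because the whole order arrives in one request, the server is free to wrap all of its inserts in a single database transaction, which is exactly what the row-per-resource approach gives up.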

Defining a Service Contract

Although REST was described by Fielding many years ago, REST is still in its infancy as a practical means for building web services. How to best implement a RESTful service is something that REST proponents still do not agree on.

The popularity of SOAP and XML-RPC was propelled by a rich toolset in a variety of development environments; the tools made it easy to create and to consume web services. ActionWebService is a great example; it makes child’s play of developing service APIs that can be shared as a bridge between applications.

REST has been seen by many as a reaction to SOAP, but there has been a tendency to throw out the baby with the bathwater. In this case, the bathwater is the protocol translation layer that sits atop HTTP. Discarding that is fine, as it doesn’t provide a large benefit to the end user, and it consumes resources to marshal and unmarshal data. The baby is a rich set of tools for creating and consuming RESTful services.

Tools all center around the contract that you, as a provider of a service, are expected to live up to. With SOAP, this contract is the WSDL file, which describes what methods are available in the web service, what the parameters to those methods are, and what the return values are.

The contract can be a great thing. It can be used to generate documentation. It can be read by a human to see what a service is all about. It can be used to generate complete client code. It can also be used to generate a skeleton of a service implementation.

The problem with a contract is that, like its legal equivalent, it implies some degree of commitment from the provider. Once you’ve published your service contract, you can’t change it willy-nilly. That’s good for consumers of the service, but can seem restrictive to the service provider. On the other hand, one of the goals of publishing a service is to have people use it, so making it easy for clients to use your service by guaranteeing the APIs won’t change underneath them is in your own best interest, too.

To encourage static APIs, it is a good practice to develop the contract first, then figure out how you are going to implement it. For SOAP, that means hand-coding the WSDL file. The hand-written file would then be used as input to a program that would generate stub service code. These stubs contain declarations for each method of the API into which you insert your own code.

This process makes it a challenge to change the API, because you can’t easily regenerate your stubs once you’ve already filled them in. This discourages frequent API changes; only a change that is absolutely essential – such as for a critical bug fix – would warrant the effort of hand-editing generated code. For what otherwise amounts to enhancements and new functionality, the WSDL-first process encourages adding a whole new API version with a separate WSDL and a separate set of generated methods, leaving the old version in place, with support continued for existing clients.

But writing WSDL by hand is a terrible chore. It can seem like yet another language to master. You already know how to declare methods in the language you are using. Why should you need to declare them yet again in an XML file? Indeed, if the interpreter or compiler of your application can understand your declarations, can’t those declarations also be translated automatically into a WSDL XML file for other machines to process?

The answer is, of course, yes. This is how most of the tools for working with WSDL work these days, including ActionWebService. In fact, in ActionWebService, the WSDL “file” itself is completely ephemeral; it is never written to disk, but instead is served up fresh with each request for it, based on the current method definitions. While this is great for development iterations, it’s not so great once you are trying to lock down and stick to a published API.
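The idea can be demonstrated with a toy in plain Ruby: if the interpreter can see your method declarations, a tool can walk them with reflection and emit a contract. This is an illustration only; real tools such as ActionWebService generate actual WSDL from their own declaration DSLs, and the class below is hypothetical:

```ruby
# A toy API class standing in for a service definition.
class MovieApi
  def get_movie(id); end
  def place_order(order); end
end

# Walk the declared methods and produce a minimal machine-readable
# description -- the seed from which a contract document like WSDL
# could be generated on each request.
def describe_api(klass)
  klass.instance_methods(false).sort.map do |name|
    { method: name, arity: klass.instance_method(name).arity }
  end
end
```

Because the description is derived from live code, it is regenerated on every call, which is exactly why such contracts are ephemeral until you deliberately freeze them.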

But whether you prefer WSDL first or WSDL last, the point is that there are a variety of tools available to help you get your SOAP or XML-RPC service out the door. So what about REST? What is the equivalent?

Here we find another problem in the REST community. There are some – mostly those who tend toward the strict REST, not RESTful, paradigm – who believe there is no need for SOAP-like tools for communicating to clients how a service works. Since strict REST is about applying four HTTP verbs on resources, and resources contain links to other resources, you only need a URL or two defining lists of resources to discover the entire service.

Strictly speaking, this argument is correct, but it’s overly restrictive. Hopefully, you’ve already been convinced that it’s not always appropriate to deal in terms of resources and that the occasional verb-based URL is OK. If you are in this group, then suddenly you need a way to express to others what verbs are available, what parameters you need to pass to those verbs, and what the resulting return values will be. Suddenly you need something very much like a WSDL file. Yet even many in the RESTful – i.e., “it’s not SOAP” – camp cringe at WSDL-like solutions. As a result, a standard way to describe RESTful web services has not yet been adopted, and there are no widely adopted tools, either.

There have been a number of efforts toward tool standardization, though, and for REST to really become an enterprise option, as SOAP and XML-RPC are today, some kind of description language and toolset will certainly be adopted soon.

REST Clients in Ruby

At the moment, ActiveResource is the de facto REST client and server in Rails—so much so that it has pushed ActionWebService completely out of the core Rails distribution. This is unfortunate, because ActiveResource’s style, which differs in some important ways from Fielding’s REST, is very far from being de facto in the REST world, much less the SOA or web-services world. Yet the choice by the maintainers of Rails to displace alternatives sends a message to new developers that they should use ActiveResource as their first—and apparently only—stop for implementing a remote service.

Of course, there are benefits to using ActiveResource, too. Like many other aspects of Rails, ActiveResource is a snap to set up and get running with quickly. Because it relies so heavily on convention, it is trivial to extrude an ActiveRecord model into an ActiveResource one with its own network API. Similarly, there is next to no configuration to be done on the client side, either.

ActiveResource can feel much like the original Rails screencast where David Heinemeier Hansson creates a blogging website in 10 minutes. The screencast was an inspiration to a number of developers sick of clunky development environments, including myself. On the other hand, writing a website using scaffolding is, in almost every way, a bad idea. Scaffolding is, by design, inflexible; although it’s quick, it’s not very pretty. But it is great as marketing material.

Indeed, because ActiveResource relies so heavily on convention, it does not automatically create a description of the service for clients, as ActionWebService does with WSDL. Rails clients know how to use the API for free, and for screencasts that is enough. But when you’re writing web services and back-end SOA services, you can’t depend on convention if your clients are not using Rails. There are description languages that can handle REST services—WADL appears to be the best contender for a standard—but Rails does not yet generate WADL files automatically.

When you’re not consuming ActiveResource services, you can consume REST services just as easily in Rails, provided the service supplies a WADL file. Sam Ruby and Leonard Richardson have written a Ruby client that parses WADL files and generates a client library, allowing you to create Ruby interfaces to REST services without writing a custom client library or hand-composing requests and hand-parsing results. Their client, wadl.rb, can be obtained at http://www.crummy.com/software/wadl.rb/.

The Way the Web Was Meant to Be Used

REST proponents argue that XML-RPC is an “unnatural” way to use HTTP, because XML-RPC treats HTTP only as a transport protocol. All requests are POST transactions, and the rest of the HTTP protocol goes unused. XML-RPC layers its own logic atop HTTP, delivering everything needed to process the request at the endpoint in the XML-RPC payload itself. On the other hand, Fielding himself provides us with all the arguments we need to dissuade ourselves from using REST for an extremely fine-grained service-oriented architecture. Inasmuch as this chapter may appear to throw FUD (fear, uncertainty, and doubt) in the direction of REST, so too do REST proponents direct FUD at XML-RPC.

As unnatural as it may seem to layer atop HTTP, in reality, XML-RPC has been serving enterprise architects well for quite some time. If XML-RPC wasn’t what the architects of HTTP had in mind, they may well be pleased by how far it has come, driven in large part by the flexibility of HTTP itself, which performs extraordinarily well as a transport protocol for any type of packaged data. Indeed, Fielding will no doubt be pleased if some version of REST is one day heralded as the de facto mechanism for implementing SOA, even though that was not his original intent, either.

In the end, the decision is yours to make. If your company has something to gain from being Web 2.0 buzzword-compliant, then choosing REST may be sensible just for the press. If your goal is achieving a service architecture behind the firewall, where no external inspection is taking place, then you’re likely to get more mileage, with less hassle, out of XML-RPC. In the following chapters, we’ll see how to build both types of services. We’ll build an XML-RPC back-end service architecture for our movies application, and an HTTP+POX interface for the public-facing Internet.

For reference, Table 15-3 provides a list of the main remote service protocols and conventions, and the various considerations discussed in this and the previous chapter. In the next chapter, we’ll start building our first Rails service using XML-RPC.
