The three statements seem obvious until you begin to unpick what they mean – and they might even seem contradictory. Making an API simple seems like a noble goal, but it can easily be thwarted by complex edge cases, existing legacy code and a tendency on the part of some API designers to expose underlying data models in raw form. Flexibility often breeds complexity as the API becomes overloaded to meet many use cases. We’ll take each topic in turn and finish up with an all-important metric: TTFHW.

Simplicity

Picking on simplicity first, it is important to ask the question “simple for what?” – it may be trivially easy to achieve a particular task with an API, but horrendously complex to achieve others. There is also a false simplicity trap to be aware of:

Making the methods in the API simple may in turn mean lots of them need to be called to get anything done.

It’s also important to avoid trivial conclusions – “well, JSON is simpler than XML, so we’ll just use that!”. There are many excellent articles on good API design (e.g. Joshua Bloch’s Google talk [the slides are here]) so we won’t dig too deep here. However, it may be useful to think about simplicity at several different levels, including at least:

Method Structure: what methods are provided and how do they interrelate? If search-style functionality is provided, how are queries expressed? Do actions have multiple effects at the same time?

Data Model: what is the data model of the resources being exposed? If a variety of different objects are represented (e.g. Hotels and Reservations), how do they relate? If different objects have identifiers, how are these managed and used?

Authentication: what authentication is required to access the API? Are rights and quotas easy to understand and work with?

While it is hard to come up with general guidelines, the following three questions are worth asking:

What are the logical atomic operations which have an easily understood and useful result? These are likely to be useful as elements of the API.

What are the most common and important complete workflows that should be supported (see part I of this series as to why complete workflows are important!), and how can these be done in the fewest number of calls possible? The calls which provide sound building blocks for these workflows are also likely to be worth having.

At the data model level, what are the common combinations of resources that are needed together? It’s likely that structuring the API so that common combinations of data can be grabbed in one go is very useful.
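To make the last point concrete, here is a minimal sketch – all names and data are invented for illustration – of two atomic operations plus one call that returns a common combination of resources (a hotel and its reservations) in a single request:

```python
# Hypothetical API sketch: hotels and reservations are linked resources,
# and one combined call serves the common "both at once" case.

HOTELS = {"h1": {"id": "h1", "name": "Seaside Inn"}}
RESERVATIONS = {"h1": [{"id": "r1", "hotel_id": "h1", "guest": "Ada"}]}

def get_hotel(hotel_id):
    """Atomic operation: fetch a single hotel resource."""
    return HOTELS[hotel_id]

def get_reservations(hotel_id):
    """Atomic operation: fetch the reservations for a hotel."""
    return RESERVATIONS.get(hotel_id, [])

def get_hotel_with_reservations(hotel_id):
    """Combined call: the common pairing of resources, fetched in one go."""
    return {
        "hotel": get_hotel(hotel_id),
        "reservations": get_reservations(hotel_id),
    }
```

The combined call adds nothing that the atomic calls cannot do – it simply halves the round trips for the combination clients ask for most often.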

In designing for simplicity it is also useful to bear in mind: simplicity for some may mean complexity for others.
On the face of it, using JSON may indeed seem simpler than XML, but if the API exposes complex nested data structures which often need to be parsed, split and combined, XML might indeed be a better bet. Supporting multiple data formats is often desirable since it allows different communities of users to use familiar tools (that makes it simpler, right?) – but it puts a burden on the operator of the API to ensure both representations are maintained and well documented (oops – complex :-/).

Lastly, simplicity is also linked to documentation: good documentation goes a long way to explaining how to get something done with an API. Again, this is linked to starting with a clear vision of the “jobs to be done” with the API.

Flexibility

Having made the API wonderfully simple comes the next challenge: making it flexible. This immediately seems contradictory to the advice in the previous section of trying to ensure that important use cases are easy to execute – surely the API might end up being over-fitted for those use cases?

This is also the part in the conversation where the Hypermedia / HATEOAS / Media Types discussion comes into play for REST APIs. Hypermedia APIs are, in theory, infinitely flexible, since the API can effectively be changed on the fly and clients will still be able to cope.

Assuming for the moment that an API is fixed and not changing in real time, the methods and data models available do in general circumscribe a space of possible sequences of operations (data retrieval, state changes etc.) which can be carried out on the API. This space can potentially be very large or very restricted (e.g. read-only on a very small number of resource types). For any given API a question on flexibility is really two questions:

How large is the potential space of (sequences) of operations that would make sense given the data and operations available within the systems / datastores underlying the API?

What subset of these are possible via the API?

An example of a different answer to these two questions would be an API which exposed two types of data objects (e.g. people and addresses) which were linked behind the scenes but which did not permit relationship type queries (e.g. “which people live here”) via the API or expose sufficient identifiers to do so.
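A sketch of that example, with invented data: if the API does expose the linking identifiers, the relationship question can at least be answered client-side; without an address identifier in the payload, it cannot be answered at all.

```python
# Hypothetical people/addresses data as an API might return it. Because each
# person record exposes its linking identifier (address_id), the relationship
# query "which people live here?" is answerable even without a dedicated
# relationship endpoint.

PEOPLE = [
    {"id": "p1", "name": "Ada", "address_id": "a1"},
    {"id": "p2", "name": "Bob", "address_id": "a1"},
    {"id": "p3", "name": "Cyn", "address_id": "a2"},
]

def people_at(address_id):
    """Client-side relationship query over the exposed identifiers."""
    return [p["name"] for p in PEOPLE if p["address_id"] == address_id]
```

Strip `address_id` from the records and the space of operations the API supports shrinks, even though the underlying data store still holds the link.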

If the answers to the two questions are essentially the same, then the types of applications which can be written against the API are essentially all applications possible for that data set and the available back-end operations. Note however: they may not be efficient – an example of something possible but inefficient would be an API which returns only single records from a large data set. In this case it is indeed possible to see all entries, but it takes a long time.
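A toy illustration of “possible but inefficient” (the data set and call counter are invented): every record is reachable through a single-record endpoint, but a paged endpoint retrieves the same data in a fraction of the calls.

```python
# Simulated API with two read endpoints over the same 1000-record data set.
RECORDS = [{"id": i} for i in range(1000)]
call_count = 0

def get_record(record_id):
    """Single-record endpoint: every record is reachable, one call each."""
    global call_count
    call_count += 1
    return RECORDS[record_id]

def get_records(offset, limit):
    """Paged endpoint: the same data, many records per call."""
    global call_count
    call_count += 1
    return RECORDS[offset:offset + limit]

# Reading everything one record at a time takes 1000 calls...
everything = [get_record(i) for i in range(len(RECORDS))]
assert call_count == 1000

# ...while a page size of 100 needs only 10 calls for identical data.
call_count = 0
paged = [r for off in range(0, 1000, 100) for r in get_records(off, 100)]
assert call_count == 10
assert paged == everything
```

Both endpoints span the same space of reachable data – the difference is purely one of efficiency, which is exactly the distinction drawn above.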

So how does this square with the discussion on simplicity in the previous section which seems to suggest that the API should be fitted to common use cases? In reality, these notions are not so contradictory. Covering both cases effectively is likely best done by:

Exposing atomic operations that allow (in combination) the execution of the full space of operations the API can offer.

If these align poorly with the primary use cases that have been identified, add a second layer of macro operations which reduce the workload for common / expected operations.

Doing this should mean that new, serendipitous use cases are possible even if they are not particularly efficient to code up. If the API is heavily used, it is likely that user feedback over time will identify new potential macro combinations which can be added.
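A hedged sketch of this two-layer approach (all function names are hypothetical): the atomic operations stay available for serendipitous uses, while a macro operation wraps a common workflow in a single call.

```python
# Layer 1: atomic operations that together cover the full space of
# state changes the back end supports.

def create_reservation(hotel_id, guest):
    """Create a reservation in a held (unconfirmed) state."""
    return {"hotel_id": hotel_id, "guest": guest, "status": "held"}

def confirm_reservation(reservation):
    """Move a held reservation to confirmed."""
    reservation["status"] = "confirmed"
    return reservation

def notify_guest(reservation):
    """Send (here: record) the confirmation notification."""
    reservation["notified"] = True
    return reservation

# Layer 2: a macro operation for the common end-to-end booking workflow.
def book(hotel_id, guest):
    """One call instead of three for the expected common case."""
    return notify_guest(confirm_reservation(create_reservation(hotel_id, guest)))
```

Clients with unusual needs (say, holding a reservation without confirming it) still compose the atomic calls directly; everyone else pays for only one round trip.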

HATEOAS arguably improves flexibility further because it allows runtime change in the API and in clients – however, the flexibility question is essentially the same: can all sensible combinations of operations be carried out via the API?

TTFHW

Having a simple, flexible API is not by itself sufficient to make an API truly great. There are still ways to fall at the last hurdle! In particular, great API design is wasted if developers cannot engage with the API: in order for it to be widely used, it needs to be easy to adopt. The slide deck here has a great take on what it takes to get engagement, especially from developers, including: making it very clear what the API does, providing free access if possible or at the very least instant signup, being transparent about pricing, and having great documentation. Without these, good API design is lost behind barriers that make getting started hard.

A great term that hits the nail on the head – and one we hadn’t seen before John’s OSCON presentation – is one that everybody doing API design should adopt as a key metric:

TTFHW: Time To First Hello World.

This is great way to think about what hurdles a user of your API actually has to go through to get something working – not just understanding the API, but actually having working code. Getting people to this goal quickly builds confidence that the API is well organized and things are likely to work as expected. Delaying the “success moment” too long risks the developer going elsewhere or shying away from bigger projects.

And the measure shouldn’t assume that the developer already has an account, keys etc. – not having automated signup (“email us for a key”) is just as much of a barrier as bad documentation.

Finally, in addition to your TTFHW, you might also want to measure TTFCA or TTFPA – Time to First Cool App or Time To First Profitable App. If you can get these two down low, then the engagement on your API will jump up even more quickly!

Next Up

In the next post we’ll cover more items from the list – management and monitoring of the API and why they are important not only for the operators of the API, but also for users.

Update: Part III, “The need for API Management and Infrastructure” can be read here.
