Have you ever heard the adage that “there are only two hard problems in computer science: cache invalidation, naming things, and off-by-one errors”? Allegedly, Phil Karlton said this sometime around 1996 or 1997. While many comedic spin-offs of this famous phrase mention additional problems, a recent observation in the world of APIs seems to prove the bit about naming things right: there’s some confusion about the terms “API” and “microservice,” and some people seem to use them interchangeably.

The whole world of computing is continuously in flux. Developers use various concepts and technologies and connect them in different ways. Therefore, it is not uncommon that we use inconsistent terminology, having multiple words for what is roughly the same concept or, vice versa, saying the same word but meaning different things.

Regarding APIs and microservices: Yes, they are related concepts, and there’s an interplay between them, but they are not the same thing. So, let’s get our terms straight!


When people talk about APIs these days, they are more often than not describing remote interfaces exposed through HTTP endpoints, and these APIs are what Stoplight’s API Corner is all about. To differentiate these remote APIs from the local system APIs mentioned above, I like using the term “Web API” now and then. (Although some people use that term for local APIs in the browser — confusing, right?)

We further categorize remote APIs, or Web APIs, either by their underlying design paradigms, such as query, RPC, or REST, or by their protocols and query languages, like SOAP, gRPC, or GraphQL. Apart from that, we also differentiate APIs by their target audience and call them public, partner, or private/internal APIs.

Strictly speaking, the term API only describes the interface, the shared language that client and server, API consumer and API provider, use to exchange information. For the API consumer, the API is nothing more than a description of the interface and an endpoint URL or set of URLs. URLs are one of the basic but somewhat magical building blocks of the web that allow a client to access information or services without knowing the nature or location of the server. Clients may remain ignorant of whether the URL leads to a Raspberry Pi hidden in someone’s basement, or a worldwide delivery network of massive data centers on every continent, as long as they receive a response. That is one of the things that make APIs so exciting because developers of all kinds can tap into infrastructure built and exposed by others to enhance their applications with additional functionality.
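To a consumer, then, the API really is just an endpoint URL plus the agreed-upon interface. The short sketch below illustrates that point: the client function knows nothing about the server except its base URL and the shape of the response. (The `/greeting` path, the `message` field, and the toy in-process server are all hypothetical, purely for illustration.)

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """A toy server standing in for whatever is behind the URL -- it could
    just as well be a Raspberry Pi or a global data center network; the
    client can't tell the difference."""
    def do_GET(self):
        body = json.dumps({"message": "hello from the API"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def fetch_message(base_url):
    """The consumer's entire view of the API: an endpoint URL plus the
    agreed response shape (JSON with a 'message' field)."""
    with urlopen(f"{base_url}/greeting") as resp:
        return json.loads(resp.read())["message"]

# Bind to port 0 so the OS picks a free port, and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

msg = fetch_message(f"http://127.0.0.1:{server.server_port}")
print(msg)  # -> hello from the API
server.shutdown()
```

Swap the base URL for any other host exposing the same interface and `fetch_message` works unchanged, which is exactly the decoupling the paragraph above describes.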

API providers, however, not only have to design, implement, and document the API, but also think about the infrastructure behind it. In the era of cloud computing, though, that rarely means buying hardware and renting data center space anymore. Instead, API providers can choose from various as-a-service offerings, from managed clusters of virtual machines or containers to fully serverless code hosting environments. Regardless of the infrastructure choice, at some point, the API needs to be deployed.

Do you see what I did there? I talked about deploying the API when what I really meant was deploying whatever code and infrastructure are required to expose the API. From a provider’s perspective, the API is not some magical door, but a tangible asset that needs to live somewhere. And, increasingly often, as companies move to a microservice architecture, that asset is … a microservice, or a set of microservices.

A microservice is an independent, self-contained component of a broader system or application. Every microservice should have a well-defined scope and responsibility and ideally do only one thing. Ideally, it is stateless; if it is stateful, it should come with a persistence layer (i.e., a database) of its own that it doesn’t share with other services. Software development teams use microservices to develop independent, potentially reusable components in a more distributed fashion. They can use a different framework, set of dependencies, or even a wholly different programming language for each of them. Microservices can also help with scalability, as they are distributed by nature, and each of them can grow or be replicated independently.
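The “own persistence layer” rule can be sketched in a few lines. In this hypothetical example (the `InventoryService`/`OrderService` names and the in-memory dicts standing in for databases are all illustrative, not from any real system), each service does one thing, owns its store exclusively, and reaches the other only through its narrow interface:

```python
class InventoryService:
    """Does one thing: track stock levels. Owns its store exclusively."""
    def __init__(self):
        self._stock = {"widget": 3}  # private persistence, stands in for a database

    def reserve(self, item):
        """The narrow interface other services are allowed to use."""
        if self._stock.get(item, 0) > 0:
            self._stock[item] -= 1
            return True
        return False

class OrderService:
    """Does one thing: accept orders. Talks to inventory only via its API."""
    def __init__(self, inventory):
        self._inventory = inventory  # would be an API client in a real deployment
        self._orders = []            # its own store, separate from inventory's

    def place_order(self, item):
        if self._inventory.reserve(item):
            self._orders.append(item)
            return "accepted"
        return "out of stock"

inventory = InventoryService()
orders = OrderService(inventory)
results = [orders.place_order("widget") for _ in range(4)]
print(results)  # three accepted, then out of stock
```

Because `OrderService` never touches `_stock` directly, the inventory team could swap its store or rewrite the service in another language without breaking the order team, as long as the interface stays stable.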

Containers are a means to establish isolated contexts within an operating system. In practice, that means that each of them has a separate virtual file system containing a set of installed software and associated configuration. As they are isolated, no container can directly access or affect other containers or the underlying host system.

The ability to create containers had already been part of the Linux operating system for quite a while, but it wasn’t until the launch of Docker in 2013 that containers became a commonly used technology.

As we’re talking about definitions, it’s worth noting that microservices and containers are not the same either, but the two concepts often go hand in hand, just like APIs and microservices. Without containers, either every server would have to be configured to run multiple microservices that might then negatively interfere with each other, or each microservice would require a separate server or virtual machine of its own, which causes unnecessary overhead. Therefore, each microservice is typically deployed as a set of containers managed by container orchestration software like Kubernetes. It’s safe to say that the rise of containers and the rise of microservices have influenced and benefitted from each other.

An application or API built on a microservice architecture not only exposes itself in its entirety but also requires connections between its internal components, the microservices. As every microservice could be implemented in a different programming language, we need to rely on standard protocols, like HTTP, to facilitate these connections. And that is where we circle back to APIs.

In its most basic form, every microservice exposes an API so that other services can make requests and retrieve data. There are other approaches, too, like message queues, but let’s stick with the basics for now. The microservice API is a private API that applies only to a single application. It is commonly not available on a public URL but, instead, uses private IPs or hostnames that exist only within the organization’s closed private network or even just a single cluster of servers. Still, these APIs can follow any design paradigm or protocol that partner or public APIs use. And, although they have a limited number of consumers, they should follow the basic rules of developer experience, too. That means that they should have a relevant, consistent, and evolvable API design and some documentation to inform teams building other microservices (or even your future self) on how to use the service. Therefore, you can and should use similar tools, for example, Stoplight’s visual API designer, to create your microservice APIs.
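Addressing such a private API usually means resolving a logical service name to an internal hostname or IP rather than hard-coding a public domain. As one hedged sketch of that idea: Kubernetes, for example, injects `{NAME}_SERVICE_HOST` and `{NAME}_SERVICE_PORT` environment variables for each Service in a cluster. The helper below follows that convention; the `INVENTORY` service name, the fallback IP, and the `/stock` path are hypothetical, chosen for illustration:

```python
import os

def service_url(name, default_host, path):
    """Build an internal URL from configuration rather than a public domain.

    Falls back to a default host when the environment variables that a
    cluster would normally inject are not set (e.g., in local development).
    """
    host = os.environ.get(f"{name}_SERVICE_HOST", default_host)
    port = os.environ.get(f"{name}_SERVICE_PORT", "80")
    return f"http://{host}:{port}{path}"

url = service_url("INVENTORY", "10.0.12.7", "/stock")
print(url)  # resolvable only inside the private network, e.g. http://10.0.12.7:80/stock
```

Keeping the address in configuration, not code, is what lets the same client work unchanged across clusters and environments while the API itself stays off the public internet.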

Of course, there are different aspects to emphasize when designing microservice APIs compared to more outward-facing APIs. We’ll look at API design for microservices in an upcoming post, so stay tuned.

Microservices and APIs are not the same and, while we’re at it, neither are microservices and containers. However, the two concepts work together in two different ways: First, microservices can be a means to deploy the backend for an internal, partner, or public API. Second, microservices typically rely on APIs as a language-independent means to communicate with each other in an internal network. Development teams can use similar design approaches and tools for creating both outward-facing and microservice APIs. We will cover distinct best practices for different types of APIs in an upcoming post.
