Author Archive

Justin Baker

Justin Baker is a Sr. Product Designer at Ten-X | Auction.com, leading design on Auction.com's core technology systems. He grew up in the Dev Tools space and is the former Lead Product Designer at LaunchDarkly. He is also a product marketer who writes for DZone, Tech.co, and Tech Ladder.

In this article, James Higginbotham outlines 5 reasons why your product’s API should support events. He discusses this in the context of ‘API Eventing’, whereby APIs become event-driven.

For the last decade, modern web APIs have grown from solutions like Flickr, to robust platforms that generate new business models. Throughout this period of growth, most APIs have been limited to request-response over HTTP. We are now seeing a move back to eventing with the popularity of webhooks to connect SaaS solutions, the introduction of technologies such as Kafka to drive internal messaging, and the need for integrating IoT devices.

API eventing completely changes the way API consumers interact with our APIs, creating new possibilities that request-response cannot. Let’s examine the driving factors contributing to the rise of API eventing in greater detail, along with the opportunities that may inspire you to consider adding API event support to your API.

How companies are adding realtime capabilities to their products and building realtime APIs

Mirroring the rise of API-driven applications, realtime is becoming an emerging, omnipresent force in modern application development. It is powering instant messaging, live sports feeds, geolocation, big data, and social feeds. But, what is realtime and what does it really mean? What types of software and technology are powering this industry? Let’s dive into it.

What Is Realtime?

For the more technical audience, realtime traditionally describes realtime computing, whereby “hardware and software systems are subject to a realtime constraint, for example from event to system response.” For this article, we’re framing realtime from the perspective of an end-user: the perception that an event or action happens sufficiently quickly to be perceived as nearly instantaneous.

Moreover, realtime could be defined in a more relative temporal sense. It could mean that a change in A synchronizes with a change in B. Or, it could mean that a change in A immediately triggers a change in B. Or… it could mean that A tells B that something changed, yet B does nothing. Or… does it mean that A tells everyone something changed, but doesn’t care who listens?

Let’s dig a bit deeper. Realtime does not necessarily mean that something is updated instantly (in fact, there’s no singular definition of “instantly”). So, let’s not focus on the effect, but rather the mechanism. Realtime is about pushing data as fast as possible — it is automated, synchronous, and bi-directional communication between endpoints at a speed within a few hundred milliseconds.

Synchronous means that both endpoints have access to data at the same time.

Bi-directional means that data can be sent in either direction.

Endpoints are senders or receivers of data (phone, tablet, server).

A few hundred milliseconds is a somewhat arbitrary metric since data cannot be delivered instantly, but it most closely aligns with what humans perceive as realtime (Robert Miller demonstrated this in 1968).

With this definition and its caveats in mind, let’s explore the concept of pushing data.

Data Push

We’ll start by contrasting data push with “request-response.” Request-response is the most fundamental way that computer systems communicate. Computer A sends a request for something from Computer B, and Computer B responds with an answer. In other words, you can open up a browser and type “reddit.com.” The browser sends a request to Reddit’s servers and they respond with the web page.

In a data push model, data is pushed to a user’s device rather than pulled (requested) by the user. For example, modern push email allows users to receive email messages without having to check manually. Similarly, we can examine data push in a more continuous sense, whereby data is continuously broadcast. Anyone who has access to a particular channel or frequency can receive that data and decide what to do with it.

Moreover, there are a few ways that data push/streaming is currently achieved:

HTTP Streaming

HTTP streaming provides a long-lived connection for instant and continuous data push. You get the familiarity of HTTP with the performance of WebSockets. The client sends a request to the server, and the server holds the response open indefinitely. The connection stays open until the client closes it or a server-side event occurs. If there is no new data to push, the application sends a series of keep-alive ticks so the connection doesn’t close.
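One common flavor of HTTP streaming is Server-Sent Events, whose wire format is simple enough to sketch directly. The helpers below frame messages and keep-alive ticks the way an SSE server would write them to the open response (a minimal sketch; a real server would also set the `text/event-stream` content type and flush after each write):

```python
def format_sse(data, event=None):
    """Frame a message using the Server-Sent Events wire format:
    optional 'event:' line, one 'data:' line per line of payload,
    terminated by a blank line."""
    lines = []
    if event is not None:
        lines.append("event: " + event)
    for chunk in data.splitlines():
        lines.append("data: " + chunk)
    return "\n".join(lines) + "\n\n"


def keep_alive_tick():
    """A comment line: clients ignore it, but writing it periodically
    keeps intermediaries from closing the idle connection."""
    return ": keep-alive\n\n"
```

A server would call `format_sse` whenever new data arrives and `keep_alive_tick` on a timer in between.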

Websockets

WebSockets provide a long-lived connection for exchanging messages between client and server. Messages may flow in either direction for full-duplex communication. This bi-directional connection is established through a WebSocket handshake. Just like in HTTP Streaming and HTTP Long-Polling, the client sends a regular HTTP request to the server first. If the server agrees to the connection, the HTTP connection is replaced with a WebSocket connection.
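The upgrade step of that handshake is fully specified by RFC 6455: the server proves it understood the request by hashing the client’s `Sec-WebSocket-Key` with a fixed GUID. A minimal sketch of the server-side computation:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket handshake.
WS_MAGIC = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"


def websocket_accept(sec_websocket_key):
    """Derive the Sec-WebSocket-Accept header value the server returns
    (alongside '101 Switching Protocols') to complete the upgrade."""
    digest = hashlib.sha1((sec_websocket_key + WS_MAGIC).encode()).digest()
    return base64.b64encode(digest).decode()
```

The test value below is the worked example from RFC 6455 itself.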

Webhooks

Webhooks are a simple way of sending data between servers, with no long-lived connections needed. The sender makes an HTTP request to the receiver when there is data to push. A webhook registers, or “hooks,” a callback URL and notifies you any time an event occurs. You register this URL in advance, and when an event happens, the server sends an HTTP POST request containing an event object to the callback URL. You might use a webhook to receive notifications about certain topics, or to be notified whenever a user changes or updates their profile.
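Because the callback URL is reachable by anyone, webhook providers commonly sign the POST body so receivers can verify it came from them (GitHub’s `X-Hub-Signature-256` header is one well-known example). A sketch of that pattern, with the header name borrowed from GitHub’s convention:

```python
import hashlib
import hmac
import json


def sign_webhook(secret, payload):
    """Sender side: serialize the event object and compute an
    HMAC-SHA256 signature over the exact bytes being sent."""
    body = json.dumps(payload, separators=(",", ":")).encode()
    sig = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, sig


def verify_webhook(secret, body, signature):
    """Receiver side: recompute the signature over the raw body and
    compare in constant time to avoid timing attacks."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The receiver must verify against the raw request body, not a re-serialized copy, since any whitespace difference changes the digest.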

HTTP Long-Polling

HTTP long-polling provides a long-lived connection for instant data push. It is the easiest such mechanism to consume, and also the easiest to make reliable. The client sends a request and the server holds it open until new data arrives or a timeout occurs; most servers time out after 30 to 120 seconds, depending on how the API was set up. After the client receives a response (whether new data or a timeout), it immediately sends another request, and the cycle repeats.
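The client side of that cycle is just a loop. The sketch below abstracts the blocking HTTP call behind a `fetch` callable (a hypothetical stand-in for whatever HTTP client you use) so the re-poll logic is visible on its own:

```python
def long_poll(fetch, handle, max_cycles=None):
    """Drive a long-polling loop. `fetch` performs one blocking request
    and returns (status, data), where status is "data" or "timeout".
    On either outcome we immediately issue the next request."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        status, data = fetch()
        if status == "data":
            handle(data)
        # On "timeout" there is nothing to handle; just re-poll.
        cycles += 1
```

In production you would also add error backoff so a failing server isn’t hammered with instant retries.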

Is pushing data hard? Yes, it is, especially at scale (e.g. pushing updates to millions of phones simultaneously). To meet this demand, an entire realtime industry has emerged, which we’ll define as Realtime Infrastructure as a Service (Realtime IaaS).

Realtime Libraries

Here is a compilation of resources that are available for developers to build realtime applications based on specific languages / frameworks:

Realtime Infrastructure as a Service

According to Gartner, “Infrastructure as a service (IaaS) is a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities are owned and hosted by a service provider and offered to customers on-demand. Customers are able to self-provision this infrastructure, using a Web-based graphical user interface that serves as an IT operations management console for the overall environment. API access to the infrastructure may also be offered as an option.”

We often hear about PaaS (Platform as a Service) and SaaS (Software as a Service), so how are they different from IaaS?

Infrastructure as a Service (IaaS): hardware is provided by an external provider and managed for you.

Platform as a Service (PaaS): both hardware and your operating system layer are managed for you.

Software as a Service (SaaS): an application layer is provided for the platform and infrastructure (which is managed for you).

To power realtime, applications require a carefully architected system of servers, APIs, load balancers, and more. Instead of building these systems in-house, organizations are finding it more cost-effective and resource-efficient to purchase this infrastructure and operate it in-house. These systems, therefore, are not just IaaS; they typically provide both a platform and a software layer to help with management. Foundationally speaking, their core benefit is that they provide realtime infrastructure, whether you host it internally or rely on managed instances.

It all comes down to the simple truth that realtime is hard for a number of reasons:

Customer Uptime Demand – Customers that depend on realtime updates will immediately notice when your network is not performant.

Horizontal Scalability – You must be able to handle volatile and massive loads on your system or risk downtime. This is typically achieved through clever horizontal scalability and systems that are able to manage millions of simultaneous connections.

Architectural Complexity – Maintaining a performant realtime system is not only complex, but it requires extensive experience and expertise. This is expensive to buy, especially in today’s high demand engineering market.

Contingencies – Inevitably, your system will experience some downtime, whether due to an anticipated load spike or a newly released feature. It is important, therefore, to have multiple uptime contingencies in place to make sure that the system knows what to do, should your primary realtime mechanism fail to perform.

Queuing – When you’re sending a lot of data, then you likely need an intermediate queuing mechanism to ensure that your backend processes are not overburdened with increased message loads.

Realtime Application IaaS

Realtime app infrastructure sends data to browsers and clients. It typically uses pub/sub messaging, webhooks, and/or websockets — and is separate from an application or service’s main API. These solutions are best for organizations that are looking for realtime messaging without the need to build their own realtime APIs.

These systems also have more well-built platform/software management tools on top of their infrastructure offerings. For instance, the leading providers have built-in configuration tools like access controls, event delegation, debugging tools, and channel configuration.

Benefits of Realtime App IaaS

Speed – typically explicitly designed to deliver data with low latencies to end-user devices, including smartphones, tablets, browsers, and laptops.

Multiple SDKs for easier integration.

Uses globally distributed realtime data delivery platforms.

Multiple protocol adapters.

Well-tested in production environments.

Keeps internal configuration to a minimum.

Use Cases

While some of the platforms out there function differently, here are some of the most typical use cases:

Realtime Chat – Pub/sub messaging makes it straightforward to broadcast chat messages to every participant in a channel the instant they are sent, without clients having to poll for new messages.

Solutions

Here are some realtime application IaaS providers (managed) to check out for further learning: PubNub, Pusher, and Ably.

Realtime API IaaS for API Development

Realtime API infrastructure specifically allows developers to build realtime data push into their existing APIs. Typically, you would not need to modify your existing API contracts, as the streaming server would serve as a proxy. The proxy design allows these services to fit nicely within an API stack. This means it can inherit other facilities from your REST API, such as authentication, logging, and throttling, and consequently it can easily be combined with an API management system. In the case of WebSocket messages being proxied out as HTTP requests, the messages may be handled statelessly by the backend. Messages from a single connection can even be load balanced across a set of backend instances.

All in all, realtime API IaaS is used for API development, specifically geared for organizations that need to build highly-performant realtime APIs like Slack, Instagram, Google, etc. All of these orgs build and manage their infrastructure internally, so the IaaS offering can be thought of as a way to extend these capabilities to organizations that lack the resources and technical expertise to build a realtime API from scratch.

Benefits of Realtime API IaaS

Custom build an internal API.

Works with existing API management systems.

Does not lock you into a particular tech stack.

Provides realtime capabilities throughout the entire stack.

Usually proxy-based, with pub/sub or polling.

Add realtime to any API, no matter what backend language or database.

Cloud or self-hosted API infrastructure.

It can inherit facilities from your REST API, such as authentication, logging, throttling.

Use Cases

While some of the platforms out there function differently, here are some of the most typical use cases:

API development – As we’ve discussed, you can build custom realtime APIs on top of your existing API infrastructure.

Microservices – In a microservice environment, a realtime API proxy makes it easy to listen for instant updates from other microservices without the need for a centralized message broker. Each microservice gets its own proxy instance, and microservices communicate with each other via your organization’s own API contracts rather than a vendor-specific mechanism.

Message queue – If you have a lot of data to push, you may want to introduce an intermediate message queue. This way, backend processes can publish data once to the message queue, and the queue can relay the data via an adapter to one or more proxy instances. The realtime proxy is able to forward subscription information to such adapters, so that messages can be sent only to the proxy instances that have subscribers for a given channel.

API management – It’s possible to combine an API management system with a realtime proxy. Most API management systems work as proxy servers as well, which means all you need to do is chain the proxies together. Place the realtime proxy in the front, so that the API management system isn’t subjected to long-lived connections. Also, the realtime proxy can typically translate WebSocket protocol to HTTP, allowing the API management system to operate on the translated data.

Large scale CDN performance – Since realtime proxy instances don’t talk to each other, and message delivery can be tiered, this means the realtime proxy instances can be geographically distributed to create a realtime push CDN. Clients can connect to the nearest regional edge server, and events can radiate out from a data source to the edges.
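The message queue use case above can be sketched as a small adapter: backends publish once to the queue, and the adapter relays each message only to the proxy instances that have reported subscribers for that channel. Class and method names here are illustrative, not from any particular vendor:

```python
from collections import defaultdict


class QueueAdapter:
    """Relay messages from a queue to realtime proxy instances,
    skipping instances with no subscribers on the channel."""

    def __init__(self):
        # channel name -> set of proxy instances with subscribers
        self.subscriptions = defaultdict(set)

    def on_subscribe(self, proxy, channel):
        self.subscriptions[channel].add(proxy)

    def on_unsubscribe(self, proxy, channel):
        self.subscriptions[channel].discard(proxy)

    def relay(self, channel, message):
        """Fan one queued message out to interested proxies only.
        In practice publish() would be an HTTP POST to the proxy."""
        for proxy in self.subscriptions[channel]:
            proxy.publish(channel, message)
```

Because subscription state lives in the adapter, backend publishers stay completely unaware of how many proxies or clients exist.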

Solutions

Pushpin, the open source reverse proxy for the realtime web discussed later in this archive, is one solution in this category.

Conclusion

Realtime is becoming an emerging, omnipresent force in modern application development. It is not only a product differentiator, but often essential to product success. It has accelerated the proliferation of widely-used apps like Google Maps, Lyft, and Slack. Whether you’re looking to build your own API from scratch or build on top of an IaaS platform, realtime capabilities are increasingly becoming a requirement of the modern tech ecosystem.

This online resource is a unique way to frame a conceptual model for evented APIs. Sam Curren and Phillip J. Windley discuss the fundamentals of evented APIs, how evented systems work, and a proposed protocol.

Events indicate something has happened. In this they differ from the request-response interaction style popular on the Web. Event-based systems are declarative whereas request-response systems are interrogatory. The difference between events (“this happened”) and requests (“will you do this?”) offers benefits in looser coupling of components as well as semantic encapsulation (see On Hierarchies and Networks for more detail).

APIs have become an economic imperative for many companies. But APIs based solely on request-response style interactions limit integrations to those where one system always knows what it wants from the other. The calling service must script the interaction and the APIs simply follow along.

We envision a world where applications integrate multiple products and services as equals based on event-driven interactions. Evented APIs, following the form described in this document, enable building such applications.

Web API Design – “This e-book is a collection of design practices that we have developed in collaboration with some of the leading API teams around the world, as they craft their API strategy through a design workshop that we provide at Apigee.”

API Design Fundamentals – ‘The fundamental concept in any RESTful API is the resource. A resource is an object with a type, associated data, relationships to other resources, and a set of methods that operate on it. It is similar to an object instance in an object-oriented programming language, with the important difference that only a few standard methods are defined for the resource (corresponding to the standard HTTP GET, POST, PUT and DELETE methods), while an object instance typically has many methods.’

Zapier has done a great job putting together a course introducing APIs. It is perfect for those just starting their API journey and covers protocols, data formats, API design, authentication, real-time communication, and implementation.

Entrepreneurship and APIs

If you are looking for an easy read about how important APIs are in entrepreneurial endeavors, I wrote an explanation on the topic a few weeks back. It briefly introduces some important business benefits of APIs, and why upcoming business builders should be learning and using this technology.

API Academy

API Academy provides free online lessons and in-person consulting services that cover essential API techniques. This useful resource provides business managers, interface designers and enterprise architects with some important knowledge. The growing repository is one you will want to revisit every so often.

API Evangelist’s Whitepapers

Kin Lane’s white papers on the basics of APIs, their history, and how to deploy and manage them are a useful collection of resources. He has also just released one on API design that is worth checking out.

In this article, Kristopher Sandoval highlights the five most common event-driven methods for data push. These methods all have their pros and cons, and work best based on your particular use cases.

The internet is a system of communication, and as such, the relationship between client and server, as well as server to server, is one of the most oft-discussed and hotly contested concepts. Event-driven architecture is a methodology of defining these relationships, and creating systems within a specific set of relationships that allow for extensive functionality.

In this piece, we’re going to discuss 5 common event-driven methods — WebSockets, WebHooks, REST Hooks, Pub-Sub, and Server Sent Events. We’ll define what they fundamentally are and do, and how API providers go about using them. Additionally, we’ll provide some pros and cons on each to make choosing a solution for your platform easy and intuitive.

Make realtime push behavior delegable – the reason there isn’t a realtime push CDN yet is that the standards and practices necessary for delegating to a third party in a transparent way are not yet established.

As your API scales, a multi-tiered architecture will become inevitable, so make sure you have sufficient plans for scale.

Make sure you have multiple layers of redundancy and reliability to maintain performance, should something go wrong.

Challenges

In push architectures, one of the main challenges is delivering data reliably to receivers. There are many reasons for this:

Most push architectures (including those developed by our company) use the publish-subscribe messaging pattern, which is unreliable.

TCP’s built-in reliability is not enough to ensure delivery, as modern network sessions span multiple connections.

The last point trips up developers new to this problem space, who may wish for push systems to provide “guaranteed delivery.” If only it were that simple. Like many challenges in computer science, there isn’t a best answer, just trade-offs you are willing to accept.

Below we’ll go over the various issues and recommended practices around reliable push.

Publish-subscribe is unreliable by design

If you rely on a publish-subscribe broker as your source of truth then you’ll most likely end up with an application that loses data. Publishers might crash before having a chance to publish, and brokers don’t guarantee delivery anyway.

The only way a publish-subscribe broker could conceivably deliver data 100% reliably would be to do one of two things:

Implement publisher back pressure based on the slowest subscriber.

Store all published messages in durable storage for all of time.

Both of these options are terrible, so in general nobody does them. Instead, what you see are various degrees of best-effort behavior, for example large queues, or queues with a time limit (e.g. messages deliverable for N hours/days).

This isn’t to say publish-subscribe is unsuitable for push; in fact it’s an essential tool in your toolbox. Just know that it’s only one piece of the puzzle. The ZeroMQ guide has a section on publish-subscribe that’s worth reading even if you’re not using ZeroMQ.
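The “best-effort” behavior described above, large but bounded queues that drop rather than block, can be illustrated in a few lines. This is a deliberately tiny sketch of the trade-off, not a real broker:

```python
from collections import deque


class BestEffortQueue:
    """A bounded per-subscriber queue. When a slow consumer falls
    behind, the oldest messages are silently dropped rather than
    applying back pressure to publishers or storing data forever."""

    def __init__(self, maxlen=3):
        self.buf = deque(maxlen=maxlen)

    def publish(self, msg):
        # deque with maxlen evicts the oldest entry when full.
        self.buf.append(msg)

    def drain(self):
        """Consumer catches up: take everything currently queued."""
        out = list(self.buf)
        self.buf.clear()
        return out
```

A subscriber that falls five messages behind a three-slot queue simply never sees the first two; this is exactly the data loss the article warns about.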

TCP is about flow control, not reliability

In the modern world of mobile devices and cloud services, a single user session can easily span multiple client/server IP address pairs. TCP will retransmit data as necessary within a single connection, but it won’t retransmit across connections, for example if a client has to reconnect. Further, even if IP addresses are stable, TCP doesn’t attempt retransmissions forever. It tries really hard for a while but will eventually give up if a peer is unresponsive.

In practice, this means TCP alone isn’t enough for reliable data transfer, and you’ll need to layer a reliable protocol on top of it.

If you learned long ago that TCP is for reliable communication (compared to, say, UDP), then you may be confused by this and wonder why developers continue to build systems using TCP. Well, TCP provides other useful features, such as back pressure, ordered delivery, and payloads of arbitrary size, so it is still an incredibly useful protocol even if we can’t depend on it for reliable transmission.

Error handling on the receiver

With request/response interactions, you can retry if a request fails, or just bubble errors up to the UI. With push, lost data looks about the same as no data, and users are left wondering why their screens aren’t updating.

What’s really obnoxious about this problem is that it’s not enough to just throw an error when a connection to the server is lost. You can lose data even when a connection seems to be working fine, due to some deeper failure within the server. This means the receiver usually cannot rely on the existence of a connection or subscription to be very meaningful, and will need to discover errors some other way.

Reliable transmission fundamentals

Before we get into the common practices, let’s go over some basics. Reliably exchanging data over a network requires two things:

An ability to retransmit data, potentially for a long time if loss is not tolerable.

An entity responsible for initiating a retransmission.

For example, when a mail server sends email to another mail server using SMTP, the sending mail server owns the data to be retransmitted and also acts as the responsible entity.

The entity responsible for initiating a retransmission doesn’t have to be the sender, though. For example, in web architectures, it’s common for a server to attempt to push data to a receiver, but if that fails then it’s up to the receiver to query the server for the missed data. In this case the receiver is the responsible entity.

Before you can build a reliable system, you need to determine whether the sender or receiver should be the responsible entity. This usually comes down to which side should care more about the transmission.

Recommended practices

Alright, so how can applications ensure data is reliably delivered?

Use a regular database as a source of truth. If pushed data is important, the first thing you should do is write it to durable storage (on disk, with no automatic expiration), before kicking off any push mechanisms. This way if there’s a problem during delivery, the data can later be recovered.

The receiver should be the responsible entity. Receivers usually come and go, and it makes more sense for receivers to keep track of what they need rather than for servers to keep track of what has been sent.

Have a way to sync with the server. Receivers should be able to ask the server for new data, independent of any realtime push mechanisms. This way, publish workers can crash, queues can max out, and connections can get dropped, yet receivers can always catch up by making requests for updates.

Architect your system initially without push. Related to the previous recommendations, if you build a system where your data lives in durable storage, and receivers have a way to ask for incremental updates, then it will be easy to add a push layer on top. This may seem like a boring approach, but boring works.

Consider sending hints. This is where you push indications of new data rather than the actual data. Receivers react to hints by making requests for updates. See Andyet’s article about this approach. Hints are straightforward and work well if there aren’t too many recipients for the same data.

Include sequencing information. If data may arrive from potentially two sources (push subscription or request for updates), or data is a stream of deltas, then receivers will need a way to detect for out-of-sequence data. This can be done by including a sequence or version ID in the data payloads. If there is a sequencing problem when receiving a stream of deltas, then the receiver will need to make a request to the server for updates.

Periodically sync with the server. If data is sent infrequently, then receivers may want to periodically check in with the server to ensure nothing has been missed. For a typical web app, this interval could be something like a few minutes.
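Several of the practices above (receiver as the responsible entity, sequencing information, and a sync path) compose naturally into one small receiver. The sketch below uses a hypothetical `fetch_snapshot` callable standing in for the request/response “catch up” endpoint:

```python
class Receiver:
    """Apply pushed deltas in order; on a sequence gap, fall back to
    a full sync with the server instead of trusting the push path."""

    def __init__(self, fetch_snapshot):
        self.fetch_snapshot = fetch_snapshot  # returns (state, version)
        self.version = 0
        self.state = []

    def on_push(self, seq, delta):
        if seq == self.version + 1:
            # Exactly the next delta: apply it.
            self.state.append(delta)
            self.version = seq
        elif seq > self.version + 1:
            # Gap detected: one or more pushes were lost. Recover by
            # asking the server for current state (the receiver, not
            # the sender, initiates the retransmission).
            self.state, self.version = self.fetch_snapshot()
        # seq <= self.version: duplicate or stale push; ignore it.
```

The same `fetch_snapshot` call doubles as the periodic sync: running it on a timer guarantees the receiver converges even if gap detection itself fails.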

AsyncAPI Specification

AsyncAPI Specification codifies and standardizes the documentation and definitions of asynchronous APIs.

According to the AsyncAPI site, “the AsyncAPI specification allows you to create machine-readable definitions of your asynchronous APIs. It’s protocol-agnostic, so you can use it for APIs that work over MQTT, AMQP, WebSockets, STOMP, etc. The specification is heavily inspired on OpenAPI (fka Swagger) and it’s designed to maintain as much compatibility as possible with it.”

The AsyncAPI Specification defines a set of files required to describe such an API, which can be used to create utilities, such as documentation, integration and/or testing tools.
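To make the shape of such a definition concrete, here is a minimal sketch of an AsyncAPI 2.0 document (the channel and payload names are invented for illustration; consult the AsyncAPI specification for the full schema):

```yaml
asyncapi: '2.0.0'
info:
  title: Account Service
  version: '1.0.0'
channels:
  user/signedup:
    subscribe:
      message:
        payload:
          type: object
          properties:
            email:
              type: string
              format: email
```

As with OpenAPI, the machine-readable definition can then drive documentation, code generation, and testing tools.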

REST Hooks

Codified by the team at Zapier, REST Hooks (RESTful WebHooks) is a collection of patterns that treat webhooks like subscriptions. These subscriptions are manipulated via a REST API just like any other resource. REST Hooks support subscription, notification, and publication through a RESTful interface. This retains the RESTful patterns by using HTTP GET, PUT, POST and DELETE to act on subscription, notification, and publication resources. In other words, the REST Hook pattern adheres to standard REST definitions.

There are 4 primary components for creating REST Hooks:

Create a subscription – the webhook needs to be tied to a subscriber URL in order to escape polling

Send the hook – the API sends the data via a POST

Unsubscribing – a delete call is made to unsubscribe

Setting up a global URL – you can create a polling URL to create a permanent location from where to draw data
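Because subscriptions are ordinary REST resources, the subscribe and unsubscribe steps reduce to building plain HTTP requests. The endpoint path and field names below are illustrative (though `target_url` and `event` mirror Zapier’s documented convention):

```python
import json


def subscribe_request(target_url, event):
    """Build the request that creates a subscription resource:
    POST a target URL plus the event type you want pushed to it."""
    body = json.dumps({"target_url": target_url, "event": event})
    return ("POST", "/api/hooks", body)


def unsubscribe_request(hook_id):
    """Deleting the subscription resource stops future notifications."""
    return ("DELETE", "/api/hooks/" + str(hook_id), None)
```

The API would respond to the POST with the new resource’s id, which the consumer stores for the later DELETE.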


There are two distinct categories for realtime infrastructure: realtime API IaaS and realtime app IaaS. This article will focus on realtime app infrastructure as it relates to services that facilitate the pushing of data to browsers and clients.




According to Smartbear, “Microservice architecture, or simply microservices, is a distinctive method of developing software systems that has grown in popularity in recent years. In fact, even though there isn’t a whole lot out there on what it is and how to do it, for many developers it has become a preferred way of creating enterprise applications. Thanks to its scalability, this architectural method is considered particularly ideal when you have to enable support for a range of platforms and devices—spanning web, mobile, Internet of Things, and wearables—or simply when you’re not sure what kind of devices you’ll need to support in an increasingly cloudy future.”

More simply, microservice architecture is an architectural style that structures an application as a collection of loosely coupled services that each facilitate business services. It not only enables the continuous delivery and deployment of complex applications, but it allows you to scale applications by service.

Adding Realtime Communication to Microservices

In a microservice environment, you can add a realtime reverse proxy instance (like Pushpin) to listen for instant updates from other microservices without the need for a centralized message broker. Each microservice gets its own realtime proxy instance, and microservices communicate with each other via your organization’s own API contracts rather than a vendor-specific mechanism.
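Pushpin coordinates with backends via the GRIP protocol: a backend instructs the proxy to hold a connection open by returning special headers, and later pushes data by POSTing to the proxy’s publish endpoint. The helpers below sketch both halves (header and payload shapes follow the GRIP convention as I understand it; check the Pushpin documentation before relying on them, and note the publish port is commonly 5561):

```python
import json


def hold_stream_response(channel):
    """Headers a backend returns through Pushpin so the proxy holds
    the client connection open, subscribed to the given channel."""
    return {
        "Content-Type": "text/plain",
        "Grip-Hold": "stream",
        "Grip-Channel": channel,
    }


def publish_payload(channel, data):
    """JSON body for a POST to Pushpin's publish endpoint; Pushpin
    relays the content to every held connection on the channel."""
    return json.dumps({
        "items": [
            {
                "channel": channel,
                "formats": {"http-stream": {"content": data + "\n"}},
            }
        ]
    })
```

Because the hold instruction travels in an ordinary HTTP response, the backend itself stays stateless; Pushpin owns all the long-lived connections.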

Resources

Messaging Patterns for Event-Driven Microservices by Fred Melo – “In a microservices architecture, each microservice is designed as an atomic and self-sufficient piece of software. Implementing a use case will often require composing multiple calls to these single responsibility, distributed endpoints. Although synchronous request-response calls are required when the requester expects an immediate response, integration patterns based on eventing and asynchronous messaging provide maximum scalability and resiliency. Some of the world’s most scalable architectures such as Linkedin and Netflix are based on event-driven, asynchronous messaging.”

Microservice Architecture Fortified for Real-Time Communications by Wolfram Hempel – “But this flexibility comes at a price: Enterprise-scale microservice architectures quickly become highly complex. Load-balancing clusters, routing requests to endpoints, orchestrating distributed messaging, sharding storage layers, and facilitating concurrent read and write access are just some of the many challenges. With the increasing move from request-response workflows to streaming real-time data — be it for financial price distribution, social messaging, collaboration apps, or Internet of Things (IoT) data aggregation — we have to rethink the way our services interact and share resources. If you’re responsible for implementing and supporting real-time communications and collaboration, strengthening your backend’s spine will help you create more dynamic, scalable, and manageable deployment strategies.”

Microservices: Architecture for the Real-time Organization by Kevin Webber – “The real-time organization is responsive to change. Real-time organizations architect their systems to evolve naturally as they adapt to the competitive landscape around them. At the core of real-time organizations are microservices. The microservice architecture (MSA) empowers independent teams within large organizations to move at the pace of startups, freeing them from the constraints of “design by committee” and other architectural anti-patterns that ground productivity within the enterprise to a halt. We explore all of the relevant patterns of microservices architecture including domain-driven design (DDD), circuit breaker, data pump, saga pattern, distributed transaction, async messaging, etc.”

Pushpin is the open source reverse proxy for the realtime web. One of the benefits of Pushpin functioning as a proxy is that it can be combined with an API management system, such as Mashape’s Kong. Kong is the open source management layer for APIs. To use Kong with Pushpin, simply chain the two together on the same network path.
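As a sketch of one such chaining, with Pushpin placed in front: Pushpin's routes file (conventionally /etc/pushpin/routes) can forward all traffic to Kong's proxy listener. The host and port below assume Kong's default proxy port of 8000 and are illustrative.

```
# Pushpin routes file (e.g. /etc/pushpin/routes):
# forward everything to Kong's proxy listener
* localhost:8000
```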

Why would you want to use an API management system with Pushpin? Realtime web services have many of the same concerns as request/response web services, and it can be helpful to centrally manage those aspects.

Proxy order

Since both Pushpin and Kong are reverse proxies, you may wonder in which order they should be placed. As it turns out, both ways work, with different trade-offs.

If Pushpin is placed in front of Kong:

Kong is able to inspect/manipulate any instructions the backend sends to Pushpin.

Pushpin can convert WebSocket connections into a series of HTTP requests, allowing Kong to manage a WebSocket API.

Kong is unable to see data published through Pushpin.

If Kong is placed in front of Pushpin:

Kong sees data published through Pushpin.

WebSockets won’t work.

Kong's logging of HTTP streaming requests will be delayed until the connection closes.

If you’re using the Fanout Cloud version of Pushpin, then you don’t have much choice; it’ll always come first. Fortunately, this is arguably the preferred order. However, if you are running Pushpin and Kong on your own servers, then you can choose either order.

Authentication

API consumer authentication is one of the most valuable features of Kong. It can be used to provide protection to an API without having to implement authentication in the backend.

Suppose we have an HTTP streaming endpoint sitting behind both Kong and Pushpin, with Kong handling consumer authentication before requests reach the backend.
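As a hedged sketch of what the backend sees in this setup: Kong's key-auth plugin rejects unauthenticated requests outright and forwards the authenticated consumer's identity to the upstream in the X-Consumer-Username header, so the backend can skip credential checks and simply return GRIP instructions for Pushpin. The handler shape and the channel naming scheme here are illustrative.

```python
def handle_stream_request(request_headers):
    """Backend handler that trusts Kong for authentication.

    Kong's auth plugins set X-Consumer-Username on requests they
    have already authenticated; the backend only reads it. The GRIP
    headers instruct Pushpin to hold the connection open as a stream.
    """
    consumer = request_headers.get("X-Consumer-Username")
    if consumer is None:
        # Shouldn't happen if Kong fronts this service, but fail closed.
        return 401, {}, "forbidden\n"
    response_headers = {
        "Grip-Hold": "stream",                     # hold connection open
        "Grip-Channel": "events-%s" % consumer,    # per-consumer channel (illustrative)
    }
    return 200, response_headers, "[stream opened for %s]\n" % consumer

status, headers, body = handle_stream_request({"X-Consumer-Username": "alice"})
```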

WebSockets

Pushpin is able to convert WebSocket connections into a series of HTTP requests. This allows backend services (e.g. a service built with django-grip) to drive WebSocket connections without needing direct support for WebSockets. With this mode enabled, Kong is able to proxy the converted HTTP traffic between Pushpin and the backend.
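To make the conversion concrete, here is a simplified parser for the event encoding Pushpin uses in WebSocket-over-HTTP requests: each event is a type name (OPEN, TEXT, CLOSE, and so on), an optional content length in hex, and the content, delimited by CRLF. This is a sketch for illustration, not a complete implementation of the protocol.

```python
def parse_events(body):
    """Parse WebSocket-over-HTTP encoded events from a request body.

    Each event is encoded as: TYPE [" " size-in-hex] CRLF [content CRLF].
    Returns a list of (type, content) tuples.
    """
    events = []
    pos = 0
    while pos < len(body):
        eol = body.index(b"\r\n", pos)
        header = body[pos:eol].decode("utf-8")
        pos = eol + 2
        if " " in header:
            etype, size_hex = header.split(" ", 1)
            size = int(size_hex, 16)
            content = body[pos:pos + size]
            pos += size + 2  # skip content and its trailing CRLF
        else:
            etype, content = header, b""
        events.append((etype, content))
    return events

# e.g. parse_events(b"TEXT 5\r\nhello\r\n") yields [("TEXT", b"hello")]
```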

This means you can use Kong to protect a WebSocket API the same way you would a normal HTTP API: the client speaks WebSocket to Pushpin, while Kong and the backend see only the converted HTTP traffic.

Suppose you have a WebSocket API living at ws://api.example.com/events/ that clients normally connect to directly.

With the tcplog plugin added to the API, any requests, including those that become HTTP streaming connections or get converted into WebSocket events, will be logged along with information about the authenticated consumer.

Conclusion

All APIs can benefit from API management, realtime or not. Pushpin makes it easy to create realtime APIs that can be managed by API management systems. If you are designing or building a realtime API infrastructure, definitely check out Pushpin and Kong. They are a great pair.

Blog Posts

Realtime API Hub

The Hub’s mission is to centralize realtime API information and provide a foundation for others to build their own APIs. This is proudly maintained by the team at Fanout.io and other individual contributors.