When it comes to APIs, change isn’t popular. While software developers are used to iterating quickly and often, API developers lose that flexibility as soon as even one user starts consuming their interface. Many of us are familiar with how the Unix operating system evolved. In 1994, The Unix-Haters Handbook was published containing a long list of grievances about the software—everything from overly cryptic command names that were optimized for Teletype machines, to irreversible file deletion, to unintuitive programs with far too many options. Over twenty years later, an overwhelming majority of these complaints are still valid across the dozens of modern derivatives. Unix had become so widely used that changing its behavior would have had challenging implications. For better or worse, it established a contract with its users that defined how Unix interfaces behave.

Similarly, an API represents a contract for communication that can’t be changed without considerable cooperation and effort. Because so many businesses rely on Stripe as infrastructure, we’ve been thinking about these contracts since Stripe started. To date, we’ve maintained compatibility with every version of our API since the company’s inception in 2011. In this article, we’d like to share how we manage API versions at Stripe.

Code written to integrate with an API has certain inherent expectations built into it. If an endpoint returns a boolean field called verified to indicate the status of a bank account, a user might write code like this:

if bank_account[:verified]
  # ...
else
  # ...
end

If we later replaced the bank account’s verified boolean with a status field that might include the value verified (like we did back in 2014), the code will break because it depends on a field that no longer exists. Changes like this are backwards-incompatible, and we avoid making them. Fields that were present before should stay present, and fields should keep the same type and name. Not all changes are backwards-incompatible, though; for example, it’s safe to add a new API endpoint, or to add a new field to an existing API endpoint that was never present before.
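For illustration, an integration updated for the newer shape might branch on the status value instead (the hash below is a stand-in for an API response, not real output):

```ruby
# Hypothetical integration code against the newer API shape, where the
# bank account exposes a status field instead of a verified boolean.
bank_account = { status: "verified" }

if bank_account[:status] == "verified"
  # proceed with the verified flow
  result = :payouts_enabled
else
  # prompt the user to complete verification
  result = :needs_verification
end
```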

With enough coordination, we might be able to keep users apprised of changes that we’re about to make and have them update their integrations, but even if that were possible, it wouldn’t be very user-friendly. Like a connected power grid or water supply, after hooking it up, an API should run without interruption for as long as possible.

Our mission at Stripe is to provide the economic infrastructure for the internet. Just like a power company shouldn’t change its voltage every two years, we believe that our users should be able to trust that a web API will be as stable as possible.

API versioning schemes

A common approach to allow forward progress in web APIs is to use versioning. Users specify a version when they make requests and API providers can make the changes they want for their next version while maintaining compatibility in the current one. As new versions are released, users can upgrade when it’s convenient for them.

This is often seen as a major versioning scheme with names like v1, v2, and v3 that are passed as a prefix to a URL (like /v1/widgets) or through an HTTP header like Accept. This can work, but has a major downside: the changes between versions are so big and so impactful for users that upgrading feels almost as painful as re-integrating from scratch. It’s also not a clear win, because a class of users will be unwilling or unable to upgrade and get trapped on old API versions. Providers then face a difficult choice: retire old API versions and by extension cut those users off, or maintain the old versions forever at considerable cost. While having providers maintain old versions might seem at first glance to be beneficial to users, they’re also paying indirectly in the form of reduced progress on improvements. Instead of working on new features, engineering time is diverted to maintaining old code.

At Stripe, we implement versioning with rolling versions that are named with the date they’re released (for example, 2017-05-24). Although backwards-incompatible, each one contains a small set of changes that make incremental upgrades relatively easy so that integrations can stay current.

The first time a user makes an API request, their account is automatically pinned to the most recent version available, and from then on, every API call they make is assigned that version implicitly. This approach guarantees that users don’t accidentally receive a breaking change and makes initial integration less painful by reducing the amount of necessary configuration. Users can override the version of any single request by manually setting the Stripe-Version header, or upgrade their account’s pinned version from Stripe’s dashboard.

Some readers might have already noticed that the Stripe API also defines major versions using a prefixed path (like /v1/charges). Although we reserve the right to make use of this at some point, it’s not likely to change for some time. As noted above, major version changes tend to make upgrades painful, and it’s hard for us to imagine an API redesign that’s important enough to justify this level of user impact. Our current approach has been sufficient for almost a hundred backwards-incompatible upgrades over the past six years.

Versioning under the hood

Versioning is always a compromise between improving developer experience and the additional burden of maintaining old versions. We strive to achieve the former while minimizing the cost of the latter, and have implemented a versioning system to help us with it. Let’s take a quick look at how it works.

Every possible response from the Stripe API is codified by a class that we call an API resource. API resources define their possible fields using a DSL:
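The real DSL is internal, but a minimal sketch of the idea (class, method, and field names here are assumptions, not Stripe’s actual code) might look like:

```ruby
# A minimal sketch of an API resource DSL: each resource class declares
# the fields it can render and their types.
class APIResource
  def self.fields
    @fields ||= {}
  end

  # Declare a field with its type.
  def self.field(name, type)
    fields[name] = type
  end
end

class ChargeResource < APIResource
  field :id,     :string
  field :amount, :integer
  field :status, :string
end
```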

API resources are written so that the structure they describe is what we’d expect back from the current version of the API. When we need to make a backwards-incompatible change, we encapsulate it in a version change module which defines documentation about the change, a transformation, and the set of API resource types that are eligible to be modified:
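A sketch of such a module (the DSL methods description, resource, and define_change are illustrative assumptions; the example mirrors the Event#request change discussed later in this article):

```ruby
# Illustrative sketch of a version change module, not Stripe's internal code.
class VersionChange
  class << self
    attr_reader :doc, :resources

    def description(text)
      @doc = text
    end

    def resource(*types)
      @resources = types
    end

    def define_change(&block)
      @change = block
    end

    def apply(data)
      @change.call(data)
    end
  end
end

# Walking backwards past this change, Event#request collapses from a
# subobject back into a plain string ID.
class CollapseEventRequest < VersionChange
  description "Event#request is rendered as a string ID again."
  resource :event

  define_change do |event|
    event[:request] = event[:request][:id]
    event
  end
end
```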

Version changes are written so that they expect to be applied automatically, backwards from the current API version and in order. Each version change assumes that although newer changes may exist ahead of it, the data it receives will look the same as when the change was originally written.

When generating a response, the API initially formats data by describing an API resource at the current version, then determines a target API version from one of:

A Stripe-Version header if one was supplied.

The version of an authorized OAuth application if the request is made on the user’s behalf.

The user’s pinned version, which is set on their very first request to Stripe.

It then walks back through time and applies each version change module that it finds along the way until the target version is reached.
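In sketch form, the walk-back could look like the following. Each transformation converts a response from the shape used after its version to the shape used before it; the dates and field names are placeholders, not real Stripe versions:

```ruby
# Changes keyed by release date; newest changes are applied first when
# walking backwards toward an older target version.
VERSION_CHANGES = {
  "2017-05-25" => ->(data) { data.merge(request: data[:request][:id]) },
  "2016-07-06" => ->(data) { data.reject { |k, _| k == :pending_webhooks } }
}

def render_at_version(data, target_version)
  # Newest change first; stop once we reach the target version.
  VERSION_CHANGES.sort.reverse.each do |version, change|
    break if version <= target_version
    data = change.call(data)
  end
  data
end
```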

Requests are processed by version change modules before returning a response.

Version change modules keep older API versions abstracted out of core code paths. Developers can largely avoid thinking about them while they’re building new products.

Changes with side effects

Most of our backwards-incompatible API changes will modify a response, but that’s not always the case. Sometimes a more complicated change is necessary which leaks out of the module that defines it. We assign these modules a has_side_effects annotation and the transformation they define becomes a no-op:
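A self-contained sketch of the idea (names and dates are placeholders; in the real code the request’s target version is ambient rather than passed explicitly):

```ruby
# A change flagged as having side effects: its transformation is a
# no-op, and core code paths instead ask whether the change is active.
module LegacyTransfers
  def self.has_side_effects?
    true
  end

  # No-op: the behavioral difference lives in the core code path.
  def self.apply(data)
    data
  end
end

module VersionChanges
  RELEASE_DATES = { LegacyTransfers => "2017-04-06" }

  # Active when the request's target version predates the change.
  def self.active?(change, target_version)
    target_version < RELEASE_DATES.fetch(change)
  end
end
```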

Elsewhere in the code a check will be made to see whether they’re active:

VersionChanges.active?(LegacyTransfers)

This reduced encapsulation makes changes with side effects more complex to maintain, so we try to avoid them.

Declarative changes

One advantage of self-contained version change modules is that they can declare documentation describing what fields and resources they affect. We can also reuse this to rapidly provide more helpful information to our users. For example, our API changelog is programmatically generated and receives updates as soon as our services are deployed with a new version.

We also tailor our API reference documentation to specific users. It notices who is logged in and annotates fields based on their account API version. Here, we’re warning the developer that there’s been a backwards-incompatible change in the API since their pinned version. The request field of Event was previously a string, but is now a subobject that also contains an idempotency key (produced by the version change that we showed above):

Minimizing change

Providing extensive backwards compatibility isn’t free; every new version is more code to understand and maintain. We try to keep what we write as clean as possible, but given enough time, dozens of version checks that can’t be cleanly encapsulated will litter the project, making it slower, less readable, and more brittle. We take a few measures to avoid incurring this sort of expensive technical debt.

Even with our versioning system available, we do as much as we can to avoid using it by trying to get the design of our APIs right the first time. Outgoing changes are funneled through a lightweight API review process where they’re written up in a brief supporting document and submitted to a mailing list. This gives each proposed change broader visibility throughout the company, and improves the likelihood that we’ll catch errors and inconsistencies before they’re released.

We try to be mindful of balancing stagnation and leverage. Maintaining compatibility is important, but even so, we expect to eventually start retiring our older API versions. Helping users move to newer versions of the API gives them access to new features, and simplifies the foundation that we use to build new features.

Principles of change

The combination of rolling versions and an internal framework to support them has enabled us to onboard vast numbers of users and make enormous changes to our API, all while having minimal impact on existing integrations. The approach is driven by a few principles that we’ve picked up over the years. We think it’s important that API upgrades are:

Lightweight. Make upgrades as cheap as possible (for users and for ourselves).

First-class. Make versioning a first-class concept in your API so that it can be used to keep documentation and tooling accurate and up-to-date, and to generate a changelog automatically.

Fixed-cost. Ensure that old versions add only minimal maintenance cost by tightly encapsulating them in version change modules. Put another way, the less thought that needs to be applied towards old behavior while writing new code, the better.

While we’re excited by the debate and developments around REST vs. GraphQL vs. gRPC, and—more broadly—what the future of web APIs will look like, we expect to continue supporting versioning schemes for a long time to come.

We recently released a new and improved version of Connect, our suite of tools designed for platforms and marketplaces. Stripe’s design team works hard to create unique landing pages that tell a story for our major products. For this release, we designed Connect’s landing page to reflect its intricate, cutting-edge capabilities while keeping things light and simple on the surface.

In this blog post, we’ll describe how we used several next-generation web technologies to bring Connect to life, and walk through some of the finer technical details (and excitement!) on our front-end journey.

CSS Grid Layout

Earlier this year, three major browsers (Firefox, Chrome, and Safari) almost simultaneously shipped their implementations of the new CSS Grid Layout module. This specification provides authors with a two-dimensional layout system that is easy to use and incredibly powerful. Connect’s landing page relies on CSS Grid pretty much everywhere, making some seemingly tricky designs almost trivial to achieve. As an example, let’s hide the header’s content and focus on its background:

Historically, we’ve created these background stripes (as we obviously call them) by using absolute positioning to precisely place each stripe on the page. This approach works, but fragile positioning often results in subtle issues: for example, rounding errors can cause a 1px gap between two stripes. CSS stylesheets also quickly become verbose and hard to maintain, since media queries need to be more complex to account for background differences at various viewport sizes.

With CSS Grid, pretty much all our previous issues go away. We simply define a flexible grid and place the stripes in their appropriate cells. Firefox has a handy grid inspector allowing you to visualize the structure of your layout. Let’s see how it looks:

We’ve highlighted three stripes and removed the tilt effect to make things easier to understand. Here’s what the CSS for our grid looks like:
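The production stylesheet is more elaborate, but a simplified sketch of such a grid (the class names, track counts, and sizes are illustrative) could look like:

```css
.stripes {
  display: grid;
  /* A flexible grid: 5 rows sharing the header's height, 10 equal columns. */
  grid: repeat(5, 20vh) / repeat(10, 1fr);
}

/* Place each stripe in its cell; no absolute positioning, no 1px gaps. */
.stripe:nth-child(1) {
  grid-row: 3;
  grid-column: 1 / span 3;
}

.stripe:nth-child(2) {
  grid-row: 4;
  grid-column: 4 / span 2;
}
```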

We can then just transform the entire .stripes container to produce the tilted background:

header .stripes {
  transform: skewY(-12deg);
  transform-origin: 0;
}

And voilà! CSS Grid might look intimidating at first sight, as it comes with an unusual syntax and many new properties and values, but the mental model is actually very simple. And if you’re used to Flexbox, you’re already familiar with the Box Alignment module, which means you can reuse the properties you know and love, such as justify-content and align-items.

CSS 3D

The landing page’s header displays several cubes as a visual metaphor for the building blocks that compose Connect. These floating cubes rotate in 3D at random speeds (within a certain range) and benefit from the same light source, which dynamically illuminates the appropriate faces:

These cubes are simple DOM elements that are generated and animated in JavaScript. Each of them instantiates the same HTML template:
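The template is roughly the following: one element per face, plus a shadow (the class names are illustrative):

```html
<template id="cube-template">
  <div class="cube">
    <div class="shadow"></div>
    <div class="sides">
      <div class="back"></div>
      <div class="top"></div>
      <div class="left"></div>
      <div class="front"></div>
      <div class="bottom"></div>
      <div class="right"></div>
    </div>
  </div>
</template>
```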

Pretty straightforward. We can now easily turn these blank and empty elements into a three-dimensional shape. Thanks to 3D transforms, adding perspective and moving the sides along the z-axis is fairly natural:
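A sketch of that assembly for a hypothetical 100px cube: each face is rotated into place, then pushed out from the center by half the cube’s size along its local z-axis:

```css
.cube {
  /* Give the scene depth. */
  perspective: 800px;
}

.cube .sides {
  /* Let children be positioned in the same 3D space. */
  transform-style: preserve-3d;
}

/* Rotate each face into orientation, then push it out by 50px. */
.cube .front  { transform: rotateY(  0deg) translateZ(50px); }
.cube .back   { transform: rotateY(180deg) translateZ(50px); }
.cube .left   { transform: rotateY(-90deg) translateZ(50px); }
.cube .right  { transform: rotateY( 90deg) translateZ(50px); }
.cube .top    { transform: rotateX( 90deg) translateZ(50px); }
.cube .bottom { transform: rotateX(-90deg) translateZ(50px); }
```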

While CSS makes it trivial to model the cube, it doesn’t provide advanced animation features like dynamic shading. The cube’s animation instead relies on requestAnimationFrame to calculate and update each side at any point in the rotation. There are three things to determine on every frame:

Visibility. There are never more than three faces visible at the same time, so we can avoid any computations and expensive repaints for hidden sides.

Transformation. Each visible side of the cube needs to be transformed based on its initial rotation, current animation state, and the speed of each axis.

Shading. While CSS lets you position elements in a three-dimensional space, there are no traditional concepts from 3D environments (e.g. light sources). In order to mimic a 3D environment, we can render a light source by progressively darkening the sides of the cube as they move away from a particular point.

There are other considerations to take into account (such as improving performance using requestIdleCallback in JavaScript and backface-visibility in CSS), but these are the main pillars behind the logic of the animation.

We can calculate the visibility and transformation of each side by continually tracking their state and updating them with basic math operations. With the help of pure functions and ES2015 features such as template literals, things become even easier. Here are two short excerpts of JavaScript code to compute and define the current transformation:
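In the same spirit, a hedged sketch of that computation (the state shape and function names are assumptions, not the production code):

```javascript
// Compute a side's current rotation from its base rotation, per-axis
// speeds, and the current frame number.
const computeRotation = (side, frame) => ({
  x: side.baseRotation.x + frame * side.speed.x,
  y: side.baseRotation.y + frame * side.speed.y,
  z: side.baseRotation.z + frame * side.speed.z
});

// Express the rotation as a CSS transform string via a template literal.
const toTransform = ({ x, y, z }) =>
  `rotateX(${x}deg) rotateY(${y}deg) rotateZ(${z}deg)`;
```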

The most challenging piece of the puzzle is how to properly calculate shading for each face of the cube. In order to simulate a virtual light source at the center of the stage, we can gradually increase each face’s lighting effect as they approach the center point—on all axes. Concretely, that means we need to calculate the luminosity and color for each face. We’ll perform this calculation on every frame by interpolating the base color and the current shading factor.
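A simplified sketch of that interpolation, assuming a 0–1 shading factor that grows as a face turns away from the light (the helpers are illustrative, not Stripe’s actual code):

```javascript
// Darken one RGB channel by the shading factor (0 = fully lit, 1 = black).
const shadeChannel = (channel, factor) => Math.round(channel * (1 - factor));

// Interpolate the base color toward black on every frame.
const shadeColor = ([r, g, b], factor) => [
  shadeChannel(r, factor),
  shadeChannel(g, factor),
  shadeChannel(b, factor)
];

const toRGB = ([r, g, b]) => `rgb(${r}, ${g}, ${b})`;
```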

Phew! The rest of the code is fortunately far less hairy and mostly composed of boilerplate code, DOM helpers and other elementary abstractions. One last detail that’s worth mentioning is the technique used to make the animations less obtrusive depending on the user’s preferences:

On macOS, when Reduce Motion is enabled in System Preferences, the new prefers-reduced-motion media query will be triggered (only in Safari for now), and all decorative animations on the page will be disabled. The cubes use both CSS animations to fade in and JavaScript animations to rotate. We can cancel these animations with a combination of a @media block and the MediaQueryList Interface:
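On the JavaScript side, a sketch of that check might look like the following (the CSS side would pair it with an @media (prefers-reduced-motion) block that disables the fade-in; function names here are assumptions):

```javascript
// True when the user has asked the OS for reduced motion.
const prefersReducedMotion = () =>
  typeof matchMedia === "function" &&
  matchMedia("(prefers-reduced-motion)").matches;

// Skip the decorative cube rotation entirely when motion is reduced.
function startCubes(animate) {
  if (prefersReducedMotion()) return false;
  animate();
  return true;
}
```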

More CSS 3D!

We use custom 3D-rendered devices across the site to showcase Stripe customers and apps in situ. In our never-ending quest to reduce file sizes and loading time, we considered a few options to achieve a soft three-dimensional look and feel with lightweight and resolution-independent assets. Drawing the devices directly in CSS fulfilled our objectives. Here’s the CSS laptop:

Defining the object in CSS is obviously less convenient than exporting a bitmap, but it’s worth the effort. The laptop above weighs less than 1KB and is easy to tweak. We can add hardware-acceleration, animate any part, make it responsive without losing image quality, and precisely position DOM elements (e.g. other images) within the laptop’s display. This flexibility doesn’t mean giving up on clean code—the markup stays clear, concise and descriptive:
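The markup might be sketched as a handful of named parts (class names are illustrative):

```html
<div class="laptop">
  <span class="shadow"></span>
  <span class="lid"></span>
  <span class="camera"></span>
  <span class="screen"></span>
  <span class="chassis">
    <span class="keyboard"></span>
    <span class="trackpad"></span>
  </span>
</div>
```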

Styling the laptop involves a mix of gradients, shadows and transforms. In many ways, it’s a simple translation of the workflow and concepts you know and use in your graphic tools. For example, here’s the CSS code for the lid:
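A sketch of that translation (the gradient and shadow values are illustrative, not the production ones):

```css
.laptop .lid {
  position: absolute;
  width: 100%;
  height: 100%;
  /* Soft rounded corners, like the real hardware. */
  border-radius: 2% / 3%;
  /* A subtle aluminum sheen: a diagonal gradient plus an inset shadow. */
  background: linear-gradient(45deg, #e5e8ea, #f4f7f7);
  box-shadow: inset 1px -4px 6px rgba(145, 161, 181, 0.3);
}
```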

Choosing the right tool for the job isn’t always obvious—between CSS, SVG, Canvas, WebGL and images the choice isn’t as clear as it used to be. It’s easy to dismiss CSS as something exclusively meant for presenting documents, but it’s just as easy to go overboard and abuse its visual capabilities. No matter the technology you choose, optimize for the user! This means paying close attention to client-side performance, accessibility needs, and fallback options for older browsers.

Web Animations API

The Web Animations API provides the performance and simplicity of CSS @keyframes in JavaScript, making it easy to create smooth, chainable animation sequences. As opposed to the requestAnimationFrame low-level API, you get all the niceties of CSS animations for free, such as native support for cubic-bezier easing functions. As an example, let’s take a look at the code for our keyboard sliding animation:
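A hedged sketch of such a sequence with element.animate() (the offsets, duration, and easing are illustrative, not the production values):

```javascript
// Slide the keyboard up while fading it in; returns the Animation object
// so callers can chain on animation.finished.
function slideKeyboardIn(keyboard) {
  return keyboard.animate(
    [
      { transform: "translateY(40px)", opacity: 0 },
      { transform: "translateY(0)",    opacity: 1 }
    ],
    {
      duration: 400,
      easing: "cubic-bezier(0.2, 0.9, 0.4, 1)",
      fill: "forwards"
    }
  );
}
```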

Nice and simple! The Web Animations API covers the vast majority of typical UI animation needs without requiring a third-party dependency (as a result, the whole Express animation is about 5KB all included: scripts, images, etc.). That being said, it is not a downright replacement for requestAnimationFrame which still provides finer control over your animation and allows you to create effects otherwise impossible, such as spring curves and independent transform functions. If you’re not sure about the right technology to use for your animations, you can probably prioritize your options like this:

CSS transitions. This is the fastest, easiest, and most efficient way to animate. For simple things like hover effects, this is the way to go.

CSS animations. These have the same performance characteristics as CSS transitions: they’re declarative animations that can be highly optimized by the browsers and run on a separate thread. CSS animations are more powerful than transitions and allow for multiple steps and multiple iterations. They’re also more intricate to implement as they require named @keyframes declaration and often need an explicit animation-fill-mode. (And naming things is always one of the hardest things in computer science!)

Web Animations API. This API offers almost the same performance as CSS animations (these animations are driven by the same engine, but JavaScript code will still run on the main thread) and nearly the same ease of use. This should be your default choice for any animation where you need interactivity, random effects, chainable sequences, and anything richer than a purely declarative animation.

requestAnimationFrame. The sky is the limit, but you have to engineer the rocket ship. The possibilities are endless and the rendering methods unlimited (HTML, SVG, canvas—you name it), but it’s a lot more complicated to use and may not perform as well as the previous options.

No matter the technique you use, there are a few simple tips you can apply everywhere to make your animations look significantly better:

Custom curves. You almost never want to use a built-in timing-function like ease-in, ease-out and linear. A nice time-saver is to globally define a number of custom cubic-bezier variables.

Performance. Avoid jank in your animations at all costs. In CSS, this means exclusively animating cheap properties (transform and opacity) and offloading animations to the GPU when you can (using will-change).

Speed. Animations should never get in the way. The very goal of animations is to make a UI feel responsive, harmonious, enjoyable and polished. There’s no hard limit on the exact animation duration as it depends on the effect and the curve, but in most cases you’ll want to stay under 500 milliseconds.

Intersection Observer

The Express animation starts playing automatically as soon as it’s visible in the viewport (you can try it by scrolling the page). This is usually accomplished by observing scroll movements to trigger some callback, but historically this meant adding expensive event listeners, resulting in verbose and inefficient code.

Connect’s landing page uses the new Intersection Observer API which provides a much more robust and performant way to detect the visibility of an element. Here’s how we start playing the Express animation:
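A sketch of an observeScroll helper along those lines (fire the callback once, when the element is fully visible; the helper name comes from the text, the implementation details are assumptions):

```javascript
// Invoke callback exactly once, when element is fully in the viewport.
function observeScroll(element, callback) {
  const observer = new IntersectionObserver(
    (entries) => {
      // intersectionRatio reaches 1 when the element is fully visible.
      if (entries[0].intersectionRatio === 1) {
        callback();
        observer.disconnect(); // trigger once only
      }
    },
    { threshold: 1 }
  );
  observer.observe(element);
  return observer;
}

// Usage: observeScroll(document.querySelector(".express"), startAnimation);
```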

The observeScroll helper simplifies our detection behavior (i.e. when an element is fully visible, the callback is triggered once) without executing anything on the main thread. Thanks to the Intersection Observer API, we’re now one step closer to buttery-smooth web pages!

Polyfills and fallbacks

All these new and shiny APIs are exciting, but they’re unfortunately not yet available everywhere. The common workaround is to bundle polyfills, which feature-test for a particular API and execute only if the API is missing. The obvious downside to this approach is that it penalizes everyone, forever, by forcing them to download the polyfill regardless of whether it’s used. We decided on a different solution:

For JavaScript APIs, Connect’s landing page feature-tests whether a polyfill is necessary and can dynamically insert it in the page. Scripts that are dynamically created and added to the document are asynchronous by default, which means their order of execution isn’t guaranteed. That’s obviously a problem, as a given script may execute before the polyfill it expects. Thankfully, we can fix that by explicitly marking our scripts as not asynchronous, lazy-loading only what’s required:
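A sketch of that loader (the polyfill paths are placeholders):

```javascript
// Insert a script that executes in insertion order: dynamically-created
// scripts are async by default, so we opt out with async = false.
function loadScript(src) {
  const script = document.createElement("script");
  script.async = false; // preserve execution order, polyfills first
  script.src = src;
  document.head.appendChild(script);
}

// Feature-test, load only the missing polyfills, then the main script.
function loadPolyfillsThen(mainSrc) {
  if (!("IntersectionObserver" in window)) {
    loadScript("/polyfills/intersection-observer.js");
  }
  loadScript(mainSrc); // runs after any polyfills above
}
```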

For CSS, feature queries are easy and reliable, and they should likely be your default choice. However, they weren’t the right fit for us, since close to 90% of our visitors already use a Grid-friendly browser (❤️). In our case, it didn’t make sense to penalize the overwhelming majority of our users with hundreds of fallback rules for a small and decreasing percentage of browsers. Given these statistics, we chose to dynamically create and insert a fallback stylesheet when needed:
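A sketch of that detection from JavaScript (the stylesheet path is a placeholder):

```javascript
// Detect Grid support via the CSS.supports() API.
function supportsGrid() {
  return (
    typeof CSS !== "undefined" &&
    !!CSS.supports &&
    CSS.supports("display", "grid")
  );
}

// Insert a fallback stylesheet only for browsers that need it.
// Returns true when the fallback was inserted.
function ensureGridFallback() {
  if (supportsGrid()) return false;
  const link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = "/css/grid-fallback.css";
  document.head.appendChild(link);
  return true;
}
```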

That’s a wrap!

We hope you enjoyed (and maybe even learned) some of these front-end tips! Modern browsers provide us with powerful tools to create rich, fast and engaging experiences, letting our creativity shine on the web. If you’re as excited as we are about the possibilities, we should probably experiment with them together.

Availability and reliability are paramount for all web applications and APIs. If you’re providing an API, chances are you’ve already experienced sudden increases in traffic that affect the quality of your service, potentially even leading to a service outage for all your users.

The first few times this happens, it’s reasonable to just add more capacity to your infrastructure to accommodate user growth. However, when you’re running a production API, not only do you have to make it robust with techniques like idempotency, you also need to build for scale and ensure that one bad actor can’t accidentally or deliberately affect its availability.

Rate limiting can help make your API more reliable in the following scenarios:

One of your users is responsible for a spike in traffic, and you need to stay up for everyone else.

One of your users has a misbehaving script which is accidentally sending you a lot of requests. Or, even worse, one of your users is intentionally trying to overwhelm your servers.

A user is sending you a lot of lower-priority requests, and you want to make sure that it doesn’t affect your high-priority traffic. For example, users sending a high volume of requests for analytics data could affect critical transactions for other users.

Something in your system has gone wrong internally, and as a result you can’t serve all of your regular traffic and need to drop low-priority requests.

At Stripe, we’ve found that carefully implementing a few rate limiting strategies helps keep the API available for everyone. In this post, we’ll explain in detail which rate limiting strategies we find the most useful, how we prioritize some API requests over others, and how we started using rate limiters safely without affecting our existing users’ workflows.

Rate limiters and load shedders

A rate limiter is used to control the rate of traffic sent or received on the network. When should you use a rate limiter? If your users can afford to change the pace at which they hit your API endpoints without affecting the outcome of their requests, then a rate limiter is appropriate. If spacing out their requests is not an option (typically for real-time events), then you’ll need another strategy outside the scope of this post (most of the time you just need more infrastructure capacity).

Our users can make a lot of requests: for example, batch processing payments causes sustained traffic on our API. We find that clients can always (barring some extremely rare cases) spread out their requests a bit more and not be affected by our rate limits.

Rate limiters are amazing for day-to-day operations, but during incidents (for example, if a service is operating more slowly than usual), we sometimes need to drop low-priority requests to make sure that more critical requests get through. This is called load shedding. It happens infrequently, but it is an important part of keeping Stripe available.

A load shedder makes its decisions based on the whole state of the system rather than the user who is making the request. Load shedders help you deal with emergencies, since they keep the core part of your business working while the rest is on fire.

Using different kinds of rate limiters in concert

Once you know rate limiters can improve the reliability of your API, you should decide which types are the most relevant.

At Stripe, we operate 4 different types of limiters in production. The first one, the Request Rate Limiter, is by far the most important one. We recommend you start here if you want to improve the robustness of your API.

Request rate limiter

This rate limiter restricts each user to N requests per second. Request rate limiters are the first tool most APIs can use to effectively manage a high volume of traffic.

Our request rate limiter is constantly triggered. It has rejected millions of requests this month alone, especially for test mode requests where a user inadvertently runs a script that’s gotten out of hand.

Our API provides the same rate limiting behavior in both test and live modes. This makes for a good developer experience: scripts won't encounter side effects due to a particular rate limit when moving from development to production.

After analyzing our traffic patterns, we added the ability to briefly burst above the cap for sudden spikes in usage during real-time events (e.g. a flash sale).

Request rate limiters restrict users to a maximum number of requests per second.

Concurrent requests limiter

Instead of “You can use our API 1000 times a second”, this rate limiter says “You can only have 20 API requests in progress at the same time”. Some endpoints are much more resource-intensive than others, and users often get frustrated waiting for the endpoint to return and then retry. These retries add more demand to the already overloaded resource, slowing things down even more. The concurrent rate limiter helps address this nicely.

Our concurrent request limiter is triggered much less often (12,000 requests this month), and helps us keep control of our CPU-intensive API endpoints. Before we started using a concurrent requests limiter, we regularly dealt with resource contention on our most expensive endpoints caused by users making too many requests at one time. The concurrent request limiter totally solved this.

It is completely reasonable to tune this limiter up so it rejects more often than the request rate limiter. It asks your users to use a different programming model: “fork off X jobs and have them process the queue” rather than “hammer the API and back off when I get an HTTP 429”. Some APIs fit better into one of those two patterns, so feel free to use whichever is most suitable for the users of your API.

Fleet usage load shedder

Using this type of load shedder ensures that a certain percentage of your fleet will always be available for your most important API requests.

We divide up our traffic into two types: critical API methods (e.g. creating charges) and non-critical methods (e.g. listing charges). We have a Redis cluster that counts how many requests we currently have of each type.

We always reserve a fraction of our infrastructure for critical requests. If our reservation number is 20%, then any non-critical request over their 80% allocation would be rejected with status code 503.
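The arithmetic behind the check is simple; a hedged sketch, with the 20% reservation from the example above (numbers and names are illustrative):

```ruby
# Fraction of the fleet reserved for critical requests.
RESERVED_FOR_CRITICAL = 0.2

# Reject a non-critical request once non-critical traffic exceeds its
# share (here, 80%) of total capacity; the caller would return a 503.
def shed_non_critical?(non_critical_in_flight, total_capacity)
  non_critical_in_flight > total_capacity * (1 - RESERVED_FOR_CRITICAL)
end
```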

We triggered this load shedder for a very small fraction of requests this month. By itself, this isn’t a big deal—we definitely had the ability to handle those extra requests. But we’ve had other months where this has prevented outages.

Worker utilization load shedder

Most API services use a set of workers to independently respond to incoming requests in a parallel fashion. This load shedder is the final line of defense. If your workers start getting backed up with requests, then this will shed lower-priority traffic.

This one gets triggered very rarely, only during major incidents.

We divide our traffic into 4 categories:

Critical methods

POSTs

GETs

Test mode traffic

We track the number of workers with available capacity at all times. If a box is too busy to handle its request volume, it will slowly start shedding less-critical requests, starting with test mode traffic. If shedding test mode traffic gets it back into a good state, great! We can start to slowly bring traffic back. Otherwise, it’ll escalate and start shedding even more traffic.

It’s very important that shedding and restoring load happen slowly, or you can end up flapping (“I got rid of test mode traffic! Everything is fine! I brought it back! Everything is awful!”). We used a lot of trial and error to tune the rate at which we shed traffic, and settled on a rate where we shed a substantial amount of traffic within a few minutes.

Only 100 requests were rejected this month from this rate limiter, but in the past it’s done a lot to help us recover more quickly when we have had load problems. This load shedder limits the impact of incidents that are already happening and provides damage control, while the first three are more preventative.

Building rate limiters in practice

Now that we’ve outlined the four basic kinds of rate limiters we use and what they’re for, let’s talk about their implementation. What rate limiting algorithms are there? How do you actually implement them in practice?

We use the token bucket algorithm to do rate limiting. This algorithm has a centralized bucket host where you take tokens on each request, and slowly drip more tokens into the bucket. If the bucket is empty, reject the request. In our case, every Stripe user has a bucket, and every time they make a request we remove a token from that bucket.

We implement our rate limiters using Redis. You can either operate the Redis instance yourself, or, if you use Amazon Web Services, you can use a managed service like ElastiCache.
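To make the algorithm concrete, here’s a minimal in-memory token bucket (a sketch with illustrative names and rates; in production the per-user bucket state lives in Redis, often updated atomically with a Lua script):

```ruby
# Minimal token bucket. Each user gets a bucket with a maximum burst
# size (capacity) that refills at a steady rate; an empty bucket means
# the request is rate limited.
class TokenBucket
  def initialize(capacity:, refill_rate:, clock: -> { Time.now.to_f })
    @capacity = capacity       # maximum burst size, in tokens
    @refill_rate = refill_rate # tokens dripped in per second
    @clock = clock             # injectable clock, handy for testing
    @tokens = capacity.to_f
    @last_refill = clock.call
  end

  # Take one token; returns false (rate limited) if the bucket is empty.
  def take
    refill
    return false if @tokens < 1
    @tokens -= 1
    true
  end

  private

  def refill
    now = @clock.call
    @tokens = [@capacity, @tokens + (now - @last_refill) * @refill_rate].min
    @last_refill = now
  end
end
```

A bucket with `capacity: 2, refill_rate: 1` allows a burst of two requests, then one request per second thereafter.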

Here are important things to consider when implementing rate limiters:

Hook the rate limiters into your middleware stack safely. Make sure that bugs in the rate limiting code (or Redis going down) don’t affect requests. This means catching exceptions at all levels so that any coding or operational errors fail open and the API stays functional.
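As a sketch of what failing open can look like in a Rack-style middleware (the limiter interface here is hypothetical):

```ruby
# Rack-style middleware sketch: if the rate limiter itself fails
# (a bug, or Redis being down), we fail open and let the request
# through. The limiter's allow? interface is illustrative.
class RateLimitMiddleware
  def initialize(app, limiter)
    @app = app
    @limiter = limiter
  end

  def call(env)
    allowed =
      begin
        @limiter.allow?(env)
      rescue StandardError
        true # fail open: a broken limiter must never take the API down
      end
    return [429, { 'Content-Type' => 'text/plain' }, ['Rate limited']] unless allowed
    @app.call(env)
  end
end
```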

Show clear exceptions to your users. Figure out what kinds of exceptions to show your users. In practice, decide between HTTP 429 (Too Many Requests) and HTTP 503 (Service Unavailable), choosing whichever is more accurate for the situation. The message you return should also be actionable.

Build in safeguards so that you can turn off the limiters. Make sure you have kill switches to disable the rate limiters should they kick in erroneously. Having feature flags in place can really help should you need a human escape valve. Set up alerts and metrics to understand how often they are triggering.

Dark launch each rate limiter to watch the traffic they would block. Evaluate if it is the correct decision to block that traffic and tune accordingly. You want to find the right thresholds that would keep your API up without affecting any of your users’ existing request patterns. This might involve working with some of them to change their code so that the new rate limit would work for them.

Conclusion

Rate limiting is one of the most powerful ways to prepare your API for scale. The different rate limiting strategies described in this post are not all necessary on day one; you can gradually introduce them as you realize the need for them.

We recommend the following steps to introduce rate limiting to your infrastructure:

Start by building a Request Rate Limiter. It is the most important one to prevent abuse, and it’s by far the one that we use the most frequently.

Introduce the next three types of rate limiters over time to prevent different classes of problems. They can be built slowly as you scale.

Follow good launch practices as you're adding new rate limiters to your infrastructure. Handle any errors safely, put them behind feature flags to turn them off easily at any time, and rely on very good observability and metrics to see how often they’re triggering.

To help you get started, we’ve created a GitHub gist to share implementation details based on the code we actually use in production at Stripe.

Networks are unreliable. We’ve all experienced trouble connecting to Wi-Fi, or had a phone call drop on us abruptly.

The networks connecting our servers are, on average, more reliable than consumer-level last miles like cellular or home ISPs, but given enough information moving across the wire, they’re still going to fail in exotic ways. Outages, routing problems, and other intermittent failures may be statistically unusual on the whole, but still bound to be happening all the time at some ambient background rate.

To overcome this sort of inherently unreliable environment, it’s important to design APIs and clients that will be robust in the event of failure, and will predictably bring a complex integration to a consistent state despite those failures. Let’s take a look at a few ways to do that.

Planning for failure

Consider a call between any two nodes. There are a variety of failures that can occur:

The initial connection could fail as the client tries to connect to a server.

The call could fail midway while the server is fulfilling the operation, leaving the work in limbo.

The call could succeed, but the connection break before the server can tell its client about it.

Any one of these leaves the client that made the request in an uncertain situation. In some cases, the failure is definitive enough that the client knows with good certainty that it’s safe to simply retry it. For example, a total failure to even establish a connection to the server. In many others though, the success of the operation is ambiguous from the perspective of the client, and it doesn’t know whether retrying the operation is safe. A connection terminating midway through message exchange is an example of this case.

This problem is a classic staple of distributed systems, and the definition is broad when talking about a “distributed system” in this sense: as few as two computers connecting via a network that are passing each other messages. Even the Stripe API and just one other server that’s making requests to it comprise a distributed system.

Making liberal use of idempotency

The easiest way to address inconsistencies in distributed state caused by failures is to implement server endpoints so that they’re idempotent, which means that they can be called any number of times while guaranteeing that side effects only occur once.

When a client sees any kind of error, it can ensure the convergence of its own state with the server’s by retrying, and can continue to retry until it verifiably succeeds. This fully addresses the problem of an ambiguous failure because the client knows that it can safely handle any failure using one simple technique.

As an example, consider the API call for a hypothetical DNS provider that enables us to add subdomains via an HTTP request:
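Server-side, such an endpoint could behave like this simplified handler (a sketch; the store and field names are invented for illustration). Note that a duplicate call is acknowledged rather than treated as an error:

```ruby
require 'json'

# Hypothetical DNS provider: an idempotent "create record" handler.
# Calling it twice with the same input is harmless—the duplicate is
# simply acknowledged with a successful status code.
def put_record(records, domain, attrs)
  if records.key?(domain)
    [200, { 'status' => 'already exists' }.to_json] # duplicate: no-op
  else
    records[domain] = attrs
    [201, { 'status' => 'created' }.to_json]
  end
end
```

For example, `put_record(store, 'sub.example.com', { 'type' => 'CNAME', 'value' => 'example.com' })` can be invoked any number of times with the same result.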

All the information needed to create a record is included in the call, and it’s perfectly safe for a client to invoke it any number of times. If the server receives a call that it realizes is a duplicate because the domain already exists, it simply ignores the request and responds with a successful status code.

Guaranteeing “exactly once” semantics

While the inherently idempotent HTTP semantics around PUT and DELETE are a good fit for many API calls, what if we have an operation that needs to be invoked exactly once and no more? An example might be if we were designing an API endpoint to charge a customer money; accidentally calling it twice would lead to the customer being double-charged, which is very bad.

This is where idempotency keys come into play. When performing a request, a client generates a unique ID to identify just that operation and sends it up to the server along with the normal payload. The server receives the ID and correlates it with the state of the request on its end. If the client notices a failure, it retries the request with the same ID, and from there it’s up to the server to figure out what to do with it.

If we consider our sample network failure cases from above:

On retrying a connection failure, the server will see the ID for the first time on the second request, and process it normally.

On a failure midway through an operation, the server picks up the work and carries it through. The exact behavior is heavily dependent on implementation, but if the previous operation was successfully rolled back by way of an ACID database, it’ll be safe to retry it wholesale. Otherwise, state is recovered and the call is continued.

On a response failure (i.e. the operation executed successfully, but the client couldn’t get the result), the server simply replies with a cached result of the successful operation.

The Stripe API implements idempotency keys on mutating endpoints (i.e. anything under POST in our case) by allowing clients to pass a unique value in with the special Idempotency-Key header, which allows a client to guarantee the safety of distributed operations:
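A client might construct such a request along these lines (a sketch using Ruby’s standard library; the endpoint parameters and placeholder API key are illustrative, and in practice the key is a randomly generated value such as a V4 UUID):

```ruby
require 'net/http'
require 'uri'
require 'securerandom'

# Build a charge-creation request that carries an idempotency key.
# Retrying with the SAME key after a network error guarantees the
# charge is created at most once.
uri = URI('https://api.stripe.com/v1/charges')
request = Net::HTTP::Post.new(uri)
request.basic_auth('sk_test_example', '') # placeholder API key
request['Idempotency-Key'] = SecureRandom.uuid
request.set_form_data('amount' => '2000', 'currency' => 'usd')

# Sending (and safely re-sending on failure) would look like:
#   Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
#     http.request(request)
#   end
```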

If the above Stripe request fails due to a network connection error, you can safely retry it with the same idempotency key, and the customer is charged only once.

Being a good distributed citizen

Safely handling failure is hugely important, but beyond that, it’s also recommended that it be handled in a considerate way. When a client sees that a network operation has failed, there’s a good chance that it’s due to an intermittent failure that will be gone by the next retry. However, there’s also a chance that it’s a more serious problem that’s going to be more tenacious; for example, if the server is in the middle of an incident that’s causing hard downtime. Not only will retries of the operation not go through, but they may contribute to further degradation.

It’s usually recommended that clients follow something akin to an exponential backoff algorithm as they see errors. The client blocks for a brief initial wait time on the first failure, but as the operation continues to fail, it waits proportionally to 2^n, where n is the number of failures that have occurred. By backing off exponentially, we can ensure that clients aren’t hammering on a downed server and contributing to the problem.

Exponential backoff has a long and interesting history in computer networking.

Furthermore, it’s also a good idea to mix in an element of randomness. If a problem with a server causes a large number of clients to fail at close to the same time, then even with backoff, their retry schedules could be aligned closely enough that the retries will hammer the troubled server. This is known as the thundering herd problem.

We can address thundering herd by adding some amount of random “jitter” to each client’s wait time. This will space out requests across all clients, and give the server some breathing room to recover.
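Putting the two together, a client’s wait time between retries might be computed like this (the “full jitter” variant; the base and cap values are illustrative):

```ruby
# Exponential backoff with full jitter: the nth retry waits a random
# amount of time up to base * 2**n, capped. The randomness spreads
# clients out so their retries don't arrive in lockstep.
def backoff_with_jitter(attempt, base: 0.5, cap: 60.0, rng: Random.new)
  ceiling = [cap, base * (2**attempt)].min
  rng.rand * ceiling # uniform in [0, ceiling)
end

# A client would sleep(backoff_with_jitter(n)) between the nth
# failure and the next attempt.
```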

Thundering herd problem when a server faces simultaneous retries from all clients.

Codifying the design of robust APIs

Considering the possibility of failure in a distributed system and how to handle it is of paramount importance in building APIs that are both robust and predictable. Retry logic on clients and idempotency on servers are techniques that are useful in achieving this goal and relatively simple to implement in any technology stack.

Here are a few core principles to follow while designing your clients and APIs:

Make sure that failures are handled consistently. Have clients retry operations against remote services. Not doing so could leave data in an inconsistent state that will lead to problems down the road.

Make sure that failures are handled safely. Use idempotency and idempotency keys to allow clients to pass a unique value and retry requests as needed.

Make sure that failures are handled responsibly. Use techniques like exponential backoff and random jitter. Be considerate of servers that may be stuck in a degraded state.

Engineering teams face a common challenge when building software: they eventually need to redesign the data models they use to support clean abstractions and more complex features. In production environments, this might mean migrating millions of active objects and refactoring thousands of lines of code.

Stripe users expect availability and consistency from our API. This means that when we do migrations, we need to be extra careful: objects stored in our systems need to have accurate values, and Stripe’s services need to remain available at all times.

In this post, we’ll explain how we safely did one large migration of our hundreds of millions of Subscriptions objects.

Why are migrations hard?

Scale

Stripe has hundreds of millions of Subscriptions objects. Running a large migration that touches all of those objects is a lot of work for our production database.

Imagine that it takes one second to migrate each subscription object: in sequential fashion, it would take over three years to migrate one hundred million objects.

Uptime

Businesses are constantly transacting on Stripe. We perform all infrastructure upgrades online, rather than relying on planned maintenance windows. Because we couldn’t simply pause the Subscriptions service during migrations, we had to execute the transition with all of our services operating at 100%.

Accuracy

Our Subscriptions table is used in many different places in our codebase. If we tried to change thousands of lines of code across the Subscriptions service at once, we would almost certainly overlook some edge cases. We needed to be sure that every service could continue to rely on accurate data.

A pattern for online migrations

Moving millions of objects from one database table to another is difficult, but it’s something that many companies need to do.

There’s a common four-step dual writing pattern that people often use to do large online migrations like this. Here’s how it works:

Dual writing to the existing and new tables to keep them in sync.

Changing all read paths in our codebase to read from the new table.

Changing all write paths in our codebase to only write to the new table.

Removing old data that relies on the outdated data model.

Our example migration: Subscriptions

What are Subscriptions and why did we need to do a migration?

Stripe Subscriptions helps users like DigitalOcean and Squarespace build and manage recurring billing for their customers. Over the past few years, we’ve steadily added features to support their more complex billing models, such as multiple subscriptions, trials, coupons, and invoices.

In the early days, each Customer object had, at most, one subscription. Our customers were stored as individual records. Since the mapping of customers to subscriptions was straightforward, subscriptions were stored alongside customers.

class Customer
  Subscription subscription
end

Eventually, we realized that some users wanted to create customers with multiple subscriptions. We decided to transform the subscription field (for a single subscription) to a subscriptions field—allowing us to store an array of multiple active subscriptions.

class Customer
  array: Subscription subscriptions
end

As we added new features, this data model became problematic. Any changes to a customer’s subscriptions meant updating the entire Customer record, and subscription-related queries meant scanning through all customer objects. So we decided to store active subscriptions separately.

Our redesigned data model moves subscriptions into their own table.

As a reminder, our four migration phases were:

Dual writing to the existing and new tables to keep them in sync.

Changing all read paths in our codebase to read from the new table.

Changing all write paths in our codebase to only write to the new table.

Removing old data that relies on the outdated data model.

Let’s walk through what these four phases looked like for us in practice.

Part 1: Dual writing

We begin the migration by creating a new database table. The first step is to start duplicating new information so that it’s written to both stores. We’ll then backfill missing data to the new store so the two stores hold identical information.

All new writes should update both stores.

In our case, we record all newly-created subscriptions into both the Customers table and the Subscriptions table. Before we begin dual writing to both tables, it’s worth considering the potential performance impact of this additional write on our production database. We can mitigate performance concerns by slowly ramping up the percentage of objects that get duplicated, while keeping a careful eye on operational metrics.
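A simplified sketch of a ramped dual write (the class name and percentage mechanism here are illustrative):

```ruby
# Dual-write sketch with a ramp: only a percentage of creates are
# duplicated to the new table at first, so we can watch operational
# metrics before committing to the extra write for all traffic.
class SubscriptionWriter
  def initialize(old_table, new_table, dual_write_percent: 0)
    @old = old_table
    @new = new_table
    @percent = dual_write_percent
  end

  def create(subscription)
    @old << subscription                         # existing write path
    @new << subscription if rand(100) < @percent # ramped duplicate write
  end
end
```

Ramping `dual_write_percent` from 0 to 100 lets us verify the database can absorb the duplicated writes before every object gets them.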

At this point, newly created objects exist in both tables, while older objects are only found in the old table. We’ll start copying over existing subscriptions in a lazy fashion: whenever objects are updated, they will automatically be copied over to the new table. This approach lets us begin to incrementally transfer our existing subscriptions.

Finally, we’ll backfill any remaining Customer subscriptions into the new Subscriptions table.

We need to backfill existing subscriptions to the new Subscriptions table.

The most expensive part of backfilling the new table on the live database is simply finding all the objects that need migration. Searching for them by querying the production database directly would have required many queries and taken a lot of time. Luckily, we were able to offload this to an offline process that had no impact on our production databases. We make snapshots of our databases available to our Hadoop cluster, which lets us use MapReduce to quickly process our data in an offline, distributed fashion.

We use Scalding to manage our MapReduce jobs. Scalding is a useful library written in Scala that makes it easy to write MapReduce jobs (you can write a simple one in 10 lines of code). In this case, we’ll use Scalding to help us identify all subscriptions. We’ll follow these steps:

Write a Scalding job that provides a list of all subscription IDs that need to be copied over.

Run a large, multi-threaded migration to duplicate these subscriptions with a fleet of processes efficiently operating on our data in parallel.

Once the migration is complete, run the Scalding job once again to make sure there are no existing subscriptions missing from the Subscriptions table.

Part 2: Changing all read paths

Now that the old and new data stores are in sync, the next step is to begin using the new data store to read all our data.

For now, all reads use the existing Customers table: we need to move to the Subscriptions table.

We need to be sure that it’s safe to read from the new Subscriptions table: our subscription data needs to be consistent. We’ll use GitHub’s Scientist to help us verify our read paths. Scientist is a Ruby library that allows you to run experiments and compare the results of two different code paths, alerting you if two expressions ever yield different results in production. With Scientist, we can generate alerts and metrics for differing results in real time. When an experimental code path generates an error, the rest of our application won’t be affected.

We’ll run the following experiment:

Use Scientist to read from both the Subscriptions table and the Customers table.

If the results don’t match, raise an error alerting our engineers to the inconsistency.
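In spirit, the experiment looks something like the following simplified stand-in for Scientist (the real library does considerably more, such as publishing results and timing data):

```ruby
# Simplified stand-in for GitHub's Scientist (`require "scientist"` in
# real code): run both read paths, return the control result, and
# report mismatches or candidate errors without affecting the caller.
def science(name, control:, candidate:, on_mismatch: ->(n) {})
  control_result = control.call
  begin
    candidate_result = candidate.call
    on_mismatch.call(name) if control_result != candidate_result
  rescue StandardError
    on_mismatch.call(name) # errors in the new path never break reads
  end
  control_result
end

# Roughly mirroring our experiment (these read/alert helpers are
# hypothetical names):
# science("subscriptions-read",
#         control:   -> { read_from_customers_table(id) },
#         candidate: -> { read_from_subscriptions_table(id) },
#         on_mismatch: ->(name) { alert_engineers(name) })
```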

GitHub’s Scientist lets us run experiments that read from both tables and compare the results.

After we verified that everything matched up, we started reading from the new table.

Our experiments are successful: all reads now use the new Subscriptions table.

Part 3: Changing all write paths

Next, we need to update write paths to use our new Subscriptions store. Our goal is to incrementally roll out these changes, so we’ll need to employ careful tactics.

Up until now, we’ve been writing data to the old store and then copying it to the new store:

We now want to reverse the order: write data to the new store and then archive it in the old store. By keeping these two stores consistent with each other, we can make incremental updates and observe each change carefully.

Refactoring all code paths where we mutate subscriptions is arguably the most challenging part of the migration. Stripe’s logic for handling subscriptions operations (e.g. updates, prorations, renewals) spans thousands of lines of code across multiple services.

The key to a successful refactor will be our incremental process: we’ll isolate each code path into the smallest unit possible so we can apply each change carefully. Our two tables need to stay consistent with each other at every step.

For each code path, we’ll need to use a holistic approach to ensure our changes are safe. We can’t simply substitute new records for old ones: every piece of logic needs to be considered carefully. If we miss any cases, we might end up with data inconsistency. Thankfully, we can run more Scientist experiments to alert us to any potential inconsistencies along the way.

Our new, simplified write path looks like this:

We can make sure that no code blocks continue using the outdated subscriptions array by raising an error if the property is called:
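A sketch of such a guard (the error class and message here are illustrative):

```ruby
# Guarding the outdated field: once no code should touch the old
# subscriptions array, any read of it raises so stragglers surface
# immediately instead of silently using stale data.
class Customer
  def subscriptions
    raise NotImplementedError,
          'Customer#subscriptions is deprecated; read from the Subscriptions table'
  end
end
```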

Part 4: Removing old data

Our final (and most satisfying) step is to remove code that writes to the old store and eventually delete it.

Once we’ve determined that no more code relies on the subscriptions field of the outdated data model, we no longer need to write to the old table:

With this change, our code no longer uses the old store, and the new table now becomes our source of truth.

We can now remove the subscriptions array on all of our Customer objects, and we’ll incrementally process deletions in a lazy fashion. We first automatically empty the array every time a subscription is loaded, and then run a final Scalding job and migration to find any remaining objects for deletion. We end up with the desired data model:
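Mirroring the earlier class sketches, the final model looks something like:

```ruby
class Customer
  # no embedded subscription fields; subscriptions now live in
  # their own Subscriptions table and are queried there directly
end
```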

Conclusion

Running migrations while keeping the Stripe API consistent is complicated. Here’s what helped us run this migration safely:

We laid out a four-phase migration strategy that would allow us to transition data stores while operating our services in production without any downtime.

We processed data offline with Hadoop, allowing us to manage high data volumes in a parallelized fashion with MapReduce, rather than relying on expensive queries on production databases.

All the changes we made were incremental. We never attempted to change more than a few hundred lines of code at one time.

All our changes were highly transparent and observable. Scientist experiments alerted us as soon as a single piece of data was inconsistent in production. At each step of the way, we gained confidence in our safe migration.

We’ve found this approach effective in the many online migrations we’ve executed at Stripe. We hope these practices prove useful for other teams performing migrations at scale.

When people talk about their data infrastructure, they tend to focus on the technologies: Hadoop, Scalding, Impala, and the like. However, we’ve found that just as important as the technologies themselves are the principles that guide their use. We’d like to share our experience with one such principle that we’ve found particularly useful: reproducibility.

We’ll talk about our motivation for focusing on reproducibility, how we’re using Jupyter Notebooks as our core tool, and the workflow we’ve developed around Jupyter to operationalize our approach.

Jupyter notebooks are a fantastic way to create reproducible data science research.

Motivation

Data tools are most often used to generate some kind of exploratory analysis report. At Stripe, an example is an investigation of the probability that a card gets declined, given the time since its last charge. The investigator writes a query, which is executed by a query engine like Redshift, and then runs some further code to interpret and visualize the results.

The most common way to share results from these sorts of studies is to compose an email and attach some graphs. But this means that viewers of the report don’t know how the query was constructed and analyzed. As a result, they are unable to review the work in depth, or to extend it themselves. It’s very easy to commit methodological errors when asking questions of data; an unintended bias here, or a missed corner case there, can lead to entirely incorrect conclusions.

In academia, the peer review system helps catch these errors. Many in the scientific community have championed the practice of open science, where data and code are released along with experimental results, such that reviewers can independently recreate the original results. Taking inspiration from this movement, we sought to make data reports within Stripe transparent and reproducible, so that anyone at the company can look at a report and understand how it was generated. Just like an always-green test suite forces developers to write better code, we wanted to see if requiring all analyses be reproducible would force us to produce better reports.

Implementation

Our implementation of reproducible analysis centers on Jupyter Notebook, a web-based frontend to the Jupyter interactive computing environment which provides an interface similar to Mathematica or Matlab.

Jupyter Notebook also comes with built-in functionality to convert a notebook into a publishable HTML document. You can see a sample of one of our published notebooks, studying the relationship between Benford’s Law and the amounts of each charge made on Stripe.

Now, let’s say that Alice wants to share a notebook with Bob. The state of the interactive environment can be persisted as a JSON file containing both the code input to the notebook and data output from it. To share the notebook, Alice would typically send this notebook file directly to Bob. Now, when Bob opens it, he’ll see the same outputs as Alice, but may not be able to do much with them. These outputs include computational results and plots’ image data, but not the values of any of the variables that Alice was working with. To inspect these variables and extend Alice’s work, he’ll have to recompute them from the code inputs. However, there may have been certain cells that only run correctly on Alice’s computer, or some cells might have been rearranged in a way that unintentionally broke the flow of computation. It’s easy to miss mistakes like these when you’re able to share a notebook with the results embedded, so we decided to try something different.

In our workflow, developers and data scientists work on a notebook locally and check this source file into Git. To publish their work, they use our common deployment framework, which executes the notebook code once it hits our servers. The results are translated into HTML, which are served statically. Importantly, we strip results from the notebook files in a pre-commit hook, meaning that only code is checked into our repositories. This ensures that the results are fully reproduced from scratch when the notebook is published. Thus, it’s a requirement that all notebooks be programmatically executable from start to finish, without needing any manual steps to run. If you were on a Stripe computer, you could run the notebook above with one click and obtain the same results. This is a huge deal!

To make this workflow possible, we had to write some additional tooling to enable the same code to run on developers’ laptops and production servers. The bulk of this work involved access to our query engines, which is perhaps the most common obstacle to collaboration on data analysis projects. Even very well-organized workflows often require a data file to be present at a particular path, or some out-of-band authentication step with the machines running the queries. The key to overcoming these challenges was to create a common entry point in code to access these query engines from developers’ laptops, as well as our servers. This way, a notebook that runs on one developer’s computer will always run correctly on everyone else’s.

Adding this tooling also greatly improved the experience of doing exploratory data analysis within the notebook. Prior to our reproducibility tooling, setting up data access was tedious, time-consuming, and error-prone. Automating and standardizing this process allowed data scientists and developers to focus on their analysis instead.

Conclusion

Reproducibility makes data science at Stripe feel like working on GitHub, where anyone can obtain and extend others’ work. Instead of islands of analysis, we share our research in a central repository of knowledge. This makes it dramatically easier for anyone on our team to work with our data science research, encouraging independent exploration.

We approach our analyses with the same rigor we apply to production code: our reports feel more like finished products, research is fleshed out and easy to understand, and there are clear programmatic steps from start to finish for every analysis.

We’ve switched over to reproducible reports, and we’re not looking back. Delivering them requires more up-front work, but we’ve found it to be a good long-term investment. If you give it a try, we think you’ll feel the same way!

With so many new technologies coming out every year (like Kubernetes or Habitat), it’s easy to become so entangled in our excitement about the future that we forget to pay homage to the tools that have been quietly supporting our production environments. One such tool we've been using at Stripe for several years now is Consul. Consul helps discover services (that is, it helps us navigate the thousands of servers we run with various services running on them and tells us which ones are up and available for use). This effective and practical architectural choice wasn't flashy or entirely novel, but has served us dutifully in our continued mission to provide reliable service to our users around the world.

We’re going to talk about:

What service discovery and Consul are,

How we managed the risks of deploying a critical piece of software,

The challenges we ran into along the way and what we did about them.

You don’t just set up new software and expect it to magically work and solve all your problems—using new software is a process. This is an example of what the process of using new software in production has looked like for us.

What’s service discovery?

Great question! Suppose you’re a load balancer for Stripe, and a request to create a charge has come in. You want to send it to an API server. Any API server!

We run thousands of servers with various services running on them. Which ones are the API servers? What port is the API running on? One amazing thing about using AWS is that our instances can go down at any time, so we need to be prepared to:

Lose API servers at any time,

Add extra servers to the rotation if we need additional capacity.

This problem of tracking changes around which boxes are available is called service discovery. We use a tool called Consul from HashiCorp to do service discovery.

The fact that our instances can go down at any time is actually very helpful—our infrastructure gets regular practice losing instances and dealing with it automatically, so when it happens it’s just business as usual. It’s easier to handle failure gracefully when failure happens often.

Introduction to Consul

Consul is a service discovery tool: it lets services register themselves and discover other services. It stores which services are up in a database, has client software that puts information in that database, and other client software that reads from that database. There are a lot of pieces to wrap your head around!

The most important component of Consul is the database. This database contains entries like “api-service is running at IP 10.99.99.99 at port 12345. It is up.”

Then if you want to talk to the API service, you can ask Consul “Which api-services are up?”. It will give you back a list of IP addresses and ports you can talk to.
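In practice, a client can ask the local Consul agent’s health endpoint (an HTTP GET against `/v1/health/service/api-service?passing`) and extract the healthy addresses from the JSON it returns. Here’s a sketch of the parsing step (the helper name is ours):

```ruby
require 'json'

# Given the JSON body from Consul's health endpoint, return the list
# of "ip:port" strings for instances that are up.
def healthy_addresses(consul_response_json)
  JSON.parse(consul_response_json).map do |entry|
    svc = entry.fetch('Service')
    "#{svc.fetch('Address')}:#{svc.fetch('Port')}"
  end
end
```

Fed a response describing the example entry above, this yields `["10.99.99.99:12345"]`.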

Consul is a distributed system itself (remember: we can lose any box at any time, which means we could lose the Consul server itself!) so it uses a consensus algorithm called Raft to keep its database in sync.

The beginning of Consul at Stripe

We started out by only writing to Consul—having machines report whether or not they were up to the Consul server, but not using that information to do service discovery. We wrote some Puppet configuration to set it up, which wasn’t that hard!

This way we could uncover potential issues with running the Consul client and get experience operating it on thousands of machines. At first, no services were being discovered with Consul.

What could go wrong?

Addressing memory leaks

If you add a new piece of software to every box in your infrastructure, that software could definitely go wrong! Early on we ran into memory leaks in Consul’s stats library: we noticed that one box was taking over 100MB of RAM and climbing. This was a bug in Consul, which got fixed.

100MB of memory is not a big leak, but the leak was growing quickly. Memory leaks in general are worrisome because they're one of the easiest ways for one process on a box to Totally Ruin Things for other processes on the box.

Good thing we decided not to use Consul to discover services to start! Letting it sit on a bunch of production machines and monitoring memory usage let us find out about a potentially serious problem quickly with no negative impact.

Starting to discover services with Consul

Once we were more confident that running Consul in our infrastructure would work, we started adding a few clients to talk to Consul! We made this less risky in 2 ways:

Only use Consul in a few places to start,

Keep a fallback system in place so that we could function during outages.

Here are some of the issues we ran into. We’re not listing these to complain about Consul, but rather to emphasize that when using new technology, it’s important to roll it out slowly and be cautious.

A ton of Raft failovers. Remember that Consul uses a consensus protocol? That protocol replicates all the data on one server in the Consul cluster to the other servers in the cluster. The primary server was having a ton of problems with disk I/O—the disks weren’t fast enough to do the reads that Consul wanted to do, and the whole primary server would hang. Then Raft would say “oh, the primary is down!” and elect a new primary, and the cycle would repeat. While Consul was busy electing a new primary, it would not let anybody write anything to or read anything from its database (because consistent reads are the default).

Version 0.3 broke SSL completely. We were using Consul’s SSL feature (technically, TLS) for our Consul nodes to communicate securely. One Consul release just broke it. We patched it. This is an example of a kind of issue that isn’t that difficult to detect or scary (we tested in QA, realized SSL was broken, and just didn’t roll out the release), but is pretty common when using early-stage software.

Goroutine leaks. We started using Consul’s leader election, and there was a goroutine leak that caused Consul to quickly eat all the memory on the box. The Consul team was really helpful in debugging this, and we fixed a bunch of memory leaks (different memory leaks from before).

Once all of these were fixed, we were in a much better place. Getting from “our first Consul client” to “we’ve fixed all these issues in production” took a bit less than a year of background work cycles.

Scaling Consul to discover which services are up

So, we’d learned about a bunch of bugs in Consul, and had them fixed, and everything was operating much better. Remember that step we talked about at the beginning, though? Where you ask Consul “hey, what boxes are up for api-service?” We were having intermittent problems where the Consul server would respond slowly or not at all.

This was mostly during Raft failovers or instability; because Consul uses a strongly consistent store, its availability will always be weaker than that of a system which doesn’t. It was especially rough in the early days.

We still had fallbacks, but Consul outages became pretty painful for us. We would fall back to a hardcoded set of DNS names (like “apibox1”) when Consul was down. This worked okay when we first rolled out Consul, but as we scaled and used Consul more widely, it became less and less viable.

Consul Template to the rescue

Asking Consul which services were up (via its HTTP API) was unreliable. But we were happy with it otherwise!

We wanted to get information out of Consul about which services were up without using its API. How?

Well, Consul would take a name (like monkey-srv) and translate it into one or several IP addresses (“this is where monkey-srv lives”). Know what else takes in names and outputs IP addresses? A DNS server! So we replaced direct Consul queries with a DNS server. Here’s how: Consul Template is a Go program that generates static configuration files from your Consul database.

We started using Consul Template to generate DNS records for Consul services. So if monkey-srv was running on IP 10.99.99.99, we’d generate a DNS record:

{{range services}}{{$service := .}}{{range service $service.Name}}
{{$service.Name}}.service.consul. IN A {{.Address}}
{{end}}{{end}}

If you're thinking "wait, DNS records only have an IP address, you also need to know which port the server is running on," you are right! DNS A records (the kind you normally see) only have an IP address in them. However, DNS SRV records can have ports in them, and we also use Consul Template to generate SRV records.
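For illustration, here's a plain-Ruby sketch (not Consul Template) of the pair of records we'd generate for a single instance; the service data is made up, and the SRV fields are priority, weight, port, and target:

```ruby
# Made-up service data: one instance of monkey-srv.
services = {
  "monkey-srv" => [{ ip: "10.99.99.99", port: 12345 }],
}

records = services.flat_map do |name, instances|
  instances.flat_map do |inst|
    [
      # A record: maps the name to an IP only
      "#{name}.service.consul. IN A #{inst[:ip]}",
      # SRV record: priority, weight, port, then a target name
      "#{name}.service.consul. IN SRV 0 0 #{inst[:port]} #{name}.service.consul.",
    ]
  end
end

puts records
```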

We run Consul Template in a cron job every 60 seconds. Consul Template also has a “watch” mode (the default) which continuously updates configuration files when its database is updated. When we tried the watch mode, it DOSed our Consul server, so we stopped using it.

So if our Consul server goes down, our internal DNS server still has all the records! They might be a little old, but that’s fine. What’s awesome about our DNS server is that it’s not a fancy distributed system, which means it’s a much simpler piece of software, and much less likely to spontaneously break. This means that I can just look up monkey-srv.service.consul, get an IP, and use it to talk to my service!

Because DNS is a shared-nothing, eventually consistent system, we can replicate and cache it extensively (we have 5 canonical DNS servers, and every server has a local DNS cache and knows how to talk to any of the 5 canonical servers), so it’s fundamentally more resilient than Consul.

Adding a load balancer for faster healthchecks

We just said that we update DNS records from Consul every 60 seconds. So, what happens if an API server explodes? Do we keep sending requests to that IP for up to 60 more seconds until the DNS record gets updated? We do not! There’s one more piece of the story: HAProxy.

HAProxy is a load balancer. If you give it a healthcheck for the service it’s sending requests to, it can make sure that your backends are up! All of our API requests actually go through HAProxy. Here’s how it works:

Every 60 seconds, Consul Template writes out a fresh HAProxy configuration file with the current set of backends, and we restart HAProxy to pick it up.

HAProxy runs healthchecks against every backend every 2 seconds.

This means that HAProxy always has an approximately correct set of backends, and if a machine goes down, HAProxy realizes quickly that something has gone wrong.

But does that mean we drop connections every time we restart HAProxy? No. To avoid dropping connections between restarts, we use HAProxy’s graceful restarts feature. It’s still possible to drop some traffic with this restart policy, as described here, but we don’t process enough traffic that it’s an issue.

We have a standard healthcheck endpoint for our services—almost every service has a /healthcheck endpoint that returns 200 if it’s up and errors if not. Having a standard is important because it means we can easily configure HAProxy to check service health.
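For illustration, a minimal HAProxy backend stanza along these lines (the backend name, addresses, and thresholds here are made up) shows how a standard endpoint keeps the healthcheck configuration uniform across services:

```
backend api-service
    option httpchk GET /healthcheck
    http-check expect status 200
    default-server inter 2s fall 3 rise 2
    server api1 10.99.99.99:12345 check
    server api2 10.99.99.98:12345 check
```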

When Consul is down, HAProxy will just have a stale configuration file, which will keep working.

Trading consistency for availability

If you’ve been paying close attention, you’ll notice that the system we started with (a strongly consistent database which was guaranteed to have the latest state) was very different from the system we finished with (a DNS server which could be up to a minute behind). Giving up our requirement for consistency let us have a much more available system—Consul outages have basically no effect on our ability to discover services.

An important lesson from this is that consistency doesn’t come for free! You have to be willing to pay the price in availability, and so if you’re going to be using a strongly consistent system it’s important to make sure that’s something you actually need.

What happens when you make a request

We covered a lot in this post, so let’s go through the request flow now that we’ve learned how it all works.

When you make a request for https://stripe.com/, what happens? How does it end up at the right server? Here’s a simplified explanation:

It comes into one of our public load balancers, running HAProxy,

Consul Template has populated a list of servers serving stripe.com in the /etc/haproxy.conf configuration file,

HAProxy reloads this configuration file every 60 seconds,

HAProxy sends your request on to a stripe.com server! It makes sure that the server is up.

It’s actually a little more complicated than that (there’s an extra layer, and Stripe API requests are even more complicated, because we have systems to deal with PCI compliance), but all the core ideas are there.

This means that when we bring up or take down servers, Consul takes care of removing them from the HAProxy rotation automatically. There’s no manual work to do.

More than a year of peace

There are a lot of areas we’re looking forward to improving in our approach to service discovery. It’s a space with loads of active development and we see some elegant opportunities for integrating our scheduling and request routing infrastructure in the near future.

However, one of the important lessons we’ve taken away is that simple approaches are often the right ones. This system has been working for us reliably for more than a year without any incidents. Stripe doesn’t process anywhere near as many requests as Twitter or Facebook, but we do care a great deal about extreme reliability. Sometimes the best wins come from deploying a stable, excellent solution instead of a novel one.

Stripe Radar is a collection of tools to help businesses detect and prevent fraud. At Radar’s core is a machine learning engine that scans every card payment across Stripe’s 100,000+ businesses, aggregates information from those payments into behavioral signals that are predictive of fraud, and blocks payments that have a high probability of being fraudulent.

Radar’s power comes from all the data we obtain from the Stripe “network.” Instead of requiring users to label charges manually, Radar obtains the “ground truth” of fraud directly from our banking partners. Just as importantly, the signals we use in our models include aggregates over the entire stream of payments processed by Stripe: when a card is used for the first time on a Stripe business, there’s an 80% chance we’ve seen that card elsewhere on the Stripe network, and those previous interactions provide valuable information about potential fraud.

If you’re curious to learn more, we’ve put together a detailed outline that describes how we use machine learning at Stripe to detect and prevent fraud.

When a company writes about their observability stack, they often focus on sweet visualizations, advanced anomaly detection or innovative data stores. Those are well and good, but today we’d like to talk about the tip of the spear when it comes to observing your systems: metrics pipelines! Metrics pipelines are how we get metrics from where they happen—our hosts and services—to storage quickly and efficiently so they can be queried, all without interrupting the host service.

First, let’s establish some technical context. About a year ago, Stripe started the process of migrating to Datadog. Datadog is a hosted product that offers metric storage, visualization and alerting. With them we can get some marvelous dashboards to monitor our Observability systems:

Observability Overview Dashboard

Previously, we’d been using some nice open-source software but it was sadly unowned and unmaintained internally. Facing the high cost—in money and people—of maintaining it ourselves, we decided that outsourcing to Datadog was a great idea. Nearly a year later, we’re quite happy with the improved visibility and reliability we’ve gained through significant effort in this area. One of the most interesting aspects of this work was how to even metric!

Using StatsD for metrics

There are many ways to instrument your systems. Our preferred method is the StatsD style: a simple text-based protocol with minimal performance impact. Code is instrumented to emit UDP to a central server at runtime whenever measured stuff happens.
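A StatsD emit is just a short formatted string in a UDP packet. Here's a minimal Ruby sketch; the metric names and the server address are made up for illustration:

```ruby
require 'socket'

# StatsD wire format: "<name>:<value>|<type>", where type is e.g.
# "c" for counters or "ms" for timers.
def statsd_packet(name, value, type)
  "#{name}:#{value}|#{type}"
end

payloads = [
  statsd_packet("api.requests", 1, "c"),    # count one request
  statsd_packet("api.latency", 250, "ms"),  # record a 250ms timer sample
]

# Fire and forget: UDP carries no acknowledgement, so the client never
# knows whether anything actually received the packet.
sock = UDPSocket.new
payloads.each { |p| sock.send(p, 0, "127.0.0.1", 8125) }

puts payloads
```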

Like all of life, this choice has tradeoffs. For the sake of brevity, we’ll quickly mention the two downsides of StatsD that are most relevant to us: its use of inherently unreliable UDP, and its role as a Single Point of Failure for timer aggregation.

As you may know, UDP is a “fire and forget” protocol that does not require any acknowledgement by the receiver. This makes UDP pretty fast for the client, but it also means the client has no way to ensure that the metric was received by anyone! Combine that with networks and hosts that drop traffic when under pressure, and you’ve got a problem.

Another problem is the Single Point of Failure. The poor StatsD server has to process a lot of UDP packets if you’ve got a non-trivial number of sources. Add to that the nightmare of the machine going down and the need to shard or use other tricks to scale out, and you’ve got your work cut out for you.

DogStatsD and the lack of “global”

Aware that a central StatsD server can be a problem, Datadog takes a different approach: each host runs an instance of DogStatsD as part of the Datadog agent. This neatly sidesteps most performance problems, but it created a large feature regression for Stripe: no more global percentiles. Datadog only supports per-host aggregations for histograms, timers, and sets.

Remember that, with StatsD, you emit a metric to the downstream server each time the event occurs. If you’re measuring API requests and emitting that metric on each host, you are now sending your timer metric to the local Datadog agent, which aggregates the samples and flushes them to Datadog’s servers in batches. For counters this is great, because you can just add them together! But for percentiles we’ve got problems. Imagine you’ve got hundreds of servers, each handling an unequal number of API requests with unequal workloads. The per-host percentiles are not representative of how our whole API is behaving. Even worse, once we’ve generated the percentiles for our histograms, there is no meaningful way, mathematically, to combine them. (More precisely, the percentiles of arbitrary subsamples of a distribution are not sufficient to compute the percentiles of the full distribution.)

Stripe needs to know the overall percentiles because each host’s histogram only has a small subset of random requests. We needed something better!

Enter Veneur

To provide these features to Stripe we created Veneur, a DogStatsD server with global aggregation capability. We’re happily running it in production and you can too! It’s open-source and we’d love for you to take a look.

Veneur runs in place of Datadog’s bundled DogStatsD server, listening on the same port. It flushes metrics to Datadog just like you’d expect. That’s where the similarities end, however, and the magic begins.

Instead of aggregating the histogram and emitting percentiles at flush time, Veneur forwards the histogram on to a global Veneur instance which merges all the histograms and flushes them to Datadog at the next window. It adds a bit of delay—one flush period—but the result is a best-of-both mix of local and global metrics!

We monitor the performance of many of our API calls, such as this chart of various percentiles for creating a charge. Red bars are deploys!

Approximate, mergeable histograms

As mentioned earlier, the essential problem with percentiles is that, once reported, they can’t be combined together. If host A received 20 requests and host B received 15, the two numbers can be added to determine that, in total, we had 35 requests. But if host A has a 99th percentile response time of 8ms and host B has a 99th percentile response time of 10ms, what’s the 99th percentile across both hosts?

The answer is, “we don’t know”. Taking the mean of those two percentiles results in a number that is statistically meaningless. If we have more than two hosts, we can’t simply take the percentile of percentiles either. We can’t even use the percentiles of each host to infer a range for the global percentile—the global 99th percentile could, in rare cases, be larger than any of the individual hosts’ 99th percentiles. We need to take the original set of response times reported from host A, and the original set from host B, and combine those together. Then, from the combined set, we can report the real 99th percentile across both hosts. That’s what forwarding is for.
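A quick Ruby illustration of the point, using a nearest-rank percentile and made-up latency samples: the mean of per-host p99s is meaningless, while the combined raw samples give the real answer.

```ruby
# Nearest-rank percentile: the smallest value at or above p percent of samples.
def percentile(samples, p)
  sorted = samples.sort
  sorted[((p / 100.0) * sorted.length).ceil - 1]
end

# Made-up response times (ms) from two hosts with different workloads.
host_a = [1, 2, 2, 3, 3, 4, 5, 5, 6, 8]
host_b = [10, 12, 15, 20, 40, 80, 90, 95, 120, 200]

p99_a = percentile(host_a, 99)        # => 8
p99_b = percentile(host_b, 99)        # => 200
mean_of_p99s = (p99_a + p99_b) / 2.0  # => 104.0 -- statistically meaningless

# The real global p99 needs the combined raw samples:
global_p99 = percentile(host_a + host_b, 99)  # => 200
```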

Of course, there are a few caveats. If each histogram stores all the samples it received, the final histogram on the global instance could potentially be huge. To sidestep this issue, Veneur uses an approximating histogram implementation called a t-digest, which uses constant space regardless of the number of samples. (Specifically, we wrote our own Go port of it.) As the name would suggest, approximating histograms return approximate percentiles with some error, but this tradeoff ensures that Veneur’s memory consumption stays under control under any load.

Degradation

The global Veneur instance is also a single point of failure for the metrics that pass through it. If it went down we would lose percentiles (and sets, since those are forwarded too). But we wouldn’t lose everything. Besides the percentiles, StatsD histograms report a counter of how many samples they’ve received, and the minimum/maximum samples. These metrics can be combined without forwarding (if we know the maximum response time on each host, the maximum across all hosts is just the maximum of the individual values, and so on), so they get reported immediately without any forwarding. Clients can opt out of forwarding altogether, if they really do want their percentiles to be constrained to each host.
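Unlike percentiles, counts and min/max merge trivially without the raw samples, which is why they survive a forwarding outage. A small Ruby sketch (the per-host numbers are made up):

```ruby
# Per-host summaries that merge without needing the raw samples.
Summary = Struct.new(:count, :min, :max) do
  def merge(other)
    Summary.new(count + other.count,
                [min, other.min].min,   # global min is the min of mins
                [max, other.max].max)   # global max is the max of maxes
  end
end

host_a = Summary.new(20, 1, 8)    # 20 samples, fastest 1ms, slowest 8ms
host_b = Summary.new(15, 10, 200)

global = host_a.merge(host_b)
# global.count => 35, global.min => 1, global.max => 200
```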

Veneur’s Overview Dashboard, well instrumented and healthy!

Other cool features and errors

Veneur—named for the Grand Huntsman of France, master of dogs!—also has a few other tricks:

Drop-in replacement for Datadog’s included DogStatsD. It even processes events and service checks!

Over the course of Veneur’s development we also iterated a lot. Our initial implementation was purely a global DogStatsD implementation without the forwarding or merging. It was really fast, but we quickly decided that processing more packets faster wasn’t really going to get us very far.

Next we took some twists and turns through “smart clients” that tried to route metrics to the appropriate places. This was initially promising, but we found that supporting this for each of our language runtimes and use cases was prohibitively expensive and undermined (Dog)StatsD’s simplicity. Some of our instrumentation is as simple as an nc command and that simplicity is very helpful to quickly instrument things.

While overall our work was transparent, we did cause some trouble when we initially turned the global features back on. Some teams had come to rely on per-host aggregation for very specific metrics. When we had to fail back to host-local aggregation for some refactoring, we caused problems for teams who had just adapted to using global features. Argh! Each of these wound up being a positive learning experience, and we found Stripe’s engineers to be very accommodating. Thanks!

Thanks and future work

The Observability team would like to thank Datadog for their support and advice in the creation of Veneur. We’d also like to thank our friends and teammates at Stripe for their patience as we iterated to where we are today, specifically for the occasional broken charts, metrics outages, and other hilarious-in-hindsight problems we caused along the way.

We’ve been running Veneur in production for months and have been enjoying the fruits of our labor. We’re now operating at a stable, more mature pace, making efficiency improvements informed by monitoring Veneur’s production behavior. We hope to leverage Veneur in the future for continued improvements to the features and reliability of our metrics pipeline. We’ve discussed additional protocol features, unified formats, per-team accounting and even incorporating other sensor data like tracing spans. Veneur’s speed, instrumentation and flexibility give us lots of room to grow and improve. Someone’s gotta feed those wicked cool visualizations and anomaly detectors!

At Stripe, we make extensive use of automated testing to help ensure
the stability and reliability of our services. We have expansive
test coverage for our API and other core services, we run tests on a
continuous integration server over every git branch, and we never deploy without green
tests.

The size and complexity of our codebase has grown over the past few years—and
so has the size of the test suite. As of August 2015, we have over 1400 test
files that define nearly 15,000 test cases and make over 130,000 assertions.
According to our CI server, the tests would take over three hours if run
sequentially.

With a large (and growing) group of engineers waiting for those tests
with every change they make, the speed of running tests is
critical. We’ve used a number of hosted CI solutions in the past,
but as test runtimes crept past 10 minutes, we brought
testing in-house to give us more control and room for experimentation.

Recently, we’ve implemented our own distributed test runner that
brought the runtime of our tests to just under three minutes. While
some of these tactics are specific to our codebase and systems, we
hope sharing what we did to improve our test runtimes will help
other engineering organizations.

Forking executor

We write tests
using minitest,
but we've implemented our
own plugin
to execute tests in parallel across multiple CPUs on multiple
different servers.

In order to get maximum parallel performance out of our build
servers, we run tests in separate processes, allowing each process to make
maximum use of the machine's CPU and I/O capability. (We run builds on
Amazon's c4.8xlarge instances, which give us 36 cores each.)

Initially, we experimented with using Ruby's threads instead
of multiple processes, but discovered that using a large number of
threads was significantly slower than using multiple processes. This
slowdown was present even if the Ruby threads were doing nothing but
monitoring subprocess children. Our current runner doesn’t use Ruby threads at all.

When tests start up, we start by loading all of our application code into
a single Ruby process so we don’t have to parse and load all
our Ruby code and gem dependencies multiple times.
This process then calls fork a
number of times to produce N different processes that’ll each have
all of the code pre-loaded and ready to go.

Each of those workers then starts executing tests. As they execute
tests, our custom executor forks further: Each
process forks and executes a single test file’s worth
of tests inside the child process. The child process writes the
results to the parent over a pipe, and then exits.

This second round of forking provides a layer of isolation
between tests: If a test makes changes to global state, running the test
inside a throwaway process will clean everything up once that process
exits. Isolating state at a per-file level also means that running
individual tests on developer machines will behave similarly to the
way they behave in CI, which is an important debugging affordance.

Docker

The custom forking executor spawns a lot of processes, and creates a
number of scratch files on disk. We run all builds at Stripe inside
of Docker, which means we don't need to worry about cleaning up all
of these processes or this on-disk state. At the end of a build,
all of the state—be that in-memory processes or on
disk—will be cleaned up by a docker stop, every
time.

Managing trees of UNIX processes is notoriously difficult to do
reliably, and it would be easy for a system that forks
this often to leak zombie processes or stray workers (especially
during development of the test framework itself). Using a
containerization solution like Docker eliminates that
nuisance, and eliminates the need to write a bunch of fiddly cleanup
code.

Managing build workers

In order to run each build across multiple machines at once, we
need a system to keep track of which servers are currently in-use
and which ones are free, and to assign incoming work to available
servers.

We run all our tests inside
of Jenkins. Rather than
writing custom code to manage worker pools, we (ab)use a Jenkins
plugin called
the matrix
build plugin.

The matrix build plugin is designed for projects where you want a
"build matrix" that tests a project in multiple environments. For
example, you might want to build every release of a library against
several versions of Ruby and make sure it works on each of them.

We misuse it slightly by configuring a custom build axis, called
BUILD_ROLE, and telling Jenkins to build with
BUILD_ROLE=leader, BUILD_ROLE=worker1,
BUILD_ROLE=worker2, and so on. This causes Jenkins to
run N simultaneous jobs for each build.

Combined with
some other
Jenkins configuration, we can ensure that each of these builds
runs on its own machine. Using this, we can take
advantage of Jenkins worker management, scheduling, and resource
allocation to accomplish our goal of maintaining a large pool of
identical workers and allocating a small number of them for each
build.

NSQ

Once we have a pool of workers running, we decide which tests to run
on each node.

One tactic for splitting work—used by several of our previous test
runners—is to split tests up statically. You decide ahead of time
which workers will run which tests, and then each worker just runs
those tests start-to-finish. A simple version of this strategy just
hashes each test and takes the result modulo the number of workers;
sophisticated versions record how long each test took, and try
to divide tests into groups of equal total runtime.
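The simple hashing version fits in a few lines of Ruby (CRC32 stands in for whatever hash you'd use, and the file names are made up):

```ruby
require 'zlib'

# Statically assign each test file to a worker by hashing its name.
def worker_for(test_file, num_workers)
  Zlib.crc32(test_file) % num_workers
end

tests = ["test/api_test.rb", "test/billing_test.rb", "test/webhooks_test.rb"]
assignments = tests.group_by { |t| worker_for(t, 4) }

# The mapping is deterministic, but nothing balances runtime: one unlucky
# worker can end up with all the slow files.
puts assignments.inspect
```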

The problem with static allocation is that it’s extremely prone
to stragglers. If you guess wrong about how long tests will take, or
if one server is briefly slow for whatever reason, it’s very easy
for one job to finish far after all the others, which means slower,
less efficient test runs.

We opted for an alternate, dynamic approach, which allocates work in
real-time using a work queue. We manage all coordination between
workers using an nsqd
instance. nsq is a super-simple queue that was
developed at Bit.ly; we already use it
in a few other places, so it was natural to adopt here.

Using the build number provided by Jenkins, we separate distinct
test runs. Each run makes use of three queues to coordinate work:

The node with BUILD_ROLE=leader writes each test
file that needs to be run into
the test.<BUILD_NUMBER>.jobs queue.

As workers execute tests, they write the results back to the
test.<BUILD_NUMBER>.results queue, where they
are collected by the leader node.

Once the leader has results for each test, it writes "kill"
signals to the test.<BUILD_NUMBER>.shutdown
queue, one for each worker machine. A thread on each worker
pulls off a single event and terminates all work on that node.
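The three-queue protocol can be simulated in-process with plain Ruby queues standing in for the nsqd topics (the real system, of course, coordinates separate processes across machines):

```ruby
# Thread-safe queues standing in for the per-build nsqd topics.
jobs     = Queue.new  # test.<BUILD_NUMBER>.jobs
results  = Queue.new  # test.<BUILD_NUMBER>.results
shutdown = Queue.new  # test.<BUILD_NUMBER>.shutdown

test_files = %w[a_test.rb b_test.rb c_test.rb d_test.rb]
num_workers = 2

# Leader: enqueue every test file that needs to be run.
test_files.each { |f| jobs << f }

# Workers: pull jobs until a shutdown signal appears.
workers = num_workers.times.map do
  Thread.new do
    loop do
      break unless shutdown.empty?
      begin
        file = jobs.pop(true)          # non-blocking pop
        results << "#{file}: passed"   # "run" the test file
      rescue ThreadError
        sleep 0.01                     # queue empty; wait for work or shutdown
      end
    end
  end
end

# Leader: collect one result per test file, then signal shutdown.
collected = test_files.map { results.pop }
num_workers.times { shutdown << :kill }
workers.each(&:join)

puts collected.sort
```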

Each worker machine forks off a pool of
processes after loading code. Each of those processes independently
reads from the jobs queue and executes tests. By
relying on nsq for coordination even within a single machine, we
have no need for a second, machine-local, communication mechanism,
which might risk limiting our concurrency across multiple CPUs.

Other than the leader node, all nodes are homogenous;
they blindly pull work off the queue and execute it, and otherwise
behave identically.

Dynamic allocation has proven to be hugely effective. All of our
worker processes across all of our different machines reliably
finish within a few seconds of each other, which means we're making
excellent use of our available resources.

Because workers only accept jobs as they go, work remains
well-balanced even if things go slightly awry: Even if one of the
servers starts up slightly slowly, or if there isn't enough capacity
to start all four servers right at once, or if the servers happen to
be on different-sized hardware, we still tend to see every worker
finishing essentially at once.

Visualization

Reasoning about and understanding performance of a distributed
system is always a challenging task. If tests aren't finishing
quickly, it's important that we can understand why so we can debug
and resolve the issue.

The right visualization can often capture performance characteristics
and problems in a very powerful (and visible) way, letting operators
spot the problems immediately, without having to pore through reams of
log files and timing data.

To this end, we've built a
waterfall
visualizer for our test runner. The test processes record timing
data as they run, and save the result in a central file on the build
leader. Some Javascript d3 code can then assemble that
data into a waterfall diagram showing when each individual job started
and stopped.

Waterfall diagrams of a slow test run and a fast test run.

Each group of blue bars shows tests run by a single process on a
single machine. The black lines that drop down near the right show
the finish times for each process. In the first visualization, you
can see that the first process (and to a lesser extent, the second)
took much longer to finish than all the others, meaning a single
test was holding up the entire build.

By default, our test runner uses test files as the unit of
parallelism, with each process running an entire file at a
time. Because of stragglers like the above case, we implemented an
option to split individual test files further, distributing the
individual test classes in the file instead of the entire file.

If we apply that option to the slow files and re-run, all the "finished"
lines collapse into one, indicating that every process on every worker
finished at essentially the same time—an optimal usage of resources.

Notice also that the waterfall graphs show processes generally going
from slower tests to faster ones. The test runner keeps a persistent
cache recording how long each test took on previous runs, and
enqueues tests starting with the slowest. This ensures that slow
tests start as soon as possible and is important for ensuring an
optimal work distribution.
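The slowest-first ordering is just a descending sort on the cached timings; a sketch with made-up durations:

```ruby
# Cached durations (seconds) from previous runs -- the numbers are made up.
previous_durations = {
  "test/api_test.rb"      => 120.0,
  "test/webhooks_test.rb" => 45.0,
  "test/billing_test.rb"  => 300.0,
}

# Enqueue the slowest tests first so they start as early as possible.
enqueue_order = previous_durations.keys.sort_by { |f| -previous_durations[f] }

puts enqueue_order  # billing first, then api, then webhooks
```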

The decision to invest effort in our own testing infrastructure
wasn't necessarily obvious: we could have continued to use a
third-party solution. However, spending a comparatively small amount
of effort allowed the rest of our engineering organization to move
significantly faster—and with more confidence. I'm also optimistic
this test runner will continue to scale with us and support our
growth for several years to come.

If you end up implementing something like this (or have
already), send me a note!
I'd love to hear what you've done, and what's worked or hasn't for
others with similar problems.