Zuora Engineering is moving fast towards a microservices architecture for our core billing, payments and finance platform. Multiple nimble, independent teams are working hard to implement individual services. In practice, this means that teams and services are now dependent on each other to provide an end-to-end customer solution, and since none of these services live in isolation, the varying velocities at which they are implemented can severely impact the time it takes to declare any single service production-ready.

The natural question then arises: How can individual services be functionally implemented as fast as possible, in order to test integrations between services, without sacrificing the quality of the final product? The answer we are currently employing at Zuora is in-memory servers.

Context

Zuora's Platform Team is building a Custom Fields Service internally known as Murano (named after the famous Venetian glass, not the car!). Murano allows users to augment existing Zuora Business Model objects with a set of customer-specific fields.

For example, customer A may want to extend the Subscription object with two extra fields and their types:

name: X
type: string
name: Y
type: string

while customer B may wish to define only one field and corresponding type:

name: Z
type: integer

on the same Subscription object.

When customers A and B query the Subscription object, each of them will receive an object with the defined additional custom fields.
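To make this concrete, the two customers' query results might look something like the payloads below. These are illustrative sketches only: the field values, the `__c` suffix, the response shape, and the `customerA`/`customerB` wrapper keys (used here just for side-by-side comparison) are assumptions, not Murano's actual wire format.

```json
{
  "customerA": {
    "id": "sub-A-001",
    "status": "Active",
    "X__c": "some string value",
    "Y__c": "another string value"
  },
  "customerB": {
    "id": "sub-B-001",
    "status": "Active",
    "Z__c": 42
  }
}
```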

Murano Interdependency Problem

As you can imagine, a service which provides custom field support is undoubtedly going to become a dependency for any other service that requires querying or writing to the Zuora business model objects. As such, several Zuora engineering teams became dependent on the Murano service, even before it was built! As a result, the Murano team had to figure out a way to unblock the other teams quickly and, at the same time, avoid jeopardizing quality or rushing architectural decisions around their service.

Unblocking Other Teams

The first step in avoiding becoming the limiting factor in engineering velocity was to define and agree upon the Murano API. This gave all teams a strict service contract and started us down the path of providing the API implementations.

The second step was to actually implement the APIs, but a great implementation requires time and multiple iterations. In particular, it was unclear which persistent storage technology would best satisfy the service's requirements. The landscape of persistent storage has grown considerably over the last decade; should it be a relational or a NoSQL data store? MySQL, Postgres, Cassandra or something else? We certainly didn't want to make a rushed decision before evaluating different solutions, but taking that time would further impact the velocity of other teams.

In-Memory Model Over Final Persistence Solution

What did we decide to do?

We created a modular design, independent of the underlying store. Our goal was to provide an in-memory storage model first and delay decisions about persistent storage until enough evaluation was done to support the use of a particular persistence technology. In particular, we chose Google’s Guava Table collection and Java’s native Map collection.

The following code snippet demonstrates how we’re able to implement the custom field solution at the persistence level, by creating a table and inserting two data values dataX and dataY at the cells represented by the strings rowX, columnX and rowY, columnY:
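Guava's Table keys a value by a (row, column) pair, so with a Table the inserts above are simply table.put("rowX", "columnX", "dataX") and table.put("rowY", "columnY", "dataY"). The same pattern can be sketched using only the JDK's native Map collections, which Murano also uses; the class and method names below are illustrative, not Murano's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative in-memory store keyed by (row, column) -- the same shape
// Guava's Table<String, String, String> provides directly.
public class InMemoryCustomFieldStore {

    // row -> (column -> value), using only the JDK's native Map collections
    private final Map<String, Map<String, String>> cells = new HashMap<>();

    public void put(String row, String column, String value) {
        cells.computeIfAbsent(row, r -> new HashMap<>()).put(column, value);
    }

    public String get(String row, String column) {
        Map<String, String> columns = cells.get(row);
        return columns == null ? null : columns.get(column);
    }

    public static void main(String[] args) {
        InMemoryCustomFieldStore store = new InMemoryCustomFieldStore();
        // With Guava this would be: table.put("rowX", "columnX", "dataX");
        store.put("rowX", "columnX", "dataX");
        store.put("rowY", "columnY", "dataY");
        System.out.println(store.get("rowX", "columnX")); // prints dataX
        System.out.println(store.get("rowY", "columnY")); // prints dataY
    }
}
```

Because callers only see the small put/get interface, the nested maps can later be swapped for Guava's HashBasedTable, or for a database-backed implementation, without changing client code.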

The Murano team was able to quickly provide dependent teams with a functional implementation of the custom field service that could be used for testing and integration, while allowing the team to perform due diligence in choosing the best persistent storage technology.

Providing an in-memory version of the storage is only one small part of delivering a great microservice. Rigorous testing, awesome continuous integration and deployment pipelines, useful metrics, extensive logging and much more are all part of the journey.

Oh yeah, not to mention, the final persistence technology decision as well :-)