
Working towards microservices with monolithic applications

I want to start off by saying that having a monolithic application isn’t always a bad thing, and this article may not necessarily be for you. Yet. It comes down to adopting microservices when they make sense, then diving into that work at the moment it’s needed, and not a moment later. Utilizing a microservices architecture too soon will hold you back and slow the development process, whereas waiting too long to perform the migration makes the refactoring effort very painful.

If you have a single product that was designed well, is easily maintainable, and carries minimal technical debt, you may not have many reasons to invest in a microservices architecture. If, however, certain areas are becoming concerns for performance and scalability, you may slowly split those areas out.

If you’re like the rest of us, dealing with multiple products gained through acquisitions, mergers, or reorganizations, and originally built long before best practices existed for online services, there is little hope that the code base is maintainable or carrying minimal technical debt.

I’m also going to assume that you know little about microservices, and that’s why you’re reading this article in the first place.

What is a microservice?

Microservices are not really a new concept; the idea took shape about a decade or so ago when the whole Service-Oriented Architecture (SOA) movement was gaining ground. The main difference is that SOA is coarse-grained, working with services at a higher level than microservices. Of all the departments that could exist in a company, a SOA-based service might represent the whole Human Resources department, whereas a microservice is fine-grained, doing something small and simple at a lower level, like your ‘identity’.

The ‘identity’ microservice would handle something like a properly designed class in code – a person’s name, address, email, phone numbers, government identifiers, and other unique peripheral properties that comprise a person’s identity.
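As a minimal sketch of that comparison to a properly designed class (the field names here are illustrative assumptions, not a prescribed schema), the identity domain might be modeled like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Identity:
    """The 'identity' domain: unique peripheral properties of a person."""
    person_id: str
    full_name: str
    email: str
    addresses: List[str] = field(default_factory=list)
    phone_numbers: List[str] = field(default_factory=list)
    government_ids: List[str] = field(default_factory=list)

# A microservice would expose CRUD operations over exactly this shape
# and nothing more; anything outside the identity concept lives elsewhere.
person = Identity(person_id="p1", full_name="John Smith",
                  email="john@example.com")
```

The point of the sketch is the boundary: every field describes the person’s identity, and nothing in it belongs to another domain like organization or benefits.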

There would be other services like ‘organization’ that handles your placement in the corporate hierarchy tree structure, and another service for something like ‘benefits’ that simply tracks what you have for coverage, when it’s up for renewal, or life events that make you qualified to make changes prior to open enrollment. And so on for each type of domain that falls into the Human Resources role.

Microservices are just small autonomous services that work in conjunction with other microservices, or are consumed by SOA services or larger applications. If you are familiar with Domain-Driven Design, it’s essentially developing each ‘domain’ into its own service.

What are the benefits?

The great thing about these microservices is that they are extremely simple in what they do and what they should handle. Here are just a few of the advantages that a microservices architecture can give you just off the top of my head…

Configurations move away from source and become discoverable (via service registries)

Perfectly scalable through use of containers

Single front door, no more direct access shenanigans.

Easier to understand context and grasp the big picture.

Eliminates long term commitment to a technology stack; best language/framework/database for the job.

Clear separation of concerns

Decentralized governance

Independently scalable, changeable, and replaceable

Independent development, testing, and deployment from rest of applications (independent speed of development)
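To make the first of those benefits concrete, here is a toy in-memory service registry, a hypothetical stand-in for real registries like Consul, etcd, or Eureka. Services register their address at startup, and clients discover it by name instead of reading it from static configuration:

```python
# Toy service registry sketch; real registries also do health checks,
# load balancing, and TTL-based deregistration.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, address):
        """Called by a service instance at startup."""
        self._services.setdefault(name, []).append(address)

    def discover(self, name):
        """Called by clients instead of reading a hard-coded address."""
        instances = self._services.get(name)
        if not instances:
            raise LookupError(f"no instances of {name!r} registered")
        return instances[0]  # a real registry would pick among healthy ones

registry = ServiceRegistry()
registry.register("identity", "http://10.0.0.5:8080")
print(registry.discover("identity"))  # → http://10.0.0.5:8080
```

The names and addresses above are made up for illustration; the takeaway is that the service’s location becomes a runtime lookup rather than a config entry checked into source.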

What are the disadvantages?

The bad thing about these microservice architectures is that they aren’t a free lunch: they come with some high upfront investments, plus a nominal amount of ongoing effort over the long term assuming you have a good set of development principles. Some challenges in both of these timeframes include:

Testing investment – Test as much as possible at the surface through functional testing and internally through unit and integration tests.

Automation investment – Perform the testing as often as possible, and streamline the deployment and delivery pipelines so the service is carried by the same repeatable logic from sandbox to production.

Monitoring investment – This is just as much about tooling as it is about instrumentation, to allow both operations and development teams to have a more detailed accounting of what is occurring to make proper decisions in rollbacks, fixes, or workarounds.

Software delivery – It may be time to move away from dedicating a single server to a single service. Start the work to migrate away from server sprawl through containers.

Workflow knowledge – The domain is no longer directly integrated and instead is loosely coupled over the network through versioned contracts. How do you perform traceability to track a single request at the front door through the entire system and the microservices?

Data knowledge – Knowing where and how the existing data is used and getting it migrated from existing products over into the new microservice, and what can and shouldn’t be migrated.

Operational excellence – High availability for the new service and its data tier, and getting to know the new stack and its quirks.

Development excellence – Fault tolerance is a large concern where it may not have been even considered before. How do you fail gracefully in the event that you can’t fetch the user’s address?

Debugging complexity – Self explanatory. How well you do all of the above plays into how hard this becomes.

Security concerns – How do you secure a microservice request so that someone who has penetrated the network can’t simply dump your entire identity microservice via its REST interface?
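To illustrate the fault-tolerance question above (how do you fail gracefully when you can’t fetch the user’s address?), here is one possible sketch. The client class and method names are assumptions for illustration, not a real library:

```python
def fetch_address(identity_client, person_id, timeout=0.5):
    """Return the person's address, or None if the identity service fails."""
    try:
        return identity_client.get_address(person_id, timeout=timeout)
    except Exception:
        # Degrade gracefully: the caller renders the page without an
        # address rather than failing the user's entire request.
        return None

# Hypothetical stub standing in for a real HTTP client whose
# downstream identity service is unreachable.
class DownIdentityClient:
    def get_address(self, person_id, timeout):
        raise ConnectionError("identity service unreachable")

print(fetch_address(DownIdentityClient(), "42"))  # → None
```

In a production system you would add retries, circuit breaking, and logging around that fallback, but the principle is the same: a missing address should never take down the whole page.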

There is also the inherent bias developers are going to have in how they approach problems; they will want to direct the solution towards the products they’ve worked on. It’s critically important that the developers maintain the perspective that the microservice serves all clients equally.

Most of these items may not be issues for you because your teams have evolved far enough to engineer out most of the concerns, or they are already part of your life cycle management process.

The primary focus is making the software development life cycle easier to manage and change, so changes should be made in service of a cohesive suite and not just a bunch of tiny services.

Where do I start?

I’m going to assume your development team is grappling with the weight of multiple applications that are supposed to act in unison as a suite, but in today’s world are very much a suite in name only. Each application plays a particular role in your company’s solution portfolio, with each product focusing on a particular role or discipline within the suite.

Let’s say you have six major applications through those acquisitions – you’re going to have six different sets of credentials, six possible different sets of technology stacks, six possible different UI frameworks and styling, and six different levels or types of development principles.

The first step in migrating to a microservices architecture is looking at all six silos and finding the areas that are all in common, but are completely disconnected from one another. We’ll take the ‘identity’ domain concept from earlier as an example.

There are really only two different directions you could take to start the microservices effort:

Recycle the existing code base – Take the best engineered domain code that is application-agnostic and separate it into its own deployed service that all of the other products can utilize. The chances that you have something so easily movable into a shared service are pretty small, given that it’s most likely tailored very specifically to a given application. If you don’t have that issue, then kudos to you and the engineering teams that made it possible. We’re not all that lucky. You must be a startup making the move when you were supposed to, aren’t you?

Create a new code base – Take a team with the best software development principles that has actually been practicing them, and the best product manager you have. Allow them to greenfield the microservice in an isolated source code repository, but developed in the most social manner possible, to ensure that all of the peripheral properties of the domain concept are taken into consideration so that it serves all products equally in a suite-level, generic manner.

Word of warning to those eager to start the New Year with a bunch of changes ‘for the good’: don’t get greedy and take all of the best developers you have around the company and throw them into a single team. The quickest way to sabotage your own project is to compose a team full of incompatible personality types. Go with the best people you know can work together and let them self-organize.

Assuming you have your new fancy pants microservice deployed and ready to keep track of an endless number of John Smiths, the next step is migrating existing applications to use it. It’s important to keep in mind at this juncture that this new microservice should be viewed as being a pristine environment.

You don’t want to simply dump in a bunch of useless, orphaned, or stale data. The filter that keeps out the bad data should not live in the microservice, but instead in the translation or ‘anti-corruption’ layer that redirects the identity information from each product’s own data store to the new microservice.

This anti-corruption layer is your last line of defense, keeping product-specific implementation details from affecting the generic, suite-wide microservice. It’s imperative that the microservice’s data is always in a good, known state. You don’t want to taint the data stream with something that the other consuming services don’t understand.
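A sketch of what that anti-corruption layer might look like, with legacy field names assumed purely for illustration:

```python
def to_identity_contract(legacy):
    """Anti-corruption layer: translate one legacy product's record into
    the suite-wide identity contract, filtering bad data on the way."""
    # The filter lives here, not in the microservice: stale or orphaned
    # records never reach the pristine data store.
    if not legacy.get("email") or legacy.get("status") == "orphaned":
        return None
    return {
        "full_name": f"{legacy.get('first', '')} {legacy.get('last', '')}".strip(),
        "email": legacy["email"].lower(),
    }

record = {"first": "John", "last": "Smith",
          "email": "John@Example.com", "status": "active"}
print(to_identity_contract(record))
```

Each legacy product would get its own translation function like this, so product quirks (casing, field names, status flags) are normalized before the data ever touches the microservice.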

Hopefully you have built the microservice to adhere to a contract strict enough not to accept just anything, but I can see how something like an ‘extended properties’ collection, which allows future properties to be added to the identity service without changing the contract or requiring a version bump, could be tainted.

A collection of identifiers on an identity is a good example. There’s a driver’s license, a social security number or other national identifier, and a passport identifier. But then, in the last line of defense of a consuming application, a developer decides to overload its usage for certification identifiers (like ITIL, MCP, CCIE, MCIE, etc.).

While those identifiers are valid for an identity, they probably belong in another microservice that deals with the identity’s talent profile. The other applications may not expect them, which causes unexpected behavior, and you’ve used the service beyond its original intent. Communication is key on this journey, and adopting DevOps before the move is highly suggested.
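One sketch of how a strict contract could reject that kind of overloading at the microservice boundary; the allowed identifier type names are assumptions for illustration:

```python
# Identifier types the identity contract accepts; certification IDs
# (ITIL, MCP, CCIE, ...) belong in a talent-profile service instead.
ALLOWED_IDENTIFIER_TYPES = {"drivers_license", "national_id", "passport"}

def validate_identifiers(identifiers):
    """Enforce the contract: reject identifier types outside the domain."""
    bad = [i["type"] for i in identifiers
           if i["type"] not in ALLOWED_IDENTIFIER_TYPES]
    if bad:
        raise ValueError(f"identifier types outside the identity contract: {bad}")
    return identifiers

validate_identifiers([{"type": "passport", "value": "P123456"}])  # accepted
```

Validation like this at the service boundary means a consuming application can’t quietly repurpose the collection, so every other consumer keeps seeing only the data the contract promises.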

But back to the microservices universe: The resulting product from this level of integration should eventually allow someone who has moved or changed their last name to be reflected globally across all of the applications they use.

Where do I stop?

Think of each of the microservices as something that you would put into a ‘common’ library. Anything that would only ever serve a single product isn’t exactly ‘common’ and should remain in that application. The migration ends when you no longer have any domains that are shared between multiple products.

Over time, each application should eventually become a thin facade over the microservices, containing only a very concise set of business logic that delivers the necessary value to the user’s interaction with the application. A suite, after all, is just a collection of discipline-oriented workflows.

You should also keep the microservices extremely simple, but not to the point where you are segmenting a domain. Too many microservices can reduce the development team’s ability to grasp the big picture or put a particular block of code into context.