
The pairing of microservices with the cloud has already generated some trials -- both impressive successes and frightening failures. Be sure your own plans for microservices and cloud don't go astray in performance and quality of experience (QoE). Understand the specific impacts of microservices on performance, architect your microservice-based applications to maximize QoE, and take steps in compute and network architecture to minimize latency and maximize availability.

Applications based on microservices extend the basic notion of componentization. They create a larger number of functionally specialized pieces that are shared across applications and are connected through a company network. Many see microservices as a natural evolution of service-oriented architecture (SOA) or the application of web principles of abstract resources and representational state transfer (REST). Others see them as a way to exploit the agility of the cloud. It's in the balance of these two visions that the performance benefits and risks lie.

Any application that binds its components over a network connection introduces delays that wouldn't be present if those components were tightly coupled in a single machine image. Because microservices componentize applications further, they introduce more network binding and potentially more delay. The question is how that delay can be minimized or offset so that overall performance holds steady, or even improves, after a microservice transition.

At least it's scalable

The first factor that can improve microservices and cloud application performance is scalability of microservice instances under load. Properly designed microservices can be scaled horizontally, meaning additional instances of the service are created in response to workload. That requires a load-balancing mechanism among the instances, which is easier if you've designed your microservices to be stateless or employed something like back-end state control.
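The relationship between statelessness and horizontal scaling can be sketched in a few lines. This is a minimal illustration, not a production balancer; the instance names and the round-robin policy are assumptions for the example.

```python
import itertools

class RoundRobinBalancer:
    """Distributes requests across interchangeable, stateless instances."""
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        # Any instance can serve any request because no session state
        # lives inside an instance -- the essence of stateless design.
        instance = next(self._cycle)
        return instance(request)

# Hypothetical stateless handlers: output depends only on the input.
def make_instance(name):
    def handle(request):
        return f"{name} processed {request}"
    return handle

balancer = RoundRobinBalancer([make_instance(f"svc-{i}") for i in range(3)])
results = [balancer.route(f"req-{i}") for i in range(4)]
```

Because the handlers keep no per-session state, adding a fourth instance under load is just one more entry in the balancer's list.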

The trick here is to focus your scaling efforts on microservices that actually benefit. Load balancing introduces additional network handling delay, so start with microservices that can reasonably be scaled to four or more instances to justify that overhead. Compute-bound processes are easy to scale, but those that require heavy disk access or that call other microservices may be more difficult.

The second way microservices and cloud application performance can be improved is by abstracting database access into logical queries. Databases are almost always hosted in a single fixed location, and often on the data center side of a hybrid cloud. Database access is then network-connected, and the delay can accumulate if a large number of records are to be inspected. A microservice that is hosted near the database and that takes as its input a high-level query or request rather than an I/O command can significantly improve application QoE.
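The contrast between record-level access and a logical query service can be illustrated as follows. The in-memory list stands in for a remote database, and the function names are hypothetical; the point is where the filtering happens relative to the network.

```python
# Hypothetical in-memory "database" standing in for a remote store.
ORDERS = [
    {"id": 1, "customer": "acme", "total": 120.0},
    {"id": 2, "customer": "acme", "total": 80.0},
    {"id": 3, "customer": "globex", "total": 200.0},
]

def chatty_client_total(customer):
    # Anti-pattern: the caller pulls records over the network and
    # filters client-side -- delay accumulates with every record.
    total = 0.0
    for order in ORDERS:  # each iteration would be a network fetch
        if order["customer"] == customer:
            total += order["total"]
    return total

def query_service_total(customer):
    # Better: a microservice co-located with the database accepts a
    # high-level request ("total for this customer") and returns
    # only the answer, in a single network exchange.
    return sum(o["total"] for o in ORDERS if o["customer"] == customer)
```

Both return the same result, but the second crosses the network once instead of once per record inspected.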

While either of these factors can improve microservices and cloud application performance, they may not be enough to overcome basic network-latency issues unless application design and microservice use are optimized. We've already noted that the best microservices are developed in stateless form. So, any copy of a microservice can field any request without using information saved within it from an earlier part of a transaction dialog. Stateless design is frequently used in web programming but is less common in SOA and .NET native development. Developers may not be familiar with the techniques. Developer tools and middleware can help everyone get up to speed and standardize approaches for optimum performance.
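Back-end state control can be sketched minimally: the handler keeps nothing between calls, and everything it needs arrives in the request or is fetched from a shared store by session ID. The dict here is a stand-in assumption for an external store such as a database or cache.

```python
# State lives in a shared back-end store, not inside the service,
# so any replica can continue a transaction dialog. A plain dict
# stands in for the external store in this sketch.
STATE_STORE = {}

def handle_step(session_id, step, payload):
    # Stateless handler: it holds nothing between calls. Replaying
    # or rerouting the next request to a different replica is safe.
    history = STATE_STORE.setdefault(session_id, [])
    history.append((step, payload))
    return len(history)

# Two calls that could land on two different replicas of the service
# still see one coherent session, because the store is shared.
first = handle_step("s1", "add_item", "widget")
second = handle_step("s1", "checkout", None)
```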

Don't overthink the design

One common error in microservice design is to overthink the service coupling to support run-time binding. SOA was designed to allow applications to find services dynamically, but in most installations, the service locations and workflow steering were actually fairly constant. That's also likely to be true in microservice applications, but many are still designed to employ an API broker to link an application with the microservice it needs.

API brokers can improve development agility, but they nearly always limit performance. If you need one, try to combine that function with microservice load balancing. Then you don't have to introduce two additional steps in your microservice workflow. If you know that some microservices will be heavily used, then consider moving them outside the broker framework and publishing them as simple RESTful services. That will reduce the microservice overhead for these applications, and ones that are heavily used don't really need run-time binding anyway.
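One way to picture the hot-path exception is a resolver that checks a direct-endpoint table before falling back to the broker. The registry contents and URLs below are invented for illustration; the design point is simply that heavily used services skip the broker hop.

```python
# Hypothetical registries: most services resolve through a broker,
# but hot-path services are published at fixed RESTful endpoints.
BROKER_REGISTRY = {"reporting": "http://broker.internal/route/reporting"}
DIRECT_ENDPOINTS = {"pricing": "http://pricing.internal/api"}

def resolve(service):
    # Direct endpoints win: no broker hop for heavily used services.
    # Returns (endpoint, extra_hops) so the cost is visible.
    if service in DIRECT_ENDPOINTS:
        return DIRECT_ENDPOINTS[service], 0
    return BROKER_REGISTRY[service], 1
```

The rarely used `reporting` service keeps its run-time binding flexibility at the cost of one extra hop; the hot `pricing` service trades that flexibility away for a shorter path.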

The other common error to be avoided is inefficient microservice structures. A microservice should be small enough to be generally useful, but not so small that it breaks cohesive logical functions into pieces. Over-segmentation will multiply delay alarmingly. You may also want to avoid having microservices call other microservices because this succession of API calls will add to delay, which can be hard to detect without examining all the microservice logic.
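The way chained calls multiply delay is simple arithmetic, which a short model makes concrete. The millisecond figures are illustrative assumptions, not measurements.

```python
def chain_latency(network_ms, work_ms, depth):
    # Each microservice-to-microservice call adds a network round
    # trip on top of the service's own work, so end-to-end delay
    # grows linearly with the depth of the call chain.
    return depth * (network_ms + work_ms)

shallow = chain_latency(10, 5, 2)  # two services in the path
deep = chain_latency(10, 5, 6)     # six services in the path
```

Tripling the chain depth triples the delay, and because the calls are buried in service logic, nothing in any single microservice flags the accumulation.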

There are also useful performance-enhancement steps outside the microservices themselves. One already noted is load balancing. The efficiency of your microservice scaling depends in large part on whether you can distribute work effectively to all instances. But it is also affected by the network delay between users and the load balancer, and between the load balancer and the microservice instances. If your microservices use database resources, you also need to factor in the access delay for those resources. All of this calls for careful policy control over where microservice instances are hosted, which means your DevOps or deployment tools will have to enforce hosting and connection policies to ensure minimal delay.
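A deployment-time policy check of this kind might look like the sketch below. The zone names and latency figures are hypothetical; the idea is that instances of a database-heavy microservice may only be placed in zones within a latency budget of the database's zone.

```python
# Hypothetical inter-zone latency map, as a tool might maintain it.
ZONE_LATENCY_MS = {
    ("dc-east", "dc-east"): 1,
    ("dc-east", "dc-west"): 40,
    ("dc-east", "cloud-1"): 25,
}

def allowed_zones(db_zone, budget_ms):
    # Hosting policy: an instance may land only in zones whose
    # measured latency to the database zone fits the budget.
    return sorted(z2 for (z1, z2), ms in ZONE_LATENCY_MS.items()
                  if z1 == db_zone and ms <= budget_ms)
```

With a 30 ms budget, an instance could be placed in `dc-east` or `cloud-1` but not `dc-west`; a real deployment tool would apply the same test before every placement, including redeployments after changes.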

In short, microservices can boost cloud application performance, or they can seriously degrade it, and their impact is often difficult to assess. That means performance must be managed not only during design and initial deployment, but whenever application workflow or structure changes. Problems can creep in at any time, and only careful review and testing can assure success with microservices and cloud application performance.
