Microservices in SOA infrastructure

07 July, 2017

Anton Hrytsenko

We constantly develop systems using a service-oriented architecture (SOA). Such systems have a long lifetime and a significant cost. An important part of these systems is the SOA infrastructure that ensures integration of their parts. In this article, I’d like to discuss how to utilize microservices in a SOA infrastructure to improve its reliability, maintainability, and portability. Further, I share the details of realizing a SOA infrastructure with microservices using Apache ServiceMix.

The purpose of the article is to show that modern design of complex distributed systems engages and combines different architectures and paradigms to gain the most advantages and avoid the significant drawbacks of each.

Design

We realize the SOA infrastructure using microservices. These services are small, stateless, self-contained modules that are deployed within the enterprise service bus (ESB). So, we utilize a resource-oriented architecture (ROA) for the SOA infrastructure.

According to SOA, the main requirements for services are loose coupling and interoperability. We preserve these requirements for infrastructural services. To ensure compliance with these requirements, we apply the following constraints on interaction between services:

These constraints do not apply to interaction with backend or frontend systems.

We don’t share a common business object model (BOM) between the infrastructural services. So, different services may provide different representations of the same data. This causes additional overhead for data mapping. Since services exchange data in JSON format, we widely utilize JSON-to-JSON transformations for data mapping.
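To make the mapping concrete, here is a minimal, framework-free sketch of a JSON-to-JSON transformation between two service-specific representations of the same employee record. Parsed JSON is modeled as nested maps, as Jackson would produce it; the field names (login, displayName, id, fullName) are purely illustrative assumptions, not taken from a real system.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of mapping between two service-specific representations of the
// same employee data. Parsed JSON is modeled as nested Maps; all field
// names are hypothetical.
public class EmployeeMapping {

    // Maps the directory-service representation to the search-engine one.
    static Map<String, Object> toSearchDocument(Map<String, Object> directoryEmployee) {
        Map<String, Object> document = new HashMap<>();
        document.put("id", directoryEmployee.get("login"));
        document.put("fullName", directoryEmployee.get("displayName"));
        return document;
    }

    public static void main(String[] args) {
        Map<String, Object> employee = new HashMap<>();
        employee.put("login", "jdoe");
        employee.put("displayName", "John Doe");

        Map<String, Object> document = toSearchDocument(employee);
        System.out.println(document.get("id"));        // jdoe
        System.out.println(document.get("fullName"));  // John Doe
    }
}
```

In the real infrastructure, a declarative Jolt specification plays the role of `toSearchDocument`, so the mapping lives in configuration rather than code.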

We categorize infrastructural services as assets and applications (Figure 1).

Figure 1. Infrastructural services

Assets

Assets interact with backend systems and serve as their facades. Typically, an asset combines various basic services for a given backend system. Each asset interacts with only one backend system. However, it may interact with several instances of that backend system to ensure load balancing or fault tolerance.

The main responsibilities of assets are data exchange and data transformation. First, these services exchange data with backend systems using the appropriate protocols. For example, a specific backend system may require mutual authentication, or it may use a proprietary protocol. Second, these services perform harmonization of data types and formats.

Assets harmonize only fundamental data types to ensure the loose coupling between infrastructural services. We inherit fundamental data types from JSON data types that include strings, numbers, booleans, objects, and lists. These data types have exact representation in most programming languages. For example, in Java these types are represented by strings, integers, booleans, maps, and lists.
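As an illustration, here is a sketch of harmonizing backend-specific values into fundamental JSON data types. The backend formats shown (a dd.MM.yyyy date string and a padded numeric string) are assumptions for the example, not taken from a real backend.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Sketch of type harmonization performed by an asset: backend-specific
// formats are normalized to the fundamental JSON data types. The input
// formats are hypothetical.
public class Harmonization {

    // Backend date strings are normalized to the ISO-8601 string form.
    static String harmonizeDate(String backendDate) {
        return LocalDate.parse(backendDate, DateTimeFormatter.ofPattern("dd.MM.yyyy"))
                        .toString();
    }

    // Backend numeric strings are normalized to JSON numbers (Java Integer).
    static Integer harmonizeNumber(String backendNumber) {
        return Integer.valueOf(backendNumber.trim());
    }

    public static void main(String[] args) {
        System.out.println(harmonizeDate("01.07.2017")); // 2017-07-01
        System.out.println(harmonizeNumber(" 42 "));     // 42
    }
}
```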

Applications

Applications operate over assets to implement business functionality. Each application represents a self-contained part of the business functionality. For example, an application may implement a business process.

The main responsibilities of applications are data manipulation and intelligent routing. First, these services manage assets to manipulate data. For example, an application may aggregate data from different assets, transform it, and store it for further use. Second, these services route inbound messages according to business rules. For example, an application may return preprocessed or latest data depending on the message content. Or, an application may throttle inbound requests for heavy data processing to avoid overload. Additionally, applications ensure transaction safety (typically, with compensation) and fault tolerance.
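The throttling idea can be sketched without any framework. Note this simplified variant limits concurrency, whereas Apache Camel's Throttler EIP (which we actually use in the stack) limits the rate per time period; excess requests are rejected here, while a real application could queue or delay them instead.

```java
import java.util.concurrent.Semaphore;

// Framework-free sketch of throttling heavy inbound requests: at most
// `limit` requests are processed concurrently, the rest are rejected.
public class Throttling {

    private final Semaphore permits;

    Throttling(int limit) {
        this.permits = new Semaphore(limit);
    }

    // Returns true if the request was accepted for processing.
    boolean accept() {
        return permits.tryAcquire();
    }

    // Called when a request finishes, freeing a permit.
    void complete() {
        permits.release();
    }

    public static void main(String[] args) {
        Throttling throttling = new Throttling(2);
        System.out.println(throttling.accept()); // true
        System.out.println(throttling.accept()); // true
        System.out.println(throttling.accept()); // false, over the limit
        throttling.complete();
        System.out.println(throttling.accept()); // true again
    }
}
```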

Portability

Because of loose coupling, infrastructural services support flexible deployment. The exact deployment configuration can be adapted to various technical and business requirements. The deployment configuration may vary between environments without changes to the infrastructural services. Also, depending on the container, the deployment configuration can be updated dynamically. The possibility to change the deployment configuration without introducing changes in services is an important characteristic for enterprise applications.

For example, a certain service can be deployed in a dedicated container due to security limitations or performance requirements (Figure 2).

Figure 2. Deployment

Preconfigured containers can be shipped to the target environments using tools like Docker.
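For instance, a Dockerfile for shipping a preconfigured ServiceMix container might look as follows; the distribution version, directory layout, and port are illustrative assumptions.

```dockerfile
# Sketch only: version, paths, and port are illustrative.
FROM openjdk:8-jre
# ADD unpacks a local tar.gz distribution into /opt
ADD apache-servicemix-7.0.1.tar.gz /opt/
# Ship the environment-specific bundles and configuration with the image
COPY deploy/ /opt/apache-servicemix-7.0.1/deploy/
COPY etc/ /opt/apache-servicemix-7.0.1/etc/
# Karaf's default HTTP port
EXPOSE 8181
CMD ["/opt/apache-servicemix-7.0.1/bin/servicemix"]
```

Because the deployment configuration lives outside the services themselves, the same bundles can be baked into differently configured images per environment.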

Technologies

We use Apache ServiceMix as the integration container within the SOA infrastructure; it provides a complete technology stack for realizing infrastructural services (Figure 3).

Figure 3. Technologies

Integration

Apache Karaf is an OSGi environment that provides a runtime for infrastructural services. We use Apache Karaf as the container. For Apache Karaf, services are realized as OSGi bundles. These bundles are small, highly configurable, and easily manageable within the container, which is highly valuable for microservices.

Apache Camel is an integration framework that provides an implementation of enterprise integration patterns (EIP). We use Apache Camel as routing and mediation engine to realize infrastructural services. This framework provides facilities to ensure load balancing, fault tolerance, self-monitoring, and so on. Also, this framework provides complete support for testing.

Apache CXF is a web services framework that implements the JAX-RS specification, which is part of the Java EE specification. We use this framework to realize RESTful web services. Apache Camel provides integration with Apache CXF via the CXFRS component. Apache ServiceMix uses Jetty as the servlet container for RESTful web services.

Apache ActiveMQ is a messaging server that provides support for the JMS specification. We use this server for asynchronous messaging between services. Apache Camel provides integration with Apache ActiveMQ via the ActiveMQ component.
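To show how these pieces fit together, here is a sketch of an OSGi Blueprint descriptor that exposes a RESTful endpoint via CXFRS (served by Jetty) and hands inbound messages to an ActiveMQ queue for asynchronous processing. The addresses, the queue name, and the `EmployeeResource` class are illustrative assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: endpoint addresses, queue name, and resource class are
     hypothetical. -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- Register the ActiveMQ component pointing at the local broker -->
  <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="brokerURL" value="tcp://localhost:61616"/>
  </bean>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
      <!-- Expose a RESTful endpoint through CXFRS -->
      <from uri="cxfrs:http://0.0.0.0:8181/cxf/employees?resourceClasses=com.example.EmployeeResource"/>
      <!-- Hand the inbound message off for asynchronous processing -->
      <to uri="activemq:queue:employees.inbound"/>
    </route>
  </camelContext>
</blueprint>
```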

Additionally, we widely use libraries for JSON data processing. For example, Jackson for marshalling, JsonPath for querying, and Jolt for transformation.

Application

We have a business requirement to retrieve data on employees from different sources and perform search over the aggregated data.

At the first stage, we analyze the engaged backend systems. The primary data on employees are stored in the directory service, the additional data are stored in the social network, and the aggregated data are stored in the search engine.

Further, we create assets for these backend systems. The assets for the directory service and for the social network provide RESTful web services to retrieve data on employees. These assets ensure communication with the backend systems and initial harmonization of the retrieved data. The asset for the search engine provides RESTful web services to manage the aggregated data and search over these data.

At the second stage, we analyze the required business functionality. The first piece of functionality relates to the search over the aggregated data. The second piece relates to the actualization of these data.

After that, we create a separate application for each piece of functionality. The first application primarily uses the asset for the search engine to perform search over the aggregated data. It also uses the asset for the directory service for scenarios where the latest data is required. This application provides RESTful web services for the frontend systems. The second application retrieves data using the assets for the directory service and for the social network, aggregates these data, and stores the result using the asset for the search engine. This application performs actualization on a schedule.
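The actualization route of the second application can be sketched in the Camel Blueprint XML DSL as follows. The timer period and endpoint addresses are illustrative assumptions, and a real route would also configure an aggregation strategy so that the enrichment merges the two representations instead of replacing the message body.

```xml
<!-- Sketch: timer period and asset addresses are hypothetical; an
     AggregationStrategy for merging the enriched data is omitted. -->
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
  <route id="actualization">
    <!-- Trigger actualization every hour -->
    <from uri="timer:actualization?period=3600000"/>
    <!-- Retrieve the primary data through the directory-service asset -->
    <to uri="http://localhost:8181/cxf/directory/employees"/>
    <!-- Enrich with additional data from the social-network asset -->
    <enrich>
      <constant>http://localhost:8181/cxf/social/employees</constant>
    </enrich>
    <!-- Store the aggregated data through the search-engine asset -->
    <to uri="http://localhost:8181/cxf/search/employees"/>
  </route>
</camelContext>
```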

The final infrastructure includes all these services (Figure 4). These services can be configured and deployed using an appropriate container. We realize these services as OSGi bundles and deploy them within Apache ServiceMix.

Figure 4. Application

Each of these services has a single responsibility and a small size. As a result, these services are highly maintainable and portable. Also, each of these services and the overall infrastructure provides the required level of scalability and reliability.

Also, these services can be maintained, tested, and deployed separately with predictable impact on overall functionality.


Anton Hrytsenko

Lead Java Developer

Anton is a Lead Java Developer at Sigma Software. He has been engaged in Java development since 2011. In recent years, his professional activity has focused on the development and integration of enterprise applications. Anton is enthusiastic about knowledge sharing and moving the IT industry forward. He used to teach a course on Java, and now often speaks at Java meetups and conferences.