This section further defines the integration approaches which underpin the principles described in the previous articles, and provides additional detail to better articulate the intent of each approach and concept.

A distributed orchestration is an integration action which involves multiple systems. A distributed orchestration can be synchronous or asynchronous (both in its response to the calling context and in the operations performed within the orchestration), transactional or non-transactional, and it can maintain state or be implemented statelessly.

The above definition provides a broad canvas upon which to develop flexible integration models and patterns, and harnesses the power of a mature enterprise integration capability.

A service orchestration can be used to abstract one (or more) systems, making use of data mapping or data transformation capabilities to achieve integration with native or bespoke systems. This is typically seen where more than one system “owns” a record, entity, or source of truth, and an enterprise data model is defined and deployed across the Enterprise’s systems.
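The data mapping step described above can be sketched as a simple translation from a bespoke system's record format into the shared enterprise data model. The field names ("cust_no", "customer_id", and so on) are illustrative assumptions, not taken from any real enterprise model:

```python
def map_to_enterprise_model(source_record: dict) -> dict:
    """Translate a bespoke system's record into the shared enterprise model.

    Field names on both sides are hypothetical examples.
    """
    return {
        "customer_id": source_record["cust_no"],
        "full_name": f"{source_record['fname']} {source_record['lname']}".strip(),
        "email": source_record.get("email_addr", "").lower(),
    }

# A record as a legacy system might expose it:
legacy = {"cust_no": "C-1001", "fname": "Ada", "lname": "Lovelace",
          "email_addr": "Ada@Example.com"}
canonical = map_to_enterprise_model(legacy)
print(canonical["customer_id"])  # C-1001
```

In a real orchestration this mapping would live in the integration layer, so each system continues to speak its native format while the enterprise model remains the contract between them.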

For cases where there is a design need to support synchronous and/or asynchronous processing of events or actions, integration middleware may bridge the gap to provide a solution. Common scenarios include:

A client application initiates an integration action but is unable to wait for a response, or a response is not practical;

A client application requires a response, but does not need to wait until the entire operation has executed (a BASE design);

A client application needs to asynchronously consume a service or endpoint which does not support asynchronous execution (exceptions/error conditions must be handled while the client continues);

In these scenarios, the integration layer can provide support by wrapping other endpoints, composing multiple target integration orchestrations (service orchestrations), or wrapping existing COTS (proprietary) interfaces to extend their functionality.
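One way the wrapping described above can work is to put an asynchronous facade in front of a blocking endpoint, so the client fires the call and continues, collecting the result (or the exception) later. This is a minimal sketch assuming a thread-pool approach; `slow_endpoint` is a stand-in for a real synchronous COTS interface:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_endpoint(payload: str) -> str:
    """Simulated blocking downstream call (stands in for a COTS interface)."""
    time.sleep(0.1)
    if not payload:
        raise ValueError("empty payload")
    return payload.upper()

executor = ThreadPoolExecutor(max_workers=4)

def call_async(payload: str):
    """Return a Future immediately; the client is free to continue working.

    Any exception raised downstream surfaces when .result() is called,
    so error conditions can be handled while the client continues.
    """
    return executor.submit(slow_endpoint, payload)

future = call_async("order-42")
# ... the client continues with other work here ...
print(future.result())  # blocks only when the result is actually needed
```

A real integration layer would typically add timeouts, retries and correlation identifiers on top of this pattern, but the shape of the wrapper is the same.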

Message-based integration involves the use of message queues. Message queues are typically utilized for asynchronous processing of stateless messages (data), and are designed to scale well as a result.

Local queuing removes a dependency on external services or systems. Messages are stored locally (relative to a publishing application or service) and remain queued locally until de-queued by a message consumer (as a message pull).

Some advantages of local queuing include:

Reduced risk of lost messages (queue is local to the publisher/sender),

Easy to re-queue messages at the point of message publication,

Local queuing is supported in HA configurations, for example using Windows Failover Clustering.
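The local-queuing mechanics can be sketched with an in-process queue: messages stay with the publisher until a consumer explicitly pulls them. Here `queue.Queue` is an illustrative stand-in for a durable local store such as MSMQ:

```python
import queue

# Local (in-process) queue; a durable store such as MSMQ plays this role
# in a production deployment.
local_queue: queue.Queue = queue.Queue()

def publish(message: dict) -> None:
    """Store the message locally, next to the publisher."""
    local_queue.put(message)

def consume() -> dict:
    """Message pull: the consumer de-queues when it is ready."""
    return local_queue.get()

publish({"event": "order.created", "id": 1})
publish({"event": "order.created", "id": 2})
print(consume()["id"])  # 1 -- messages remain queued locally until pulled, FIFO
```

Because the queue lives with the publisher, a message that fails downstream can be re-queued at the point of publication, which is the advantage noted above.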

Remote queuing provides message queuing capability external to a message publisher or subscriber (i.e. not on the same client or host). It typically reduces the complexity of message operations by abstracting the message queue implementation behind an interface (e.g. an API) which is easier to consume, or which is agnostic to the specific queuing technology used – for example, maintaining a web service endpoint in front of an MSMQ implementation which handles publishing to and subscribing from the queue(s).
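The abstraction described above can be sketched as a small interface that publishers and subscribers code against, so the concrete queuing technology (MSMQ, a message broker, a web service facade) can be swapped without touching callers. The class and method names here are illustrative assumptions:

```python
from abc import ABC, abstractmethod
from collections import deque
from typing import Optional

class MessageQueue(ABC):
    """Technology-agnostic queue interface that callers consume."""
    @abstractmethod
    def publish(self, message: str) -> None: ...
    @abstractmethod
    def receive(self) -> Optional[str]: ...

class InMemoryQueue(MessageQueue):
    """Stand-in implementation; a real one would call the remote queue service."""
    def __init__(self) -> None:
        self._messages: deque = deque()
    def publish(self, message: str) -> None:
        self._messages.append(message)
    def receive(self) -> Optional[str]:
        return self._messages.popleft() if self._messages else None

def send_invoice(q: MessageQueue, invoice_id: str) -> None:
    # The caller is agnostic to which queuing backend is behind the interface.
    q.publish(f"invoice:{invoice_id}")

q = InMemoryQueue()
send_invoice(q, "INV-7")
print(q.receive())  # invoice:INV-7
```

Replacing `InMemoryQueue` with an implementation backed by a remote queue leaves `send_invoice` and every other caller unchanged, which is the point of the abstraction.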

Exceptions within the integration layer may cause the routing of messages to error queues. Some messages may be candidates for message replay. The message replay mechanism allows messages to be manually added into the integration workflow (or source queue) as a new message instance. The message can be consumed as if it had not been previously received or processed.
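The error-queue routing and manual replay described above can be sketched with two in-process queues: on failure the message is parked on an error queue, and a replay step re-enqueues it onto the source queue as a fresh message instance. The handler and the "data fix" are illustrative:

```python
from collections import deque

source_queue: deque = deque()
error_queue: deque = deque()
processed: list = []

def handler(message: dict) -> None:
    """Illustrative consumer; rejects messages with a negative amount."""
    if message.get("amount", 0) < 0:
        raise ValueError("negative amount")
    processed.append(message)

def process_one() -> None:
    message = source_queue.popleft()
    try:
        handler(message)
    except Exception:
        error_queue.append(message)  # route the failed message to the error queue

def replay_all() -> None:
    """Manually re-enqueue each error-queued message as a new instance."""
    while error_queue:
        source_queue.append(dict(error_queue.popleft()))

source_queue.append({"id": 1, "amount": -5})
process_one()                       # fails; message lands on the error queue
replay_all()                        # replay back onto the source queue
source_queue[0]["amount"] = 5       # the "data fix" a bad message may need
process_one()                       # consumed as if never previously received
print(len(processed))  # 1
```

Note that replaying the message unchanged would simply fail again, which leads directly to the limitations discussed next.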

Replay actions are applicable for many services, particularly where transient errors are reasonably expected to occur. However, the following circumstances make Replay actions ineffective or risky:

Short lifetime – The event which triggers the message has a short lifecycle and will usually have expired before a replay can be processed,

Application defects – A service endpoint has a defect/issue and is unable to accept the incoming data. The application must be fixed (patched) before a replay can occur,

Data validation – A service endpoint cannot accept data which does not comply with the expected format or does not pass validation rules. The message requires a data fix to enable replay,

Business/service process design – The source system may send further messages later, creating a risk that replayed messages overwrite the latest version of the data in the target system, or making replays redundant because the same information is re-sent in subsequent messages.

There are some integration scenarios where the actions of a line of business system may be of interest to other applications.

When key data is added, changed or invalidated, the system publishes relevant information to the integration layer. Applications interested in this information subscribe to these specific messages and can consume them.

For systems which are designated as a “source of truth” for domain-specific information, this integration pattern is quite useful. This pattern, combined with some additional safeguards, can help implement an eventual consistency model[1] or a Basically Available, Soft state, Eventually consistent (BASE) model.
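The publish/subscribe flow described above can be sketched as a minimal topic dispatcher: the source-of-truth system publishes a change event, and every interested application receives a copy. The topic name and event fields are illustrative assumptions:

```python
from collections import defaultdict

# topic -> list of subscriber callbacks
subscribers = defaultdict(list)

def subscribe(topic: str, handler) -> None:
    """Register an application's handler for a specific message topic."""
    subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    """Deliver the event to every subscriber of the topic."""
    for handler in subscribers[topic]:
        handler(event)

received = []
subscribe("customer.updated", received.append)   # an interested application
publish("customer.updated", {"customer_id": "C-9", "field": "address"})
print(len(received))  # 1
```

In practice the dispatcher would be the integration layer's messaging infrastructure rather than an in-process dictionary, and delivery would be asynchronous; combined with the versioning safeguard shown earlier, subscribers converge on the source of truth over time, which is the eventual-consistency behaviour the pattern supports.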

About Rob Sanders

IT Professional and TOGAF 9 certified Enterprise Architect with nearly two decades of industry experience: 18 years in commercial software development and 11 years in IT consulting. Check out the "About Rob" page for more information.
