Short and Long-Running Processes in SOA, Part 2

Fast Short-Running BPEL

Let's begin with a discussion on compiled BPEL.

Uses of Short-Running Processes

Having developed an approach to keep SOA processes running for an arbitrarily long time, we now turn our attention to short-running processes and ask: how can we make them run as fast as possible? The two most common uses of a short-running process are:

To implement a synchronous web service operation. The process begins with an input message, runs through a quick burst of logic to process it, sends back the output message, and completes. The client application blocks for the duration, as diagram (a) in the next figure shows. If the process moves too slowly, the client will complain about the response time.

To perform complex routing for the ESB. As David Chappell discusses in his book Enterprise Service Bus (O'Reilly, 2004), a good ESB can natively perform basic content-based and itinerary-based routing, but it needs orchestration processes to handle more complex routing patterns. In diagram (b) in the figure, when the ESB receives a message, it passes it to an orchestration process that proceeds to perform, in eight steps, a series of transformation and invocation maneuvers that could never be achieved with the basic branching capabilities of the ESB. Again, speed is critical. The ESB prefers to get rid of messages as soon as it gets them. When it delegates work to an orchestration process, it expects that process to move quickly and lightly.

Architecture for Short-Running Processes

In considering a design to optimize the performance of these two cases, we assume that our stack has both an ESB and a process integration layer. All messages in and out of the stack go through the ESB. The ESB, when it receives an inbound message, routes it to the process integration engine for processing. The process integration engine, in turn, routes all outbound messages through the ESB. Further, we assume that the ESB uses message queues to converse with the process integration layer. Client applications, on the other hand, typically use web services to converse with the ESB.

The following figure shows how we might enhance this architecture for faster short-running processes. (The implementation we consider is a Java-based BPEL process engine.)

When a client application or partner process calls through the ESB, the ESB routes the event, based on the event's type, either to the general process integration engine or to an engine optimized for short-running processes. To route to the general engine, the ESB places the message on the Normal PI In Queue. That engine is drawn as a cloud; we are not concerned in this discussion with its inner workings. To route to the optimized engine, the ESB either queues the message on SR In Queue or, to reduce latency, directly calls the short-running engine's main class, ProcessManager. (Direct calls are suitable for the orchestration routing case described in the previous figure; there, processes run as an extension of the ESB, so it makes sense for the ESB to invoke them straightaway.)

A set of execution threads pulls messages from SR In Queue and invokes ProcessManager to inject these inbound events into the processes themselves. The role of ProcessManager is to keep the state of, and to execute, short-running processes. Each process is represented in compiled form as a Java class (for example, ProcessA or ProcessB) that inherits from a base class called CompiledProcess. Compiled classes are generated by a tool called BPELCompiler, which creates Java code representing the flow of control specified in the BPEL XML representation of the process. ProcessManager runs processes by creating instances of CompiledProcess-derived classes and calling their methods. It also uses TimeManager to manage timed events.

Processes, whether running on the general engine or on the optimized engine, send messages to partners by placing them on the outbound queue Out Queue, which the ESB picks up and routes to the relevant partner.

A general process engine is built to handle processes of all durations, long and short alike, and, with a mandate this extensive, does not handle the special case of time-critical short-running processes very effectively. There are three optimizations we require, and we build these into the short-running engine:

Process state is held in memory. Process state is never persisted, even for processes with intermediate events. Completed process instances are cleaned out of memory immediately, so as to reduce the memory required.

Processes are compiled, not interpreted. That is, the process definition is coded in Java class form, rather than as an XML document. Compilation speeds the execution time of a burst.

The process may define timed events of a very short duration, on the order of milliseconds. Furthermore, the engine generates a fault when the process exceeds its SLA. The process may catch the fault or let it bubble up to the calling application.

The architecture we sketched in this section, as we discover presently, is designed to meet these requirements.

Example of a Very Fast Process

The next figure shows a short-running process with multiple bursts that benefits from these optimizations.

When the process starts, it initializes its variables (InitVars) and asynchronously invokes a partner process called the Producer (Call Producer Async). It then enters a loop (FetchLoop) that, on each iteration, waits for one of two events from the Producer: result or noMore. If it gets the result event, it invokes two handler services in parallel (Call Handler A and Call Handler B), and loops back. If it gets the noMore event, the process sets the loop's continuation flag to false (Set Loop Stop). The loop exits, and the process completes. While it waits for the producer events, the process also sets a timed event (too long) that fires if neither event arrives in sufficient time. If the timer expires, the process sends an exception message to the producer (Send Exception Msg Producer Async), and loops back.

The timing characteristics are shown in parentheses. The producer, on average, sends a result or noMore event in 80 milliseconds. The handlers that the process invokes to handle a result event average 50 milliseconds and 70 milliseconds, but because they run in parallel, their elapsed time is the greater of these two times, or 70 milliseconds. Thus, an iteration of the loop with a result event averages roughly 150 milliseconds. An iteration with a noMore event averages just 80 milliseconds, because the activity Set Loop Stop runs nearly instantaneously. The cycle time of an instance with one result iteration and one noMore iteration is thus just 230 milliseconds. The too long timed event has a duration of 200 milliseconds, which in itself is rather a small interval, but is a huge chunk of time compared to the normal cycle time. The cycle time of an instance whose three intermediate events are result, too long, and noMore is 430 milliseconds on average. Times this fast cannot be achieved on a general-purpose engine.
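As a quick sanity check, the iteration arithmetic can be reproduced in a few lines of Java (the constant and method names are our own; the times are the quoted averages, in milliseconds):

```java
// Sanity check of the loop-timing arithmetic. Times are the averages
// quoted in the text, in milliseconds.
class CycleTimes {
    static final int PRODUCER_WAIT = 80;  // producer sends result or noMore
    static final int HANDLER_A = 50, HANDLER_B = 70;
    static final int TOO_LONG = 200;      // duration of the "too long" timer

    // The handlers run in parallel, so elapsed time is the greater of the two.
    static int resultIteration()  { return PRODUCER_WAIT + Math.max(HANDLER_A, HANDLER_B); }
    static int noMoreIteration()  { return PRODUCER_WAIT; } // Set Loop Stop is nearly instant
    static int timeoutIteration() { return TOO_LONG; }      // the exception message is sent asynchronously

    public static void main(String[] args) {
        System.out.println(resultIteration());                                      // 150
        System.out.println(resultIteration() + noMoreIteration());                  // 230
        System.out.println(resultIteration() + timeoutIteration() + noMoreIteration()); // 430
    }
}
```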

Running the Very Fast Process on the Optimized Engine

The sequence diagram in the following figure illustrates how this process runs on the short-running engine:

The process starts when a client application sends a message intended to trigger the process's start event. ProcessManager receives this event (either as a direct call or indirectly via an execution thread that monitors the short-running inbound queue) in its routeMessageEvent() method. It then checks with the process class—shown as Process in the figure, a subclass of the CompiledProcess class we discuss presently—whether it supports the given start event type (hasStartEvent()), and if so, injects the event into the process (onStartEvent()). The process, as part of its logic, performs the activities InitVars and CallProducerAsync and enters the first iteration of the while loop, in which it records in its data structures that it is now waiting for three pending events (Set Pending Events). Because one of these events is a timed event, it also registers that event with the TimeManager (addEvent()). The first burst is complete.

In the second burst, the producer process responds with a result event (result: routeMessageEvent()). The ProcessManager checks whether the process instance is waiting for that event (hasPendingEvent()) and injects it (onIntermediateEvent()). The process invokes the two handlers (that is, it invokes CallHandler on HandlerA and HandlerB), completing the first iteration of the loop. It now loops back, resets the pending events (Set Pending Events), and registers a new timed event (addEvent()). The second burst is complete.

Assuming the producer does not respond in sufficient time, the timer expires, and the TimeManager, which checks for expired events on its own thread, notifies the ProcessManager (routeTimedEvent()). ProcessManager gives the event to the process (calling hasPendingEvent() to confirm that the process is waiting for it and onIntermediateEvent() to inject it), and the process in turn performs the SendExceptionMsg activity, completing the second iteration of the loop. The next iteration starts, and the process resets its pending events. The third burst is complete, and we leave it there.

Managing Inbound Events and Timeouts

The state information needed to tie all of this together is held in memory. ProcessManager maintains a data structure called instanceList that, much like the Process table just described, keeps a list of process instances indexed by the combination of conversation identifier and process type. The list contains references to CompiledProcess-derived objects. The logic for routeMessageEvent(), in pseudo code, is the following:
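The original pseudocode listing is not reproduced in this excerpt; the following is a minimal, runnable sketch consistent with the description above. The Event fields, the key format, and the newInstance() factory hook are our assumptions; the other method names come from the text.

```java
import java.util.HashMap;
import java.util.Map;

// Message event carrying the correlation data ProcessManager needs.
class Event {
    final String conversationId, processType, eventType;
    Event(String conv, String type, String event) {
        conversationId = conv; processType = type; eventType = event;
    }
}

// Interface of a compiled process, per the description in the text.
abstract class CompiledProcess {
    abstract boolean hasStartEvent(String eventType);
    abstract boolean hasPendingEvent(String eventType);
    abstract void onStartEvent(Event e);        // runs the first burst
    abstract void onIntermediateEvent(Event e); // runs the next burst
    abstract boolean isComplete();
}

class ProcessManager {
    // Instances indexed by conversation id + process type, held in memory only.
    private final Map<String, CompiledProcess> instanceList = new HashMap<>();

    void routeMessageEvent(Event event) {
        String key = event.conversationId + "/" + event.processType;
        CompiledProcess instance = instanceList.get(key);

        if (instance == null) {
            // No running instance: try to start a new one.
            CompiledProcess candidate = newInstance(event.processType);
            if (candidate != null && candidate.hasStartEvent(event.eventType)) {
                instanceList.put(key, candidate);
                candidate.onStartEvent(event);
                instance = candidate;
            }
        } else if (instance.hasPendingEvent(event.eventType)) {
            instance.onIntermediateEvent(event);
        }

        // Completed instances are cleaned out of memory immediately.
        if (instance != null && instance.isComplete()) {
            instanceList.remove(key);
        }
    }

    // Assumed factory hook: look up the compiled class for the process type.
    CompiledProcess newInstance(String processType) { return null; }
}
```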

TimeManager keeps a list of timed events, each tied to a particular wait node in a process instance. TimeManager's thread periodically sweeps through the list, finding events that have expired. It calls ProcessManager's routeTimedEvent() method to inject the event into the instance. Three types of timed events are supported:

wait activity

onAlarm activity

SLA on the instance

The first two event types simply wake up the process. If the process previously entered a wait activity, for example, the timed event causes it to complete. The third generates a fault. If the process has a handler for this fault, control moves immediately to the handler. Otherwise, the instance is immediately aborted.
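A sketch of the sweep logic might look like the following. The TimedEvent fields and the callback interface are our assumptions; in the engine itself, the callback is ProcessManager's routeTimedEvent() method.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of TimeManager's sweep loop over registered timed events.
class TimeManager implements Runnable {
    interface Callback { void routeTimedEvent(TimedEvent e); }

    static class TimedEvent {
        final String conversationId, processType;
        final String kind;           // "wait", "onAlarm", or "SLA"
        final long expiresAtMillis;
        TimedEvent(String conv, String type, String kind, long expires) {
            conversationId = conv; processType = type;
            this.kind = kind; expiresAtMillis = expires;
        }
    }

    private final List<TimedEvent> events = new ArrayList<>();
    private final Callback processManager;
    TimeManager(Callback pm) { processManager = pm; }

    synchronized void addEvent(TimedEvent e) { events.add(e); }

    // One pass over the list: expired events are removed and injected.
    synchronized void sweep(long nowMillis) {
        for (Iterator<TimedEvent> it = events.iterator(); it.hasNext();) {
            TimedEvent e = it.next();
            if (e.expiresAtMillis <= nowMillis) {
                it.remove();
                processManager.routeTimedEvent(e); // wakes the instance, or raises the SLA fault
            }
        }
    }

    // Runs on TimeManager's own thread, sweeping every few milliseconds.
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            sweep(System.currentTimeMillis());
            try { Thread.sleep(5); } catch (InterruptedException ex) { return; }
        }
    }
}
```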

Compiled Form

The CompiledProcess class (the base class for compiled BPEL processes) keeps track of variables, current pending events, and permitted start events, holding in memory the same sort of data that is defined for the tables ProcessVariable, PendingEvent, and ProcessStarter. Here is an excerpt of the code:
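(The excerpt itself is not included here; the sketch below reconstructs its likely shape. The field names and the Event and BPELGraph stand-ins are our assumptions; the method names match those used elsewhere in the text.)

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal stand-ins so the sketch compiles; the real classes are richer.
class Event { }
class BPELGraph { }

// Sketch of the base class for compiled BPEL processes.
abstract class CompiledProcess {
    // In-memory counterparts of the ProcessVariable, PendingEvent, and
    // ProcessStarter tables.
    protected final Map<String, Object> variables = new HashMap<>();
    protected final Set<String> pendingEvents = new HashSet<>();
    protected final Set<String> startEvents = new HashSet<>();

    boolean hasStartEvent(String eventType)   { return startEvents.contains(eventType); }
    boolean hasPendingEvent(String eventType) { return pendingEvents.contains(eventType); }

    void onStartEvent(Event event)        { /* run the first burst over the graph */ }
    void onIntermediateEvent(Event event) { /* run the next burst */ }

    // Not implemented here: each generated subclass supplies the definition.
    abstract BPELGraph getGraph();
}
```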

Notice that the class is marked abstract, and that its method getGraph() is not implemented. In our design, each BPEL process is run through a special compiler utility that generates a Java class extending CompiledProcess. The utility, called BPELCompiler, is a Java program that takes as input the XML source code for the BPEL process. It parses the XML and outputs a Java source file that is later compiled and loaded into the address space of the process engine. At runtime, the BPEL process runs at the speed of compiled Java. We thus save the performance-stultifying effect of runtime XML parsing and serialization that afflicts many process engines.

Here is a snippet of the Java source of the class for our sample short-running process:
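(The listing itself is not included in this excerpt; the sketch below reconstructs its likely shape from the description that follows. The BPELGraph method signatures, the node labels, and the class name VeryFastProcess are our assumptions; minimal stand-in classes are included so the sketch compiles.)

```java
// Minimal stand-ins so the sketch compiles; the real classes are richer.
class BPELGraph {
    private int nextId = 0;
    int addSequence(String name) { return nextId++; }
    int addReceive(String name)  { return nextId++; }
    int addAssign(String name)   { return nextId++; }
    int addInvoke(String name)   { return nextId++; }
    int addWhile(String name)    { return nextId++; }
    int addPick(String name)     { return nextId++; }
    void addArc(int from, int to) { }
}

abstract class CompiledProcess { abstract BPELGraph getGraph(); }

// Sketch of the class BPELCompiler generates for our sample process.
class VeryFastProcess extends CompiledProcess {
    // One graph per process type, at class scope -- not one per instance.
    static BPELGraph graph = null;

    static {
        graph = new BPELGraph();
        int seq      = graph.addSequence("main");
        int receive  = graph.addReceive("Start");
        int initVars = graph.addAssign("InitVars");
        int callProd = graph.addInvoke("CallProducerAsync");
        int loop     = graph.addWhile("FetchLoop");
        int pick     = graph.addPick("WaitProducerEvent");
        // ... nodes for the handlers, Set Loop Stop, and SendExceptionMsg ...
        graph.addArc(receive, initVars);
        graph.addArc(initVars, callProd);
        graph.addArc(callProd, loop);
        graph.addArc(loop, pick);
        // ... remaining arcs ...
    }

    BPELGraph getGraph() { return graph; }
}
```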

The class does nothing except build a graph representing its process definition. It begins by declaring a class-scoped member variable called graph (static BPELGraph graph = null;). In the static initializer code that follows (beginning with static {), it instantiates this attribute (graph=new BPELGraph();) and proceeds to construct it as a set of nodes (for example, graph.addSequence(), graph.addReceive(), graph.addAssign(), graph.addInvoke(), graph.addWhile(), graph.addPick(), and others not shown) and arcs (graph.addArc()). The class also overrides the getGraph() method that is left abstract in the base class. This method simply returns a reference to the graph variable.

And that's all there is to the generated class. It inherits the most important methods from the base class. Its job is to fill in the one missing ingredient: the actual process definition. Significantly, it creates this definition (that is, the graph) at class scope, so that there is only one copy of it in the process engine, not one copy per process instance. This saves a lot of memory.

The structure of the graph is similar to that of the XML-defined process in the source—which is not surprising given that this code is generated from a parse of the XML. The next figure depicts the graph constructed in the compiled process.

Here is a snippet of the corresponding BPEL source, predictably similar to the graph:
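(The snippet is not included in this excerpt; the sketch below suggests what it might look like. The partner link, operation, and variable names are our assumptions; the shape follows BPEL's sequence/while/pick constructs.)

```xml
<sequence>
  <receive name="Start" partnerLink="client" operation="start" createInstance="yes"/>
  <assign name="InitVars"> <!-- ... --> </assign>
  <invoke name="CallProducerAsync" partnerLink="producer" operation="produce"/>
  <while name="FetchLoop" condition="$loopContinue">
    <pick name="WaitProducerEvent">
      <onMessage partnerLink="producer" operation="result">
        <flow> <!-- the two handlers run in parallel -->
          <invoke name="CallHandlerA" partnerLink="handlerA" operation="handle"/>
          <invoke name="CallHandlerB" partnerLink="handlerB" operation="handle"/>
        </flow>
      </onMessage>
      <onMessage partnerLink="producer" operation="noMore">
        <assign name="SetLoopStop"> <!-- loopContinue := false --> </assign>
      </onMessage>
      <onAlarm> <!-- the "too long" timed event: 200 milliseconds -->
        <for>'PT0.2S'</for>
        <invoke name="SendExceptionMsg" partnerLink="producer" operation="exception"/>
      </onAlarm>
    </pick>
  </while>
</sequence>
```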

The surest way to learn the functionality of compiled processes and the short-running engine is to play with the accompanying compiler demo. See About the Examples for a download link.

Compiled Code—What Not To Do

An alternative to the graph implementation is to represent the process as a single block of code, as follows:

InitVars();
CallProducerAsync();
While (loopContinue)
    WaitNextEvent()
    If (event is result)
        Fork (CallHandlerA)
        Fork (CallHandlerB)
        Join the forks
    Else if (event is noMore)
        Set loopContinue = false
    Else if (event is too long)
        SendExceptionMsg();
    End If
End While

Though simple, this code hampers performance, because the intermediate event in WaitNextEvent() ties up an execution thread while it waits. That's one less thread available to the process engine, and that thread might be needed elsewhere. The graph implementation might be a little harder to code (though that code is generated by a tool anyway), but it uses resources more efficiently. Performance is the point, after all.

About the Examples

The source code for this article is available for download: look for 5487_05_Codes.zip in the code bundle. Refer to the README file for information on how to set up and run the examples.

The example of email funds transfer, which demonstrates how to build a long-running process out of several short-running processes, uses TIBCO's BusinessWorks 5.6 and Enterprise Message Service 4.4, as well as an RDBMS. TIBCO products can be downloaded from http://download.tibco.com; you must have an account to access this site. Once logged in, download the several installation programs required; refer to our README file for the complete list.

The BPEL compiler is a set of Java programs. To run them, you require JDK 1.4 or higher. If you wish to compile the source code or run the programs from Eclipse, you need Eclipse 3.0 or later.

Summary

SOA processes have both active and passive activities. Active activities include calls to systems and services, data manipulations and transformations, and scripts or inline code snippets. Passive activities are events. When performing active activities, the process is actively performing work, tying up the process engine. Events put the process into an idle wait state.

An event can occur at the beginning of the process or in the middle. Every SOA process starts with an event. An event in the middle is called an intermediate event, and not every SOA process has one. The segment of a process between two events is called a burst; in a burst, the process performs active activities.

Processes are classified by duration as short-running, long-running, or mid-running.

Short-running processes span no more than a few seconds. Many short-running processes are a single burst, but some have intermediate events, which break the process into multiple bursts. Languages that support short-running processes include TIBCO's BusinessWorks and BEA's WebLogic Integration.

Long-running processes run for days, weeks, months, or even years, often longer than the uptime of the process engine on which they run. Most of the time is spent waiting on intermediate events; the bursts themselves are quick. The engine persists the state of such processes to a database to survive a restart. Languages that support long-running processes include BPEL and WebLogic Integration.

Mid-running processes run for about the duration of a phone call in a call center. In call center usage, processes are structured as question-and-answer conversations between agent and customer. Bursts process the previous answer and prepare the next question; intermediate events wait for the customer's next answer. The engine keeps process state in memory. If the engine goes down, in-flight instances are lost. Chordiant's Foundation Server is an example of this sort of implementation.

Process data models include process metadata (information about the types of processes currently deployed), instance data (the status of live instances of processes), and pending events (and how to correlate them with instances). We studied the data models in Oracle's BPEL Process Manager and BEA's WebLogic Integration, and developed our own model that generalizes them. We used this model to implement a use case that requires a long-running process (email funds transfer) out of several short-running processes in TIBCO's BusinessWorks.

We concluded by designing a process engine optimized for short-running processes. The design is able to run short-running processes faster than a typical process engine because process state is held in memory (never persisted), processes are compiled rather than interpreted, and the process may define timed events of a very short duration. Further, the engine generates a fault when the process exceeds its SLA; the process may catch the fault or let it bubble up to the caller.
