Smart Processes Using Rules

Packt Publishing

Java developers and architects will find this book an indispensable guide to understanding Business Process Management frameworks in the real world. Using the open source jBPM5, it teaches through authentic examples, screenshots, and diagrams.

Good old integration patterns

What we've learned from experience with jBPM3 is that a rule engine can become very handy for evaluating different situations and making automatic decisions based on the information available. Based on my experience in consulting, I've noticed that people who understand how a process engine works, and feel comfortable with it, start looking at rule engines such as Drools. The most intuitive first step is to delegate the business decisions in your processes, and the data validations, to a rule engine. In the past, adopting these two different technologies at the same time was difficult, mostly because of the learning curve, as well as the maturity and investment required for a company to learn and use both technologies at once. At the end of the day, companies spend time and money creating in-house integrations and solutions to merge these two worlds.

The following example shows what people have done with jBPM3 and the Drools Rule Engine:

The first and most typical use case is to use a rule engine to choose between different paths in a process. Usually, the information that is sent to the rule engine is the same information that is flowing through the process tasks or just some pieces of it; we expect a return value from the rule engine that will be used to select which path to take. Most of the time, we send small pieces of information (for example, the age or salary of a person, and so on) and we expect to get a Boolean (true/false) value, in the case that we want to decide between just two paths, or a value (integers such as 1, 2, 3, and so on) that will be used to match each outgoing sequence flow. In this kind of integration, the rule engine is considered just an external component. We expect a very stateless behavior and an immediate response from the rule engine.

The previous figure shows a similar situation, in which we want to validate some data, so we define a task inside our process to perform this validation or decoration. Usually we send a set of objects that we want to validate or decorate, and we expect an immediate answer from the rule engine. The type of answer that we receive depends on the kind of validation or decoration rules that we write. Usually, these interactions interchange complex data structures, such as a full graph of objects.

And that's it! Those two examples show the classic interaction from a process engine to a rule engine. You may have noticed the stateless nature of both the examples, where the most interesting features of the rule engine are not being used at all. In order to understand a little bit better why a rule engine is an important tool and the advantages of using it (in contrast to any other service), we need to understand some of the basic concepts behind it.

The following section briefly introduces the Drools Rule Engine and its features, as well as an explanation of the basic topics that we need to know in order to use it.

The Drools Rule Engine

The reason why rule engines are extremely useful is that they allow us to express declaratively what to do in specific scenarios. In contrast to imperative languages such as Java, the Rule Engine provides a declarative language that is used to evaluate the available information.

Most people who are not familiar with rule engines, but have heard of them, think that rule engines are used to extract if/else statements from the application's code. This definition is far from reality and doesn't explain the power of rule engines.

First of all, rule engines provide us a declarative language to express our rules, in contrast to the imperative nature of languages such as Java.
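To make the contrast concrete, here is a small, self-contained sketch of imperative checks in plain Java; the method, class name, and values are hypothetical illustrations, not code from the book's projects:

```java
public class ImperativeChecks {

    // Imperative style: the statements are evaluated strictly top to bottom,
    // in the exact sequence we wrote them.
    public static String evaluate(int age, int salary) {
        if (age > 18) {            // evaluated first
            return "enabled to drive";
        }
        if (salary > 1000) {       // evaluated only if the first test failed
            return "can buy a car";
        }
        return "no match";
    }

    public static void main(String[] args) {
        System.out.println(evaluate(19, 0));
        System.out.println(evaluate(10, 2000));
    }
}
```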

In Java code, we know that the first line is evaluated first, so if the expression inside the if statement evaluates to true, the next line will be executed; if not, the execution will jump to the next if statement. There are no doubts about how Java will analyze and execute these statements: one after the other until there are no more instructions. We commonly say that Java is an imperative language in which we specify the actions that need to be executed and the sequence of these actions.

Java, C, PHP, Python, and Cobol are imperative languages, meaning that they follow the instructions that we give them, one after the other.

Now, if we analyze the DRL snippet (DRL stands for Drools Rule Language), we see that we are not specifying a sequence of imperative actions. We are specifying situations that are evaluated by the rule engine, so that when those situations are detected, the rule consequence (the then section of the rule) becomes eligible to be executed.

Each rule defines a situation that the engine will evaluate. Rules are defined using two sections: the conditional section, which starts with the when keyword and defines the filter that will be applied to the information available inside the rule engine, and the consequence section, which starts with the then keyword. This example rule contains the following condition:

when
$p: Person( age > 18 )

This DRL conditional statement filters all the objects inside the rule engine instance that match this condition. This conditional statement means "match for each person whose age is over 18". If we have at least one Person instance that matches this condition, this rule will be activated for that Person instance. A rule that is activated is said to be eligible to be fired. When a rule is fired, the consequence side of the rule is executed. For this example rule, the consequence section looks like this:

then
$p.setEnabledToDrive(true);
update($p);

In the rule consequence, you can write any Java code you want. This code will be executed as regular Java code. In this case, we are getting the object that matched the filter—Person( age > 18 )—which is bound to the variable called $p, and changing one of its attributes. The second line inside the consequence notifies the rule engine of this change so that it can be used by other rules.

A rule is composed of a conditional side, also called Left-Hand Side (LHS for short) and a consequence side, also called Right-Hand Side (RHS for short).
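Putting the two sides together, the complete example rule discussed above reads as follows:

```drl
rule "over 18 enabled to drive"
when
    $p: Person( age > 18 )
then
    $p.setEnabledToDrive(true);
    update($p);
end
```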

We will be in charge of writing these rules and making them available to a rule engine that is prepared to host a large number of rules.

To understand the differences between the following two lines, and the advantages of the declarative one, we need to understand how a rule engine works. The first big difference is behavioral: we cannot force the rule engine to execute a given rule. The rule engine will pick up only the rules whose expressed conditions match.

if(person.getAge() > 18)

And

$p: Person( age > 18 )

If we try to compare rules with imperative code, we usually analyze how the declarative nature of rule languages can help us to create more maintainable code. The following example shows how application codes usually get so complicated, that maintaining them is not a simple task:

if (…) {
    if (…) {
        if (…) {
        }
    } else {
        if (…) {
        }
    }
}

All of the evaluations must be done in a sequence. When the application grows, maintaining this spaghetti code becomes complex—even more so when the logic that it represents needs to be changed frequently to reflect business changes. In our simple example, if the person that we are analyzing is 19 years old, the only rule that will be evaluated and activated is the rule called "over 18 enabled to drive". Imagine that we had mixed and nested if statements evaluating different domain entities in our application. There would be no simple way to do the evaluations in the right order for every possible combination. Business rules offer us a simple and atomic way to describe the situations that we are interested in, which will be analyzed based on the data available. When the number of these situations grows and we need to frequently apply changes to reflect the business reality, a rule engine is a very good alternative to improve readability and maintenance.

Rules represent what to do for a specific situation. That's why business rules must be atomic. When we read a business rule, we need to be able to clearly identify what the condition is and exactly what will happen when the condition is true.

To finish this quick introduction to the Drools Rule Engine, let's look at the following example:

rule "enabled to drive must have a car"
when
    $p: Person( enabledToDrive == true )
    not(Car(person == $p))
then
    insert(new Car($p));
end

rule "person with new car must be happy"
when
    $p: Person()
    $c: Car(person == $p)
then
    $p.setHappy(true);
end

rule "over 18 enabled to drive"
when
    $p: Person( age > 18, enabledToDrive == false )
then
    $p.setEnabledToDrive(true);
    update($p);
end

When you get used to the Drools Rule Language, you can easily see how the rules will work for a given situation. The rule called "over 18 enabled to drive" checks the person's age in order to decide whether he/she is enabled to drive or not. By default, persons are not enabled to drive. When this rule finds a Person instance that matches this filter, the rule is activated; when the rule's consequence gets executed, the enabledToDrive attribute is set to true and we notify the engine of this change. Because the Person instance has been updated, the rule called "enabled to drive must have a car" is now eligible to be fired. Because there is no other active rule, its consequence will be executed, causing the insertion of a new Car instance. As soon as we insert a new Car instance, the conditions of the last rule become true. Notice that this last rule evaluates two different types of objects, as well as joining them: the rule called "person with new car must be happy" checks that the car belongs to the person with $c: Car(person == $p). As you may imagine, the $p: prefix creates a binding to the object instances that match the conditions for that pattern. In all the examples in this book, I've used the $ sign to denote variables that are being bound inside rules. This is not a requirement, but it is a good practice that allows you to quickly distinguish variables from object field filters.

Please notice that the rule engine doesn't care about the order of the rules that we provide; it will analyze them by their conditional sections, not by the order in which we provide the rules.

This article provides a very simple project implementing this scenario, so feel free to open it from inside the chapter_09 directory and experiment with it. It's called drools5-SimpleExample. This project contains a test class called MyFirstDrools5RulesTest, which tests the previously introduced rules. Feel free to change the order of the rules provided in the /src/test/resources/simpleRules.drl file. Please take a look at the official documentation at www.drools.org to find more about the advantages of using a rule engine.

What Drools needs to work

If you remember the jBPM5 API introduction section, you will recall the StatefulKnowledgeSession interface that hosts our business processes. This stateful knowledge session is all that we need in order to host and interact with our rules as well. We can run our processes and business rules in the same instance of a knowledge session without any trouble. In order to make our rules available in our knowledge session, we will need to use the knowledge builder to parse and compile our business rules and to create the proper knowledge packages. The only difference is that we will now use the ResourceType.DRL resource type instead of the ResourceType.BPMN2 type that we were using for our business processes.
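As a sketch, and assuming the standard Drools 5 knowledge builder API (the rules.drl and process.bpmn resource names are just placeholders), the setup looks roughly like this:

```java
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"), ResourceType.DRL);
kbuilder.add(ResourceFactory.newClassPathResource("process.bpmn"), ResourceType.BPMN2);
if (kbuilder.hasErrors()) {
    // fail fast if the DRL or BPMN2 resources don't compile
    throw new IllegalStateException(kbuilder.getErrors().toString());
}
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
```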

So the knowledge session will represent our world. The business rules that we put in it will evaluate all the information available in the context. From our application side, we will need to notify the rule engine which pieces of information will be available to be analyzed by it. In order to inform and interact with the engine, there are four basic methods provided by the StatefulKnowledgeSession object that we need to know.

We will be sharing a StatefulKnowledgeSession instance between our processes and our rules. From the rule engine perspective, we will need to insert information to be analyzed. These pieces of information (which are Java objects) are called facts according to the rule engine's terminology. Our rules are in charge of evaluating these facts against our defined conditions.

The insert() method notifies the engine of an object instance that we want to analyze using our rules. When we use the insert() method, our object instance becomes a fact. A fact is just a piece of information that is considered to be true inside the rule engine. Based on this assumption, a wrapper to the object instance will be created and returned from the insert() method. This wrapper is called FactHandle and it will allow us to make references to an inserted fact. Notice that the update() and retract() methods use this FactHandle wrapper to modify or remove an object that we have previously inserted.
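In code, and assuming a hypothetical Person domain class, the interaction with these methods looks like this sketch:

```java
Person person = new Person("John", 21);        // a plain domain object
FactHandle handle = ksession.insert(person);   // person is now a fact
person.setAge(22);
ksession.update(handle, person);               // notify the engine of the change
ksession.retract(handle);                      // remove the fact from the session
```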

Another important thing to understand at this point is that only top-level objects will be handled as facts. Consider the following:

FactHandle personHandle = ksession.insert(new Person());

This statement notifies the engine about the presence of a new fact: the Person instance. Having Person instances as facts will enable us to write rules that use the pattern Person() to filter the available objects. What if we have a more complex structure? Suppose, for example, that the Person class defines a list of addresses:

class Person{
private String name;
private List<Address> addresses;
}

In such cases, we need to decide whether we are interested in making inferences about addresses. If we just insert the Person object instance, none of the Address instances will be treated as facts by the engine. Only the Person object will be filtered. In other words, a condition such as the following would never be true:

when
$p: Person()
$a: Address()

This rule condition would never match, because we don't have any Address facts. In order to make the Address instances available to the engine, we can iterate the person's addresses and insert them as facts.
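Assuming the Person class exposes a getAddresses() accessor (an assumption; the original class only shows the field), the iteration could look like this:

```java
ksession.insert(person);                       // only the Person becomes a fact
for (Address address : person.getAddresses()) {
    ksession.insert(address);                  // now each Address is a fact too
}
```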

If our object changes, we need to notify the engine about the changes. For that purpose, the update() method allows us to modify a fact using its fact handle. Using the update() method ensures that only the rules that filter on this fact type get re-evaluated. When a fact is no longer true, or when we don't need it anymore, we can use the retract() method to remove that piece of information from the rule engine.

Up until now, the rule engine has generated activations for all the rules and facts that match with those rules. No rule's consequence will be executed if we don't call the fireAllRules() method. The fireAllRules() method will first look for activations inside our ksession object and select one. Then it will execute that activation, which can cause new activations to be created or current ones canceled. At this point, the loop begins again; the method picks one activation from the Agenda (where all the activations go) and executes it. This loop goes on until there are no more activations to execute. At that point the fireAllRules() method returns control to our application.
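The loop can be sketched in plain Java as a toy model. This is only an illustration of the cycle, not the actual Drools implementation: real activations are selected by conflict-resolution strategies, not simple queue order.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ToyAgenda {

    // Toy model of fireAllRules(): activations are queued consequences;
    // executing one may enqueue new activations.
    public static int fireAllRules(Deque<Runnable> agenda) {
        int fired = 0;
        while (!agenda.isEmpty()) {              // loop until no activations remain
            Runnable activation = agenda.poll(); // select one activation
            activation.run();                    // may create new activations
            fired++;
        }
        return fired;                            // control returns to the caller
    }

    public static void main(String[] args) {
        Deque<Runnable> agenda = new ArrayDeque<Runnable>();
        agenda.add(() -> agenda.add(() -> {}));  // one activation creates another
        System.out.println(fireAllRules(agenda));
    }
}
```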

The following figure shows this execution cycle:

This cycle represents the inference process, since our rules can generate new information (based on the information that is available), and new conclusions can be derived by the end of this cycle.

Understanding this cycle is vital in working with the rule engine. As soon as we understand the power of making data inferences as opposed to just plain data validation, the power of the rule engine is unleashed. It usually takes some time to digest the full range of possibilities that can be modeled using rules, but it's definitely worth it.

Another characteristic of rule engines that you need to understand is the difference between stateless and stateful sessions. In this book, all the examples use the StatefulKnowledgeSession instance to interact with processes and rules. A stateless session can be considered a very simple StatefulKnowledgeSession that executes the previously described execution cycle just once. Stateless sessions can be used when we only need to evaluate our data once and then dispose of that session, because we are not planning to use it anymore. Most of the time, because processes are long-running and multiple interactions will be required, we need to use a StatefulKnowledgeSession instance. In a StatefulKnowledgeSession, we can go through the previous cycle multiple times, which allows us to introduce more information over time instead of all at the beginning. Just so you know, the StatelessKnowledgeSession instance in Drools exposes an execute() method that internally inserts all the facts provided as parameters, calls the fireAllRules() method, and finally disposes of the session. There have been several discussions about the performance of these two approaches, but inside the Drools Engine, both stateful and stateless sessions perform the same, because StatelessKnowledgeSession uses StatefulKnowledgeSession under the hood.
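For reference, a stateless interaction is a one-shot call; a sketch assuming a kbase built elsewhere and a hypothetical Person class:

```java
StatelessKnowledgeSession stateless = kbase.newStatelessKnowledgeSession();
// Inserts the facts, calls fireAllRules(), and disposes the session internally:
stateless.execute(Arrays.asList(new Person("John", 19), new Person("Mary", 17)));
```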

There is no performance difference between stateless and stateful sessions in Drools.

The last function that we need to know is the dispose() method provided by the StatefulKnowledgeSession interface. Disposing of the session will release all the references to our domain objects that are kept, allowing those objects to be collected by the JVM's garbage collector. As soon as we know that we are not going to use a session anymore, we should dispose of it by using dispose().

The power of the rules applied to our processes

Going back to our very simple integration patterns, you may notice that back in the old days, the process engines interacted with the rule engines in a very stateless fashion, leaving out more than 90 percent of the rule engine features.

In jBPM5, the rule engine and the process engine have been designed to work together in a stateful context, which gives us a rich environment to work in. This new design encourages us to build smarter and simpler process diagrams, as well as to leverage the power of the rule engine to identify business situations that require attention.

If we re-examine the first process rule integration pattern (the one that analyzes an order to choose the right path), we will notice that in jBPM5 we are defining the XOR gateway using expressions or rules. This is the most basic usage of rules in our processes; if we choose to write the gateway's conditions using the rule language, a rule will be generated.

This section is intended to show you some of the available alternatives to model and design different behaviors, depending on your business situation. You will notice that there are several ways to do similar things; this is because the engine is extremely flexible, but sometimes this fact confuses people.

Gateway conditions

As we said before, the most basic way of using rules is applying them inside process gateways. Because it is the simplest way, we need to master it and know all the possibilities.

Each outgoing sequence flow can define a condition that must be fulfilled by the available data in the context. An evaluation will be done to select the path for each process instance. As mentioned before, these conditions can be expressed in Java code (using the imperative nature of the language) or in DRL, in which we can leverage its declarative nature in order to analyze more complex conditions.

Java-based conditions

This is the simplest scenario and the most intuitive for non-rules users. Most people who feel comfortable with process engines will probably decide to use Java to express the conditions.

If we now open the test class called GoodOldIntegrationPatterns provided inside the project called jBPM5-Process-Rules-Patterns, the method called javaBasedDecisionTest() shows you an example using Java conditions. Notice that these tests load the process file process-java-decision.bpmn. Feel free to open and debug these tests in order to follow the process execution.

If we open this process in the Process Designer, it looks like the following figure:

It is important for us to notice the following points:

The conditions are not inside the gateway; they are placed on the outgoing sequence flows.

Each condition expression needs to be evaluated and it must return a Boolean value using the return keyword.

We can use the context variable to access the process variables. The knowledge runtime also allows us to access the rule engine context.

Each sequence flow's condition is evaluated in a sequence; in an exclusive gateway (the one used in the example), the process will continue through the first condition that returns true, without evaluating any remaining conditions.

Any Java expression can be included in these conditions.

These expressions are checked at compile time, but the engine evaluates them at runtime for each process instance.
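For instance, a Java condition attached to a sequence flow could read a process variable and return a Boolean. This is a sketch; it assumes a process variable named person and that code constraints expose the process context through the kcontext variable:

```java
// Sequence flow condition (Java dialect): must return a Boolean
return ((Person) kcontext.getVariable("person")).getAge() > 18;
```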

Ninety-nine percent of the time, we choose to use Java expressions inside the gateway's conditions because we only need to evaluate the process information. For a wider analysis, the following section explains how we can use the power of rules.

Rule-based conditions

Let's take a look at the same situation but with a rule-based approach using the DRL language. If we choose to go with this option, at compilation time the engine will create the appropriate rules to perform the evaluations. If you want to, take a look at the example provided in the test class called NewCommonIntegrationPatternsTest—the method called testSimpleRulesDecision() shows how these conditions can be written in the DRL language.

Consider the following DRL snippet:

Person( age < 18 )

This will only propagate the execution through that sequence flow when there is a person that matches that age restriction.

One interesting thing to note here is the fact that the previous example does not look at the process variables; instead, it is matching the objects/facts inside the current knowledge session. In this case, we need to have at least one Person object that matches these conditions in order for the process to continue.

If we want to check for conditions in the process variables using the DRL approach, we need to do something like this:
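One possible sketch of such a condition (the exact pattern depends on the fact type of the inserted process instance; here we assume it exposes a getVariable() method, and the person variable name is hypothetical):

```drl
$pi: WorkflowProcessInstance()
Person( age > 18 ) from $pi.getVariable("person")
```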

In order to make this evaluation, we need to have inserted the process instance as a fact inside the knowledge session. We usually insert the process instance after creating it, in order to be able to make inferences. For example:
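A sketch of that idiom, assuming the jBPM5 session API and a hypothetical process id and parameter map:

```java
Map<String, Object> params = new HashMap<String, Object>();
params.put("person", new Person("John", 21));
// Create the instance first, insert it as a fact, and only then start it
ProcessInstance processInstance =
        ksession.createProcessInstance("com.example.simpleProcess", params);
ksession.insert(processInstance);
ksession.startProcessInstance(processInstance.getId());
```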

You can check the test called testSimpleDecisionWithReactiveRules(), which shows this example in action.

If we don't want to insert the process instance manually into the session, there is a special ProcessEventListener, which comes out-of-the-box to automatically insert the ProcessInstance object before the process is started and also to keep it updated when the process variables change. Look at the README file provided with this article's source code to find more information and tests showing the behavior of the RuleAwareProcessEventListener object.

Notice that when we start using rules and processes together, we usually want to put the engine in what we call reactive mode. This basically means that as soon as a rule gets activated, it will be fired. The engine will not wait for the fireAllRules() invocation.

There are two ways of achieving this mode:

Fire until halt

Agenda & process event listeners

The fire until halt alternative requires another thread to be created, which will be in charge of monitoring the activations and firing them as soon as they are created. In order to put the engine in a reactive mode by using the fireUntilHalt() method, we use the following code snippet:
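The usual pattern is to call fireUntilHalt() from a dedicated thread, for example (assuming ksession is a final reference):

```java
new Thread(new Runnable() {
    public void run() {
        // Blocks this thread, firing activations as soon as they are created
        ksession.fireUntilHalt();
    }
}).start();
```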

The test called testSimpleDecisionWithReactiveRules() inside NewCommonIntegrationPatternsTest shows the fire until halt approach in action. Notice that the Thread.sleep(...) method is also used to wait for the other thread to react.

The only downside of using the fire until halt approach is that we need to create another thread, and this is not always possible. We will also see that when we use the persistence layer for our business processes, this alternative is not recommended. For testing purposes, relying on another thread to fire our rules can add extra complexity and possibly race conditions. That's why the following method, which uses listeners, is usually recommended.

Using the agenda and process event listeners mode allows us to get the internal engine events and execute a set of actions as soon as the events are triggered. In order to set up these listeners, we need to add the following code snippet right after the session creation, so that we don't miss any event:
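A sketch of such a setup, using the default Drools and jBPM5 listener base classes (ksession is assumed to be a final reference):

```java
ksession.addEventListener(new DefaultAgendaEventListener() {
    @Override
    public void activationCreated(ActivationCreatedEvent event) {
        // Fire the rules as soon as an activation is created
        ((StatefulKnowledgeSession) event.getKnowledgeRuntime()).fireAllRules();
    }
});
ksession.addEventListener(new DefaultProcessEventListener() {
    @Override
    public void afterProcessStarted(ProcessStartedEvent event) {
        ksession.fireAllRules();
    }
});
```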

The test testSimpleDecisionWithReactiveRulesUsingListener() inside the NewCommonIntegrationPatternsTest class shows how these listeners can be used.

Notice that we are attaching DefaultAgendaEventListener and DefaultProcessEventListener; both define several methods that will be called when internal events in the engine are generated. The implementations of these listeners will allow us to intercept these events and inject our custom and domain-specific code. In this example, we are overriding the behavior of the activationCreated(...) and afterProcessStarted(...) methods, because we need to fire all the rules as soon as an activation is created or a process has started. If you have the source code of the projects (jBPM5 and Drools), look at the other methods inside DefaultAgendaEventListener and DefaultProcessEventListener to see the other events that can be used as hook points. This approach, using listeners, gives us a more precise and single-threaded approach to work with.

It is important to know that both approaches—fire until halt and agenda and process event listeners—give the same results in the previous tests, but they work in different ways. We need to understand these differences in order to choose wisely.

Do you remember when we used the DRL filter with the following restriction in our gateway?

Person( age < 18 )

We will be filtering all the objects inside the current session. This is an extremely flexible mechanism for writing conditions based not only on the process information but also on the available context. We need to understand what exactly happens under the hood in order to leverage this power.

We need to know how and when to use it. If we learn how to write effective rules in the Drools Rule Language, we will be able to express quite complex conditions inside our business processes.

To show you what kind of things can be done using this mechanism, let's analyze the following example called multi process instance evaluations.

Multi-process instance evaluations

In jBPM5, if we start multiple processes in the same knowledge session, all of them will run in the same context. One of the big advantages of running multiple processes under the same knowledge session umbrella is that we can analyze those instances and make global decisions about them. This feature allows us to move one level up, to the organizational level, so we avoid being focused on just one process execution at a time. Most of the old process engines were focused on offering mechanisms to work with a single process instance.

That limitation pushed process composition (embedding a process inside another to have different layers and a process hierarchy) as the only way to take control of multiple process instances that are related to each other. With the new paradigm of having our process instances as facts inside the rule engine, we can now start writing rules that evaluate multiple process instances to find more complex and specific situations.

Take a look at the following process definition, which chooses between two different paths based on the available resources at execution time.

When we start the first process instance, the XOR diverging gateway will evaluate the following conditions:

For path 1: Resources ( available > 5 )

For path 2: Resources ( available <= 5 )

Each task requires and consumes one unit of the available resources. Here we have defined Resources() for the example, but it can be anything that you need to have in order to perform the task. For instance, if these tasks were automated activities and we were interacting with an external system that charged our company per transaction, these resources would represent the amount of money that the company was spending consuming that service. If these tasks were human activities performed by external contractors, these resources could represent once again the budget that the company can spend in order to perform the work. For both cases, when we are out of resources, we need to stop working.

Let's imagine that we start with 50 resources available. Technically, these resources will be represented by a new instance of the Resources class with the available attribute set to 50. We can easily predict what will happen and how much a complete process execution will cost us. We know that if our process instance chooses path number 1, five tasks will be executed, consuming five units of our resources. Path number 2 will cost us only three units.

For simplicity's sake, in our example we will consider that only one instance of our Resources class can exist inside the session. This instance will contain the number of available resources at all times.

If we start creating instances of this process definition with the restrictions mentioned previously, the first nine instances will choose path number 1, whereas instance number 10 will choose path number 2. After executing instance number 10, we will have only two remaining resources available. When instance number 11 starts, it will choose path number 2 but will fail while executing the second task because of the lack of resources.

You can check this scenario in the test class called MultiProcessEvaluationTest. The test method called testMultiProcessEvaluation() shows this execution behavior. The process file that is being used is called multi-process-decision.bpmn—it looks like the following figure:

Some important points that will help you to understand what is happening are as follows:

The example process uses Script Tasks; you would not use Script Tasks like these in real situations.

Each Script Task executes the code to reduce the available resources by one unit. The example implements a very simple approach; in a real implementation, you will probably have a service to do this. The following code is executed inside each Script Task:
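A sketch of what that Script Task code could look like (kcontext is the process context available inside Script Tasks; the Resources accessors are assumptions):

```java
// Query the session for the single Resources fact
QueryResults results = kcontext.getKnowledgeRuntime().getQueryResults("getResources");
Resources resources = (Resources) results.iterator().next().get("$r");
// Consume one unit of the available resources
resources.setAvailable(resources.getAvailable() - 1);
// Notify the engine using the fact handle created at insertion time
kcontext.getKnowledgeRuntime().update(
        kcontext.getKnowledgeRuntime().getFactHandle(resources), resources);
```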

This code basically executes a query against the knowledge session. You can consider a query to be exactly the same as a rule without the consequence, which allows us to retrieve information from within the session. This snippet calls the getResources query, which can be found inside the resources.drl file. It looks like this:

query getResources()
    $r: Resources()
end

Because we know that there is one and only one Resources() fact, we just get the first result using the iterator().next() method. Once we get the reference to the resource object, we just decrement the available resources. In order for the engine to know about this change, we need to update the fact, and for that we need the fact handle that was created at insertion time. We have two options: keep a registry where all the fact handles for our application are stored, or just use the getFactHandle() method, which accepts the object and returns its fact handle.

Notice that inside the resources.drl file, a rule was included to check when there are no more resources available:

rule "Out of resources"
when
    $r: Resources( available == 0 )
then
    throw new IllegalStateException("No More Resources Available = " + $r);
end

This rule is a little bit harsh, but it clearly states that we cannot execute a task if we are out of resources. The IllegalStateException will break the execution, stopping the current process instance.

In this simple example, we start each process instance only after the previous one has finished.

The next section will build on this example, adding more complexity and showing more advanced decisions.

Rule-based process selection and creation

Our previous example showed us how we can share information between different process instances and make decisions inside each particular instance based on global information.

Now we will see an enhanced version of this process that uses more of the available domain information to choose between the different paths, as well as to save resources, which improves the business performance of the process. First of all, we will add a restriction to the condition declared in the diverging XOR gateway. For each process instance, we will analyze the process variable called Person. If the person associated with the process has a Platinum Plan and there are enough resources available, path number 1 will be chosen. For everyone else, path number 2 will be selected. This business decision allows us to execute more process instances while taking special care of our platinum customers and saving resources for the rest of the plans.
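In jBPM5, the constraint on each outgoing sequence flow of a diverging XOR gateway is a boolean expression over the process context. A sketch of the Path 1 constraint might look like the following; the "Platinum" literal, the getPlan() accessor, and the availability threshold are assumptions, and resource availability is read through the same getResources query used in the Script Tasks:

```java
// Java constraint on the sequence flow toward Path 1 (names are assumptions)
Resources r = (Resources) ((org.drools.runtime.StatefulKnowledgeSession)
        kcontext.getKnowledgeRuntime())
        .getQueryResults("getResources").iterator().next().get("$r");
// The real threshold would depend on how many resources Path 1 consumes
return person.getPlan().equals("Platinum") && r.getAvailable() > 0;
```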

Now, in order to really take advantage of the rule engine, we will write some rules to verify that we will not be able to start a new process if we don't have enough resources to finish it. We have several ways of implementing this, but here I will describe only two options:

Delegate the creation of the process instance to the rule engine

Warn, based on rules, when a process will not be able to finish its execution

Starting with the first option, we can say that an extremely useful feature inside the Drools Rule Engine is that we are able to start processes and influence the process executions from inside the business rules. If our process requires some input data, our rules can be in charge of gathering that data, and as soon as the data is ready, we can trigger the process creation.

In order to start a process from within a business rule, we only need to know the process ID (the name of the process definition) and the input parameters required by the process. Using a very simple rule, we can start a process instance for each Customer() that we have in our session:
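The rule in simple-process-trigger.drl could look something like this; the process ID and the customer variable name are taken from the surrounding text, while the exact parameter map is an assumption (kcontext in a rule consequence gives us access to the session that can start processes):

```java
rule "Start CustomerBasicProcess for each Customer"
when
    $c: Customer()
then
    // Pass the matched customer in as a process variable
    java.util.Map<String, Object> params = new java.util.HashMap<String, Object>();
    params.put("customer", $c);
    kcontext.getKnowledgeRuntime().startProcess("CustomerBasicProcess", params);
end
```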

This rule will be activated for each customer that we insert into our knowledge session and it will create an instance of CustomerBasicProcess, setting the customer as a process variable.

There is a test called testProcessCreationDelegation() inside the MultiProcessEvaluationTest class that demonstrates the previous rule (simple-process-trigger.drl) in action.

Take note that because we are starting a process from a rule consequence, the rule needs to be fired for the process to be created. Because we create the process instance inside the inference cycle, and our process runs from start to end in a single shot, all the activations created by the process execution are queued up and executed as soon as the process ends. This is usually not a problem, because most of the time we will have long-running processes, but you need to be aware of this behavior.

To show this feature in more advanced use cases, let's twist our example. Instead of having a gateway to choose between two paths in our process, we can just split the process into two well-focused, simpler processes:

As you can see, we don't need an XOR gateway anymore; instead, a set of rules defines which process to start. Each process definition contains only the tasks required for its specific scenario. If we start having complicated conditions inside our gateways, we can consider this approach: create a decoupled set of rules and start different process instances. To make this kind of change, you will need to understand the business scenario and evaluate whether you can split it into more than one process definition. You will then have two different process definitions to maintain, so it is a trade-off between simplicity in your models and maintainability. Using these mechanisms, we can build custom processes and use the rule engine to choose exactly when to use each business process, based on the contextual information available.
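Under the assumptions of our running example, the selection rules could be sketched like this; the process IDs, the "Platinum" literal, and the field names are hypothetical:

```java
rule "Platinum customers get the full process"
when
    $p: Person( plan == "Platinum" )
    Resources( available > 0 )
then
    kcontext.getKnowledgeRuntime().startProcess("PlatinumProcess",
        java.util.Collections.<String, Object>singletonMap("person", $p));
end

rule "Everyone else gets the reduced process"
when
    $p: Person( plan != "Platinum" )
then
    kcontext.getKnowledgeRuntime().startProcess("RegularProcess",
        java.util.Collections.<String, Object>singletonMap("person", $p));
end
```

Note that the resource check lives in the rule's conditions rather than in a gateway, so the engine re-evaluates it automatically whenever the Resources fact is updated.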

The test class called RuleBasedProcessSelectionTest contains four tests showing these concepts in action. These tests show a simple scenario where various features of the rule engine are being used to check the resource availability and choose between two different processes.

Another possibility, instead of splitting the process, is to use subprocesses to define only the fragment of the process that is selected at runtime. Using a subprocess also creates a new process definition, but this definition can be reused. Most of the time, using subprocesses is recommended to define several layers of abstraction, where the subprocesses represent more specific tasks that need to be done.

In this case, the rules that start the process instance also need to decide which subprocess ID will be used and send it as a parameter. At runtime, this parameter is evaluated and the correct subprocess is instantiated.
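In that variant, a rule might pass the subprocess ID as just another process parameter; all the names here are hypothetical, and the parent process is assumed to resolve the subprocessId variable in its reusable subprocess node:

```java
rule "Choose the platinum subprocess"
when
    $p: Person( plan == "Platinum" )
then
    java.util.Map<String, Object> params = new java.util.HashMap<String, Object>();
    params.put("person", $p);
    params.put("subprocessId", "PlatinumTasks"); // consumed by the subprocess node
    kcontext.getKnowledgeRuntime().startProcess("MainProcess", params);
end
```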

Note that in both cases (when we split the process versus when we use subprocesses), the resource allocation can be done more effectively, because we know how much each set of tasks will cost.

Summary

In this article, we have covered important patterns that we need to know in order to leverage the power of the rule engine in improving our business processes. It is important for you to remember that there are usually several ways to model a problem and you will be responsible for the flexibility of the solution. You can use the patterns mentioned in this article in order to decide which is the best option.
