Drools Integration Modules: Spring Framework and Apache Camel

Packt Publishing

Are you a Drools developer seeking self-improvement? If so, this cookbook could quickly enhance and broaden your skills with a host of easy-to-follow recipes on the advanced implementation of this flexible business rules engine.

Implement the java.io.Serializable interface in the domain model classes that will be persisted.

Create a persistence.xml file inside the resources/META-INF folder to configure the persistence unit. In this recipe, we will use an embedded H2 database for testing purposes, but you can configure it for any relational database engine.

Finally, we have to write the code, in a new or an existing Java class, to interact with the stateful knowledge session; the session state will be persisted into the H2 database without any further action.

How it works...

In order to use the Spring Framework integration in your project, you first have to add the drools-spring module to it. In a Maven project, you can do this by adding the following code snippet to your pom.xml file:
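The dependency declaration looks like the following sketch; the version shown is an assumption, so use the one that matches your Drools distribution:

```xml
<dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-spring</artifactId>
    <version>5.2.0.Final</version>
</dependency>
```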

This dependency will transitively include the required Spring Framework libraries in the Maven dependencies. Currently, the integration is built against version 2.5.6 of the Spring Framework, but it should work with newer versions as well.

Now, we are going to skip the rule authoring step, because it's a very common task that you should already know how to do at this point, and move forward to the bean configuration.

As you know, the Spring Framework configuration is done through an XML file in which the beans are defined and wired together. To make the Drools declarations easier, the integration module provides a schema and custom parsers. Before starting the bean configuration, the schema must be added to the XML namespace declarations; otherwise, the Spring XML bean definition reader will not recognize the Drools tags and exceptions will be thrown. In the following code lines, you can see the namespace declarations that are needed before you start writing the bean definitions:
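A sketch of such a configuration file follows, combining the namespace declarations with the grid node, knowledge base, and knowledge session beans described next. The schema location is the one used by the 5.x drools-spring module, and the DRL file name is an assumption:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:drools="http://drools.org/schema/drools-spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://drools.org/schema/drools-spring
                           http://drools.org/schema/drools-spring.xsd">

    <!-- Local grid node used to register the knowledge sessions -->
    <drools:grid-node id="node1"/>

    <!-- Knowledge base built from a DRL resource (file name assumed) -->
    <drools:kbase id="kbase1" node="node1">
        <drools:resources>
            <drools:resource type="DRL" source="classpath:rules.drl"/>
        </drools:resources>
    </drools:kbase>

    <!-- Stateful knowledge session injected with the kbase and node -->
    <drools:ksession id="ksession1" type="stateful"
                     kbase="kbase1" node="node1"/>

</beans>
```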

As you can see, there is only one stateful knowledge session bean configured, with a ksession1 ID. This ksession1 bean is injected with a knowledge base and a grid node so that the Drools Spring bean factories, which are provided by the integration module, can instantiate it.

Once the drools beans are configured, it's time to instantiate them using the Spring Framework API, as you usually do:
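A minimal sketch of that instantiation follows; the bean definition file name is an assumption:

```java
import org.drools.runtime.StatefulKnowledgeSession;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class SpringIntegration {

    public static void main(String[] args) {
        // Load the bean definitions (file name is an assumption)
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        // The Drools bean factories have already instantiated the session
        StatefulKnowledgeSession ksession =
                (StatefulKnowledgeSession) context.getBean("ksession1");

        // Interact with the session as usual
        ksession.fireAllRules();
        ksession.dispose();
    }
}
```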

In the Java main method, a ClassPathXmlApplicationContext object instance is used to load the bean definitions; once they are successfully instantiated, they are available to be obtained using the getBean(beanId) method. At this point, the Drools beans are instantiated, and you can start interacting with them as usual just by obtaining their references.

As you saw in this recipe, the Spring framework integration provided by Drools is pretty straightforward and allows the creation of a complete integration, thanks to its custom tags and simple configuration.

See also

Configuring JPA to persist our knowledge with Spring Framework

How to do it...

Carry out the following steps in order to configure the Drools JPA persistence using the Spring module integration:

How it works...

Before we start declaring the beans that are needed to persist the knowledge using JPA, we have to add some dependencies into our project configuration, especially the ones used by the Spring Framework. These dependencies were already described in the first step of the previous section, so we can safely continue with the remaining steps.

Once the dependencies are added into the project, we have to implement the java.io.Serializable interface in the classes of our domain model that will be persisted.
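For example, a hypothetical Server fact from the domain model would simply declare the interface (the class name and fields are assumptions for illustration; the class is left package-private so the example is self-contained in one file):

```java
import java.io.Serializable;

// Hypothetical domain class; only the Serializable declaration matters here
class Server implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String name;
    private final int processors;

    public Server(String name, int processors) {
        this.name = name;
        this.processors = processors;
    }

    public String getName() { return name; }

    public int getProcessors() { return processors; }
}
```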

After this, we have to create a persistence unit configuration using the default persistence.xml file, located in the resources/META-INF directory of our project. This persistence unit is named drools.cookbook.spring.jpa, uses the Hibernate JPA implementation, and is configured to use an embedded H2 database; in a real environment, you should supply the appropriate configuration. Next, you will see the persistence unit example, declaring the SessionInfo entity provided by Drools that will be used to store the session data:
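A sketch of such a persistence.xml follows. The unit name and the SessionInfo entity are taken from this recipe; the JDBC URL and Hibernate property values are assumptions to adjust to your environment:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0"
             xmlns="http://java.sun.com/xml/ns/persistence">

    <persistence-unit name="drools.cookbook.spring.jpa"
                      transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>

        <!-- Entity provided by Drools to store the session state -->
        <class>org.drools.persistence.info.SessionInfo</class>

        <properties>
            <property name="hibernate.dialect"
                      value="org.hibernate.dialect.H2Dialect"/>
            <property name="hibernate.connection.driver_class"
                      value="org.h2.Driver"/>
            <property name="hibernate.connection.url"
                      value="jdbc:h2:mem:drools"/>
            <property name="hibernate.connection.username" value="sa"/>
            <property name="hibernate.hbm2ddl.auto" value="update"/>
        </properties>
    </persistence-unit>

</persistence>
```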

Now, we are ready to declare the beans needed to enable the JPA persistence in an XML file. The most important section is the declaration of the Spring DriverManagerDataSource and LocalContainerEntityManagerFactoryBean beans, which are very descriptive and can be configured with the parameters of your database engine. Another key declaration is the KnowledgeStoreService bean, which will be used primarily to load the persisted knowledge session:
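A sketch of these bean definitions follows, assuming the persistence unit and H2 settings from the previous step; the exact drools-spring JPA tags may vary slightly between 5.x releases, so verify them against your release's schema:

```xml
<bean id="dataSource"
      class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="org.h2.Driver"/>
    <property name="url" value="jdbc:h2:mem:drools"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
</bean>

<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="persistenceUnitName" value="drools.cookbook.spring.jpa"/>
</bean>

<bean id="transactionManager"
      class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

<!-- KnowledgeStoreService bean used later to load persisted sessions -->
<drools:kstore id="kstore1"/>

<!-- JPA-backed stateful knowledge session -->
<drools:ksession id="ksession1" type="stateful"
                 kbase="kbase1" node="node1">
    <drools:configuration>
        <drools:jpa-persistence>
            <drools:transaction-manager ref="transactionManager"/>
            <drools:entity-manager-factory ref="entityManagerFactory"/>
        </drools:jpa-persistence>
    </drools:configuration>
</drools:ksession>
```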

After the bean definitions, we can start writing the Java code needed to initialize the Spring Framework application context and interact with the defined beans. After loading the application context by using a ClassPathXmlApplicationContext object, we have to obtain the stateful knowledge session to insert the facts into the working memory, and also obtain the ID of the knowledge session to recover it later:
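The following fragment sketches that interaction; the bean definition file name and the Server fact are assumptions:

```java
// Load the bean definitions (file name is an assumption)
ClassPathXmlApplicationContext context =
        new ClassPathXmlApplicationContext("applicationContext.xml");

// The session obtained here is JPA-backed: each insert and
// fireAllRules call is persisted to the H2 database automatically
StatefulKnowledgeSession ksession =
        (StatefulKnowledgeSession) context.getBean("ksession1");

ksession.insert(new Server("debian-server", 4));  // hypothetical fact
ksession.fireAllRules();

// Keep the session ID so the session can be restored later
int sessionId = ksession.getId();
```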

Once we are done interacting with the knowledge session (inserting facts, firing the rules, and so on), it can be disposed. It can be restored later using the KnowledgeStoreService bean, but before trying to load the persisted knowledge session, we have to create a new org.drools.runtime.Environment object to set the EntityManagerFactory and TransactionManager used in the persistence process. The org.drools.runtime.Environment object can be created as follows:
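A sketch of that creation, assuming the Spring application context from the previous snippet (Environment and EnvironmentName live in org.drools.runtime, KnowledgeBaseFactory in org.drools):

```java
Environment env = KnowledgeBaseFactory.newEnvironment();

// Hand the persistence infrastructure beans over to Drools
env.set(EnvironmentName.ENTITY_MANAGER_FACTORY,
        context.getBean("entityManagerFactory"));
env.set(EnvironmentName.TRANSACTION_MANAGER,
        context.getBean("transactionManager"));
```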

Finally, with the Environment object created, we can obtain the KnowledgeStoreService bean together with the knowledge base bean, and use the StatefulKnowledgeSession ID to load the stored state and start interacting with it as we usually do:
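The restore step can be sketched as follows, reusing the sessionId and env variables from the previous snippets (the null argument stands for a default KnowledgeSessionConfiguration):

```java
KnowledgeStoreService kstore =
        (KnowledgeStoreService) context.getBean("kstore1");
KnowledgeBase kbase = (KnowledgeBase) context.getBean("kbase1");

// Restore the persisted session using the ID obtained earlier
StatefulKnowledgeSession restored =
        kstore.loadStatefulKnowledgeSession(sessionId, kbase, null, env);

// Continue working with the restored state as usual
restored.fireAllRules();
```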

As you saw in this recipe, the knowledge session persistence is completely transparent and automatic; no extra steps are needed to save the state. By following these steps, you can easily integrate JPA persistence using Hibernate, or any other vendor's JPA implementation, to save the current state of the knowledge session using the Spring Framework integration.

Integrating Apache Camel in your project

This recipe will explain how to programmatically integrate Drools with the Apache Camel framework to define execution routes that consume and execute Drools commands. The advantage of this integration is that Apache Camel makes it possible to implement more advanced enterprise integration patterns. As you may already know from the previous recipes, the interaction with knowledge sessions is done by sending command objects, or their XML representation, to the defined routes; this recipe covers the details.

How to do it...

In the following steps, you will see how easily the Apache Camel Framework can be integrated with JBoss Drools:

In order to use the Apache Camel integration, you have to add the drools-camel module to your project. If your project is managed using Apache Maven, as is recommended, then you can add the required dependencies by including the following code snippet in the pom.xml file:
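The dependency declaration would look like the following; the version shown is an assumption, so use the one matching your Drools distribution:

```xml
<dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-camel</artifactId>
    <version>5.2.0.Final</version>
</dependency>
```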

As usual, you need to create a DRL file with rules and create the knowledge session. However, we are going to use the same rules created in the first recipe (Setting up Drools using Spring Framework), so we can continue moving forward.

After this, you are ready to start implementing the integration, firstly by creating a stateful knowledge session with the previously defined rules. Create a CamelIntegration class and get ready to write the code, as follows:
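The session creation can be sketched with the standard Drools 5.x builder API; the DRL file name is an assumption:

```java
// Build the knowledge base from the DRL created in the first recipe
// (file name is an assumption)
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"),
             ResourceType.DRL);

if (kbuilder.hasErrors()) {
    throw new IllegalStateException(kbuilder.getErrors().toString());
}

KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

// Stateful session that will be registered in the grid node later
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
```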

How it works...

The Apache Camel integration allows us to interact with a Drools stateless or stateful knowledge session through a pipeline, transforming the XML commands into executable commands and executing each of them. It is the evolution of the old drools-pipeline module, which is no longer available. The advantage of this integration is that now it is possible to implement most of the available Enterprise Integration Patterns to solve a specific design problem with an elegant solution.

With this integration, you can also use any of the available Apache Camel Components in the endpoints declaration to create declarative services. You can find these Camel Components at http://camel.apache.org/component.html. For example, you can consume messages from a JMS Queue/Topic, send them to the Drools Component to execute them, and then send the execution results to another system using Apache MINA. As you can see, this brings a more powerful interoperability mechanism to integrate Drools with other systems. After this introduction, we can go forward through the recipe steps.

First, you have to add the drools-camel library to your project. We recommend the use of Apache Maven to manage the projects. If you are following this advice then you can modify the pom.xml file and add the following dependency in the dependencies declaration section:

This dependency will include version 2.4.0 of Apache Camel, among other dependencies, which is the version against which the integration was developed and tested.

At this point, we can skip the rules authoring and the knowledge session creation steps, and move on to the most important ones.

The integration is coupled with another Drools module, called drools-grid, which allows interaction with Drools knowledge sessions independently of the JVM location. In this case, it is primarily used to execute the commands locally. This module is a transitive dependency of the drools-camel module, so you don't have to worry about adding it.

At this point, you have to create an org.drools.grid.impl.GridImpl object instance and add to it a WhitePages service, which is a directory used to register all the available services. Using this GridImpl object, you have to create a GridNode that will have the responsibility of finding the registered knowledge sessions and executing the commands on them; the knowledge session must previously have been registered in the GridNode. The only remaining step is the creation of a JndiContext object, which will be used later, and the binding of the GridNode on it:
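The following fragment sketches these steps; the grid API changed between Drools 5.x releases, so treat the exact service implementation class and method names as approximations to check against your release:

```java
// Create a local grid with a WhitePages directory service
GridImpl grid = new GridImpl(new HashMap<String, Object>());
grid.addService(WhitePages.class, new WhitePagesImpl());

// Create the node and register the knowledge session in it
GridNode node = grid.createGridNode("node1");
node.set("ksession1", ksession);

// Bind the node into a JNDI context so Camel can look it up
Context jndiContext = new JndiContext();
jndiContext.bind("node1", node);
```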

Now, we can create a CamelContext object using the previously created JndiContext. As we are programmatically configuring Apache Camel, a DefaultCamelContext is going to be used, but if you wish to use Spring Framework or OSGi, then there are appropriate CamelContext implementations provided by Camel for these.

Now, this is the part where you can define the routes, which is one of the most powerful features of this integration, thanks to the very large library of components provided by Apache Camel for building pipelines. The routes are created using a RouteBuilder object, adding the route definition with the Camel Java DSL from() and to() definitions. Once the routes are defined, they must be added to the CamelContext object instance, and the CamelContext must be started; otherwise, the routes won't be available. The following code snippet shows how to declare a simple route using a RouteBuilder object and add it to the CamelContext before starting it:
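A sketch of such a route, reusing the jndiContext created earlier (the endpoint names node1 and ksession1 match the previous snippet; the direct endpoint name is an assumption):

```java
RouteBuilder routeBuilder = new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        // Consume from a direct endpoint and execute the commands
        // on the knowledge session registered in the grid node
        from("direct:test-session").to("drools://node1/ksession1");
    }
};

// DefaultCamelContext backed by the JNDI registry holding the GridNode
CamelContext camelContext = new DefaultCamelContext(jndiContext);
camelContext.addRoutes(routeBuilder);
camelContext.start();
```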

In the previous route definition, we were consuming messages from a direct endpoint and sending them to a Drools endpoint, which has the following syntax:

drools://{1}/{2}

where the parameters are as follows:

{1}: The identifier of the GridNode that was bound in the registry used by the CamelContext

{2}: The identifier of the knowledge session registered in the GridNode with identifier {1}

The knowledge session identifier is optional if it is supplied in the BatchExecutionCommand message. When this identifier is not configured, the GridNode will obtain the knowledge session using its internal directory.

This route is very simple, but routes can be made more complex by adding Enterprise Integration Patterns (EIP), such as Message Filter or Content Based Router, which can be implemented using filter() and choice() predicates.

The interaction with the CamelContext object is done by obtaining a ProducerTemplate object instance from it and sending the BatchExecutionCommand to the input endpoint using the requestBody() method, as shown in the following code:
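The following fragment sketches the interaction; the Server fact and the "server" out-identifier are assumptions for illustration:

```java
// Build a batch of commands targeting the registered session
List<Command> commands = new ArrayList<Command>();
commands.add(CommandFactory.newInsert(new Server("debian-server", 4),
                                      "server"));
commands.add(CommandFactory.newFireAllRules());

BatchExecutionCommand batch =
        CommandFactory.newBatchExecution(commands, "ksession1");

// Send the command batch into the route and collect the results
ProducerTemplate template = camelContext.createProducerTemplate();
ExecutionResults results =
        (ExecutionResults) template.requestBody("direct:test-session", batch);
```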

The BatchExecutionCommand instance will be delivered from the direct endpoint to the Drools endpoint by Apache Camel, where the registered GridNode will unmarshal/marshal the message if necessary, execute it, and return the results in an ExecutionResults object. With this object, you can access the different types of returned results, which depend on the commands sent, as you can see in the following code snippet:
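Continuing the previous sketch, the result is looked up by the out-identifier used in the insert command:

```java
// "server" is the out-identifier set on the insert command above
Server server = (Server) results.getValue("server");
System.out.println(server.getName());
```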

Summary

In this article, in the first recipe, we saw that the Spring Framework integration provided by Drools is pretty straightforward and simple, and allows the creation of a complete integration with other modules. In the second recipe, we saw how easily we can integrate JPA persistence using Hibernate, or any other JPA implementation, in order to save the current state of the knowledge session using the Spring Framework integration. In the last recipe, we saw how to create a complete integration of Drools with Apache Camel, covering its most used features.
