Earlier this year Zepheira, LLC needed an object-oriented rule engine that would enable different participants to manage continually changing rules. After developing a solution with Drools, we found that it placed too many restrictions on the accessible data and that the rules were becoming too complicated to be effective with multiple participants, each with different interests. Together we combined the modelling language OWL (Web Ontology Language) with the object-oriented language Groovy to create a rule engine that was effective over large datasets and included many of the features of OOP, including the ability to override selected rules for particular sub-domains within the model.

We developed the model using OWL and extended it by adding the ability to define message types (LHS rule declarations) and method bodies (RHS rule execution). By separating the two and associating every rule with a concept class, we were able to apply OOP features to the rules. The syntax we used can now be seen here: http://www.openrdf.org/doc/elmo/1.4/user-guide/x457.html#AEN466

By developing Groovy domain logic in an RDF database, we gave many stakeholders access to review and alter the rules to facilitate their own what-if scenarios. It also made it possible to manage much more complex situations by utilizing the strong vocabulary of OWL to describe concepts that could not easily be represented in a simple object model.

We achieved the flexibility we needed and reduced the complexity of the rules by using an object-oriented rule engine. The rule engine has since been licensed under the BSD licence as part of the OpenRDF Elmo project at http://www.openrdf.org/

About Zepheira

Zepheira is a US-based professional services firm with expertise in semantic technologies and Enterprise Data Integration. For more information, visit: www.zepheira.com.

Monday, December 22, 2008

A rule engine should be used when a domain model has many rules that may change over time and need to be closely managed by domain experts.

By using a rule engine, rules become much more tangible. They can be browsed, searched, reviewed, and altered in a way that facilitates their management without the complications and restrictions of a full-blown IDE.

Rule engines are often avoided by DDD-ers (Domain Driven Designers) and OOP-ers (Object Oriented Programmers) because they pull domain logic out of the model and into a rule structure, disregarding many of the core features of OOP. (Separating the rules prevents classes from encapsulating their own behaviour, and doesn't allow inheritance or any form of overriding superclass behaviour.)

Drools, a popular Java rule engine, uses an OOP language (Java-like) to write the RHS (rule execution) and, to some degree, the LHS (rule condition). However, the rules themselves are still very much detached from the model. A rule is more like an /if/ condition with a /then/ procedure. If the code does not belong to an extensible class hierarchy, it is anything but OO.
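The contrast can be sketched in plain Java (the Order and discount names here are hypothetical, not from any real rule base): a detached rule is just an if-condition plus a then-procedure operating on the model from the outside, while the OO alternative puts the behaviour on the class itself, where a sub-domain can override it.

```java
// A detached "rule": an if-condition (LHS) plus a then-procedure (RHS).
// It operates on Order from the outside and cannot be inherited or
// overridden for a particular subtype of Order.
class DiscountRule {
    void apply(Order order) {
        if (order.getTotal() > 100) {   // LHS: condition
            order.setDiscount(0.10);    // RHS: action
        }
    }
}

// The OO alternative: the behaviour is encapsulated in the class...
class Order {
    private final double total;
    private double discount;

    Order(double total) { this.total = total; }
    double getTotal() { return total; }
    double getDiscount() { return discount; }
    void setDiscount(double discount) { this.discount = discount; }

    void applyDiscount() {
        if (total > 100) discount = 0.10;
    }
}

// ...so a sub-domain can override the general rule.
class WholesaleOrder extends Order {
    WholesaleOrder(double total) { super(total); }

    @Override
    void applyDiscount() {
        setDiscount(0.15);  // overrides the behaviour for this sub-domain
    }
}
```

With the external rule, every variation needs another rule and more conditions; with the encapsulated version, the variation is a one-line override.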

Tools could be written to index an OO model, written in a language like Java, so that it could be browsed and searched - many IDEs already do this to a degree. However, the challenge is that the Java language (like many other languages) does not distinguish between the concepts (the classes and properties) and the behaviours (the rules). In Java, it is particularly difficult to distinguish between a property that retrieves and stores values and a method that applies logic to the state of the model.
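A small, hypothetical Java sketch shows the ambiguity: both members below follow the getter naming convention and have identical shapes, but only one is a plain property accessor; the other is a behaviour rule in disguise.

```java
class Invoice {
    private final double subtotal;
    private final double taxRate;

    Invoice(double subtotal, double taxRate) {
        this.subtotal = subtotal;
        this.taxRate = taxRate;
    }

    // A property: simply retrieves stored state.
    double getSubtotal() { return subtotal; }

    // A rule: applies logic to the state of the model, yet is
    // syntactically indistinguishable from a property accessor.
    double getTotal() { return subtotal * (1 + taxRate); }
}
```

Nothing in the language, or in an index built from it, marks `getTotal` as a rule that a domain expert might want to review and change.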

To combine the features of OOP with the advantages of a rule engine, the domain model needs to be stored in a more formal structure to facilitate its management. It needs to be in a structure that clearly distinguishes the concepts and their properties from the behaviour rules, while providing OOP features to both. Such a structure would need to be indexable, to allow it to be browsed and searched, and straightforward enough to allow alteration of individual rules without the risk of data loss.

Thursday, December 18, 2008

The repository pattern is too often underused when developing a domain model. I like to put a spin on the Domain Driven Design repository pattern and apply it to all collections within the domain.

Whenever the model needs a collection of entities, I use a repository. It is basically a glorified collection, controlling access to the underlying entities in a type-specific way. The main advantage of using a souped-up collection is that, as needs change, the implementation can be changed while maintaining the same interface. This abstraction also co-locates similar queries and query-building logic into a single class structure, minimizing duplication.

Writing access and bulk-update queries involves a significant investment for new models. The abstraction of a repository interface allows you to focus on the business logic early on, while working with small datasets and in-memory collections. As the model's interfaces begin to stabilize, more focus can be put into optimizing entity access.
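As a sketch (the Customer names below are hypothetical), the repository interface can be backed by an in-memory list while the model is still in flux, and later swapped for a database-backed implementation without changing any callers:

```java
import java.util.ArrayList;
import java.util.List;

class Customer {
    final String name;
    final String city;
    Customer(String name, String city) {
        this.name = name;
        this.city = city;
    }
}

// A type-specific repository: a glorified collection whose interface
// hides how the entities are actually stored and queried.
interface CustomerRepository {
    void add(Customer customer);
    List<Customer> findByCity(String city);
}

// In-memory implementation used early on, while working with small
// datasets; a database-backed implementation can replace it later
// behind the same interface.
class InMemoryCustomerRepository implements CustomerRepository {
    private final List<Customer> customers = new ArrayList<>();

    public void add(Customer customer) {
        customers.add(customer);
    }

    public List<Customer> findByCity(String city) {
        List<Customer> result = new ArrayList<>();
        for (Customer c : customers) {
            if (c.city.equals(city)) result.add(c);
        }
        return result;
    }
}
```

Note that the query-building logic (`findByCity`) lives in one place; when it moves to SQL or another store, callers are unaffected.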

For complex models that require uniquely optimized data access and updates, the repository pattern allows integrated query-building logic to be separated and shared within its own class structure.

An anti-pattern to be aware of when using the repository pattern is combining aggregates and repositories. A repository should not have any properties, only a collection of entities. Violating this significantly complicates the implementation and leads to role confusion, exacerbating the problem.

Friday, December 5, 2008

Silverlight, Flex, and JavaFX are all trying to capture a new market of rich Internet applications that promise to be the best of both desktop applications and web applications. However, none of them are accessible to agents. Like desktop applications, they can only be used for the purpose they were designed for and cannot be used for indexing, harvesting, or mashups. This is not surprising, as none of these vendors' businesses depend on open interoperability. The only big web players that do depend on open interoperability are Google and Yahoo!, which is why Google funds Mozilla and develops the Chrome browser.

The web's biggest success is its inter-connectivity and the vast amount of information available in the same format (HTML). It will be interesting to see whether these new RIA platforms have any effect on the trend of moving applications to the web.

Monday, December 1, 2008

When Spring MVC (Model View Controller) was first released in 2003, it helped clarify the roles and bring a clearer separation between the model, the view, and the controller to Java web development. Complicated HTML pages became easier to maintain and changes were straightforward to implement. However, web applications have changed significantly since then.

With the recent explosion of new JavaScript libraries, the separation between model, view, and controller has once again become blurred. Many HTML pages today are loaded in stages (a la AJAX), one entity at a time. Traditional Spring MVC seems overly complicated for small single-entity results.

Creating rich AJAX web applications can still follow the MVC design pattern, although it might require stepping out of the Java/JavaScript comfort zone.

Consider a typical asynchronous request:

1) User activity triggers an HTTP request.
2) The server processes the request and may invoke changes to the model and/or return part of the model (an entity) to the client.
3) The script that sent the request manipulates the response and displays it for the user.

In the above we can still see the MVC pattern: the model is the server's response, the view is the manipulation of the response for display, and the controller is the server. However, this differs from the original Spring MVC of 2003 - the view has moved to the client and the model is (in part) serialized in an HTTP message body.

Part of the confusion with AJAX development is around the role of the "view". It gets blurred between the serialization of an entity model and how JavaScript displays it. Inconsistencies in this area can cause many maintainability issues as the interaction between the client and server becomes confusing.

By viewing dynamic HTTP responses as serializations of entities from a data source (the model), and leaving the "view" to the client, clarity and maintainability can be achieved. The only standard display technology that works equally well for large entities and entity fragments is XSLT/HTML. Today's modern browsers all support XSLT 1.0 transformations using JavaScript. By using XML for model interchange and XSLT/HTML for view display, JavaScript usage can be limited to what it does best: filling in missing functionality of the browser.
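The XML-model/XSLT-view split is the same wherever the transform runs. As an illustration (the element names below are made up), the JDK's built-in javax.xml.transform API applies a stylesheet to an entity serialization in the same way a browser would via JavaScript:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class EntityView {
    // An entity serialized as XML: the "model" in an HTTP message body.
    static final String ENTITY =
        "<customer><name>Alice</name></customer>";

    // An XSLT 1.0 stylesheet that renders the entity as an HTML
    // fragment: the "view".
    static final String STYLESHEET =
        "<xsl:stylesheet version='1.0'"
      + " xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='html'/>"
      + "<xsl:template match='/customer'>"
      + "<p><xsl:value-of select='name'/></p>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    // Applies the view (stylesheet) to the model (entity XML).
    public static String render() {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(STYLESHEET)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(ENTITY)),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The same stylesheet can be served to the browser and applied there with JavaScript, keeping a single view definition for both whole pages and entity fragments.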

By limiting the role of JavaScript, its reusability is maximized. In the future the amount of JavaScript required by web applications should be significantly reduced. In addition, projects like webforms2 (at google code) promise to bring tomorrow's HTML5 and Web forms 2.0 to today's browsers.

XSLT brings more flexibility and reusability to the view layer (vs JSP-like technologies). By using client-side JavaScript/XSLT with other AJAX technologies, modern web applications can achieve the richness of desktop applications while still following the best practices of the MVC design pattern.