Using the Rules Pattern to Improve Code Maintainability

Picture this scenario: you have been tasked with writing some code that creates a sales order in the system. You choose to use a standard BAPI provided by SAP. Nice, clean code that is running in production.

Then, a few weeks later, an application consultant informs you that the business needs have changed, but only for one customer that uses the solution: they need a check first that the material ordered is in stock in a quantity suitable for a single delivery; don't create the order if that is not possible.

So you modify your original solution and add a bit of complexity. Not much, just one ‘IF’ branch and some handling logic for that one customer only. No problem.

The next week a new requirement appears in your email: if the delivering plant is more than 500 km from the Ship-to party and the order is of a certain type, do not create the order. More checking code goes into your original, elegant solution.

Two weeks later, another check requirement to implement; a month later, one more. By now, your code is beginning to 'smell' a bit. Not your fault: how could you have known that these little checks and their implementations would pile complexity onto your once clean code, whose single responsibility was originally to create sales orders from the data provided?

One year down the line, the code has gone from somewhat inelegant to virtually unmaintainable. The checks have formed a morass of spaghetti code from which it is hard to work out what all of the business requirements were. Any additional logic may introduce unintended side effects that slip through regression testing, because there are now so many paths through the code that testing becomes a nightmare. Consultants have moved on, developers have moved on, and understanding the complexity of the code is a tall order indeed.

Not only that, but every time the code is modified, the source code has to be opened up, edited and transported, adding the risk that once-tested code is inadvertently changed in an unintended manner, with the subsequent post-mortem to discover what actually went wrong after the change has been released to the productive environment.

Sound familiar?

The Root Cause

What is the root cause of all of this mayhem? In a word, unpredictability. It was impossible to gather all of the requirements at design time, and if working according to agile methodologies this would not even be attempted. Requirements evolve over time, new customers and business conditions come onboard, so even if the requirements analysis had been completely comprehensive at the beginning of the design process, new requirements would invariably crop up.

In any case, why leave it to the original code to do all of the checks? After all, it was designed to do one thing: create sales orders. Making it do anything else violates the first principle of SOLID design, the Single Responsibility Principle, which states that:

“There should never be more than one reason for a class to change.”

What if there was a way to separate all of these checks out into smaller, independent units of code that could be swapped in and out of use, without recourse to opening up the original code and recompiling it? What if those checks could be re-used, eliminating repeated code? What if some customers needed checks that others did not?

What if the code could be protected from the ravages of code rot from the outset?

Enter the Rules Pattern

I am indebted to Steve Smith who is the originator of this design pattern.

[Figure: the logic for executing these additional tasks gets intertwined with the original code.]

My implementation of the rules pattern goes a little further than the original pattern, so I’ll break it down into sections, then see how they all work together at the end. At the core of all of this is a single component conceptually identified as a Rule.

The Rule

A rule is a fundamental, atomic check for a specific condition or state of a system. In plain English, it’s a check that a certain condition can be fulfilled.

Rules typically need to be set up, i.e. be provided with information about the state of the system, and executed, to evaluate the outcome of the rule.

The basic architecture is shown below.

Let’s have a look at the salient features of the diagram above. We have:

Interface ZIF_RULE

The contract definition for the rules. Contains two methods: PREPARE() and EXECUTE().

PREPARE(): Prepare the rule. Pass in any data that the EXECUTE method will need.

The cast is there because I am passing in an object whose known concrete type needs to be explicitly stated in this example. The important point is that this statement merely sets a sales order document number and a sales order item number, nothing more.

EXECUTE(): Execute the rule. Note the parameters that the method returns:

EX_STATE returns one of three levels: PASS, FAIL or WARN, referring to the state of the rule after evaluation.

EX_ABORT is an indicator that, whatever the other rules return, no further rule checks should be performed; it's a kind of system-level 'all bets are off!' flag.

There is a reason for this: in my implementation, if a rule fails, I may wish to carry on processing but skip an operation (say, creating a sales order, because one already exists); on the other hand, if the method returns EX_ABORT = 'TRUE', it means give up on processing entirely, because whatever happens the process will not succeed (say, a vendor batch number is passed in that does not match any batch in the system). I could have added another level to EX_STATE, but I find separating the signals out this way clearer to understand in the program.
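Pulling the two methods together, a minimal sketch of the contract might look like the following (the original code is not reproduced here; the context parameter and the CHAR4 typing of the state are my illustrative assumptions):

```abap
INTERFACE zif_rule PUBLIC.

  "! Prepare the rule: pass in whatever system state EXECUTE will need.
  "! IO_CONTEXT is an illustrative, generic carrier object.
  METHODS prepare
    IMPORTING
      io_context TYPE REF TO object OPTIONAL.

  "! Evaluate the rule.
  METHODS execute
    EXPORTING
      ex_state TYPE char4      " 'PASS', 'FAIL' or 'WARN'
      ex_abort TYPE abap_bool. " 'X' = stop all further processing

ENDINTERFACE.
```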

Obeying the SOLID Single Responsibility Principle, it should be clear that this rule is doing one thing and one thing only – checking that a sales order exists in the database.

This code looks almost ridiculously trivial, and that's exactly how it should be: easy to understand. Imagine 20 rules like this all incorporated into the original source code to create a sales order, along with the conditions to determine whether a particular rule should be executed for a particular customer; then the reason for organising the code in this manner perhaps starts to make sense.

Abstract Class ZCL_RULE

Although not strictly needed, since any class can implement the interface ZIF_RULE directly, the abstract class ZCL_RULE can be used to implement the interface, with all other rules inheriting from it; this leads to a neat way to implement new rules without the need to recompile existing code, as I will show in a later blog.
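A sketch of such an abstract base class, under the assumption that it provides safe default implementations for subclasses to redefine:

```abap
CLASS zcl_rule DEFINITION PUBLIC ABSTRACT.
  PUBLIC SECTION.
    INTERFACES zif_rule.
ENDCLASS.

CLASS zcl_rule IMPLEMENTATION.

  METHOD zif_rule~prepare.
    " Default: nothing to prepare; concrete rules redefine as needed
  ENDMETHOD.

  METHOD zif_rule~execute.
    " Concrete rules redefine this; fail safely if they forget
    ex_state = 'FAIL'.
    ex_abort = abap_false.
  ENDMETHOD.

ENDCLASS.
```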

Concrete Classes ZCL_RULE_1…ZCL_RULE_n

These are the classes that implement a particular rule.
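As an illustration (the original screenshots are not reproduced here), the sales-order-existence rule discussed above might be sketched like this; ZCL_ORDER_CONTEXT and its GET_VBELN method are assumed helper names:

```abap
CLASS zcl_rule_1 DEFINITION PUBLIC INHERITING FROM zcl_rule.
  PUBLIC SECTION.
    METHODS zif_rule~prepare REDEFINITION.
    METHODS zif_rule~execute REDEFINITION.
  PRIVATE SECTION.
    DATA mv_vbeln TYPE vbeln_va.  " sales order number to check
ENDCLASS.

CLASS zcl_rule_1 IMPLEMENTATION.

  METHOD zif_rule~prepare.
    " Cast the generic context to its known concrete type and keep
    " only what this rule needs (illustrative names)
    mv_vbeln = CAST zcl_order_context( io_context )->get_vbeln( ).
  ENDMETHOD.

  METHOD zif_rule~execute.
    " Single responsibility: does the sales order exist in the database?
    SELECT SINGLE vbeln FROM vbak
      INTO @DATA(lv_vbeln)
      WHERE vbeln = @mv_vbeln.
    ex_state = COND char4( WHEN sy-subrc = 0 THEN 'PASS' ELSE 'FAIL' ).
    ex_abort = abap_false.
  ENDMETHOD.

ENDCLASS.
```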

The Rule Builder and Rule Builder Factory

The Rule Builder

The Rule Builder does just that – builds the rules to be used.

The purpose of this interface is to put the rules together for use in a specific code location. It contains two methods: BUILD_RULES() and GET_RULES().

BUILD_RULES() creates an instance of each rule that is to be executed, and invokes the method ZIF_RULE~PREPARE() on each rule.
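A sketch of the builder contract and one concrete builder (the rule class and builder names are illustrative, and the context passed to each rule's PREPARE is omitted for brevity):

```abap
INTERFACE zif_rule_builder PUBLIC.
  TYPES tt_rules TYPE STANDARD TABLE OF REF TO zif_rule WITH EMPTY KEY.

  METHODS build_rules.
  METHODS get_rules
    RETURNING VALUE(rt_rules) TYPE tt_rules.
ENDINTERFACE.

" A builder for one specific code location / customer
CLASS zcl_rule_builder_cust_a DEFINITION PUBLIC.
  PUBLIC SECTION.
    INTERFACES zif_rule_builder.
  PRIVATE SECTION.
    DATA mt_rules TYPE zif_rule_builder=>tt_rules.
ENDCLASS.

CLASS zcl_rule_builder_cust_a IMPLEMENTATION.

  METHOD zif_rule_builder~build_rules.
    " Instantiate each rule relevant to this customer and prepare it
    DATA(lo_rule) = CAST zif_rule( NEW zcl_rule_1( ) ).
    lo_rule->prepare( ).
    APPEND lo_rule TO mt_rules.
  ENDMETHOD.

  METHOD zif_rule_builder~get_rules.
    rt_rules = mt_rules.
  ENDMETHOD.

ENDCLASS.
```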

The Rule Builder Factory

Responsible for deciding which Rule Builder to build, based on arbitrarily chosen input parameters; these could be anything needed to discriminate between the available Rule Builders. It has only one method, MAKE_ZCL_RULE_BUILDER(), which returns the appropriate Builder.

Here’s an example of the implementing code that uses an EDI partner as a parameter:
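The original screenshot is not reproduced, so here is a minimal sketch of what such a factory might look like; the partner ID, its typing and the builder class names are placeholders:

```abap
CLASS zcl_rule_builder_factory DEFINITION PUBLIC.
  PUBLIC SECTION.
    CLASS-METHODS make_zcl_rule_builder
      IMPORTING
        iv_edi_partner    TYPE edi_sndprn  " EDI partner (assumed typing)
      RETURNING
        VALUE(ro_builder) TYPE REF TO zif_rule_builder.
ENDCLASS.

CLASS zcl_rule_builder_factory IMPLEMENTATION.

  METHOD make_zcl_rule_builder.
    CASE iv_edi_partner.
      WHEN 'CUSTOMER_A'.
        " This partner has its own specific set of rule checks
        ro_builder = NEW zcl_rule_builder_cust_a( ).
      WHEN OTHERS.
        " Everyone else gets the rules common to all unspecified customers
        ro_builder = NEW zcl_rule_builder_generic( ).
    ENDCASE.
  ENDMETHOD.

ENDCLASS.
```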

Any specific partner can have their own specific rule builder implementation, meaning that if a particular customer has specific rules to check, they can be instantiated for that customer only.

Customers having their own rules builder dramatically reduces code complexity; imagine having one universal rule builder that had to check what rule to implement based on customer and possibly many other parameters.

Note the WHEN OTHERS statement. This is important; it means that for any customer that does not have any specific rule checks, a generic rule builder is instantiated that contains rules common for all unspecified customers.

The Rules Evaluator

This is the final piece of the puzzle: the Rules Evaluator.

The Rules Evaluator is a stand-alone utility class that has one job only, satisfied by the EVALUATE() method: to run the rules passed into it and return an aggregated EX_STATE and EX_ABORT. It can be used for any collection of rules held in a table of references to rules.

Most of the logic deals with aggregation of the results of the collective evaluation of the individual rules.

A warning state can be issued only if the current overall state of evaluation of the preceding rules is a pass state.

Any state can go to an error state, but the error state is irreversible; no subsequent rule evaluation can change it.

An abort condition is preserved to pass back with the method parameters.
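The aggregation rules just listed can be sketched as follows; the class name, table type and state literals are my assumptions, with PASS/FAIL/WARN matching the levels of EX_STATE described earlier:

```abap
CLASS zcl_rules_evaluator DEFINITION PUBLIC.
  PUBLIC SECTION.
    TYPES tt_rules TYPE STANDARD TABLE OF REF TO zif_rule WITH EMPTY KEY.

    CLASS-METHODS evaluate
      IMPORTING
        it_rules TYPE tt_rules
      EXPORTING
        ex_state TYPE char4
        ex_abort TYPE abap_bool.
ENDCLASS.

CLASS zcl_rules_evaluator IMPLEMENTATION.

  METHOD evaluate.
    ex_state = 'PASS'.
    ex_abort = abap_false.

    LOOP AT it_rules INTO DATA(lo_rule).
      lo_rule->execute( IMPORTING ex_state = DATA(lv_state)
                                  ex_abort = DATA(lv_abort) ).

      " An abort condition is preserved and stops all further checks
      IF lv_abort = abap_true.
        ex_state = lv_state.
        ex_abort = abap_true.
        RETURN.
      ENDIF.

      " FAIL is irreversible; WARN only ever upgrades a PASS
      IF lv_state = 'FAIL'.
        ex_state = 'FAIL'.
      ELSEIF lv_state = 'WARN' AND ex_state = 'PASS'.
        ex_state = 'WARN'.
      ENDIF.
    ENDLOOP.
  ENDMETHOD.

ENDCLASS.
```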

Putting it all Together

Having explained the components in detail, let's see them all in action.

Below, I have some code that is responsible for processing an inbound IDOC message. Part of that functionality is to create a sales order as part of a multi-step process of:

Create Sales Order

Create Delivery with reference to Sales Order

Allocate batches to Delivery

Post Goods Issue

The Sales Order is to be created provided that:

One has not already been created (due to successful processing of this stage of IDOC handling in a previous handling of the IDOC)

The customer is not in credit block

Several other checks

So how does it get executed? At run time, the code that handles the IDOC builds the rules and then evaluates them as follows. First, the Rule Builder Factory is invoked, in order to build the correct set of rules for a particular customer:

Overall, disruption of the original code responsible for creating the sales order has been reduced to a few lines of code and comments, with a minimal addition to cyclomatic complexity:
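The original call-site screenshots are not reproduced, but a hedged sketch of what the call site might look like follows; all class, method and variable names are illustrative, and LS_IDOC_CONTROL stands in for the IDOC control record:

```abap
" Build the correct set of rules for this EDI partner...
DATA(lo_builder) = zcl_rule_builder_factory=>make_zcl_rule_builder(
                     iv_edi_partner = ls_idoc_control-sndprn ).
lo_builder->build_rules( ).

" ...then evaluate them all in one place
zcl_rules_evaluator=>evaluate(
  EXPORTING it_rules = lo_builder->get_rules( )
  IMPORTING ex_state = DATA(lv_state)
            ex_abort = DATA(lv_abort) ).

IF lv_abort = abap_true.
  RETURN.               " all bets are off: stop processing this IDOC
ENDIF.

IF lv_state <> 'FAIL'.
  " All checks passed (or merely warned): create the sales order as before
ENDIF.
```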

Assigned tags

Related Blogs

Related Questions

You missed an important step. Shooting the analyst who didn’t do his job describing the requirements in the first place 🙂

If only more programmers would spend time thinking about the design of their code, they’d use patterns like this. One problem I’ve encountered often is that the skill level of programmers is so low, they don’t even realise that they should implement a new class to the interface. Instead they change the implementation. Which works lovely for their scenario… and screws up all the other scenarios…

I’d love to have all requirements up front, and a box of bullets for those who failed to provide them 🙂

Unfortunately it never happens that way, so all you can do is try to indemnify your solution against the eventuality. I work with a very smart Technologist (as he calls himself – he hates to be pigeonholed) who, whenever he embarks on a new implementation asks himself the question ‘what could go wrong with this implementation?’; he calls it a ‘pre-mortem’ rather than waiting for the solution to go wrong and then holding a ‘post-mortem’.

I guess for me this design is a consequence of that kind of ‘pre-mortem’ thinking.

Concerning your comment regarding skill levels and subsequent maintenance of the solution – we will be handing this solution over to a third party outsourcing partner in a few months, and yes I am really worried about correct support of the solution. I have thought about this a bit (another pre-mortem) and have managed to persuade management that after the handover, there will be an associated multiple choice questionnaire, and any ‘resource’ that does not get a high enough score in the test will not be allowed to maintain the solution. Re-certification will also happen on a periodic basis (with different sets of questions, just to be sure 😉 ).

This rule framework reduces complexity for developers, but given a large number of rules, how to explain the system behavior to the end user?

I think a log containing the description and the evaluation result of each rule applied (e.g. “Check that referenced sales order exists” and “passed”/”failed”) is needed. This list could be added to the IDoc status or displayed.

This would give feedback, providing a simple check for testing and improve business acceptance.

There was actually more to the implementation than I presented, but I cut out some stuff in an attempt to focus on the core concepts. For instance, each rule appends status records to an IDoc status table in order to see what rule failed and why; so in essence I had a variant of what you had suggested.

Concerning your comment: ” how to explain the system behavior to the end user?” – I have struggled with, and am still struggling with how to communicate the behavior effectively. I think that this blog was an attempt to see how successfully I could communicate the concepts. Not sure if I did a very good job, but I will take any feedback in the comments and see if I can improve on that.

My comment about using BRF+ was based on the exchange between you & Jacques, regarding tracing the rules, illustrating the behavior to the end-user etc. I just wanted to know if there’s any specific reason for using the pattern over BRF+.

The point about a design pattern is that it does not matter how you implement it. You can have a generic interface, or use BRF+, or the HANA rules framework, or whatever they come up with next; either way you are still following the pattern, and what a good pattern it is.

At the moment I have a requirement that has to be implemented in two different countries and they want to do it in totally different ways – one by order type, one by a field in the customer master. Regardless in both cases it boils down to “yes/no” based on the sales order data.

And in both cases the rules will change in the future – a lot. So abstracting the rule follows the good old “open closed principle”.

I have to work with a "Frankenstein's Monster", if you'll forgive the pun…

That is, more modern code uses BRF+, our head office created their own version of BRF+ long before the real BRF+ came along, there are Z configuration tables, complex conditional algorithms, and the actual IMG tables referenced by Z code.

They are all rules, and need to be treated by the calling program in the exact same manner.

Moreover, some give yes/no answers, and some give a structure of values as a result. In the latter case you could say you are doing several rules at once,

i.e. for sales organisation A the widget value is X

for sales organisation A the wudget value is Y

you could have one rule table for each, but when you have 15 return values one rule table seems more logical.

In my experience there is plenty of code out there that has been written in the manner in which I described; I have to maintain a lot of code that has had a legacy of many developers working on it, with a diverse approach to code implementation. Sometimes there are opportunities for improvement (and I include myself in that observation).

The example scenario described was a hypothetical context, within which to perform the main objective, which was to present information that I believed could be worth disseminating to others,

The point I was trying to make was that if you develop your code using good practice, modularisation and all the other various techniques out there that promote good coding, you should not end up with a rat's nest of IFs, THENs and whatevers.

I always try to keep in mind the guy following me who has to maintain my code….

“Not only that, but every time the code is modified, it means that the source code is opened up, edited and transported”

And with this pattern new rules are somehow magically implemented without making any changes in the code and without a transport? Wouldn’t you still need some kind of a code change in a class, not to mention that CASE to use the right type? I’m confused…

This looks like a good old modularization in a new dress but maybe I’m missing something.

Thanks for the observation; it's an entirely bona fide comment. I need to break my response down into two areas.

I think that there is very rarely a black and white clear cut delineation of issues in any problem to be solved, and this is no different.

Firstly; technically, you are entirely correct that in the scenario that I presented, there will need to be some minor code changes made to:

Create a new subclass, and

Hook it into the CASE statement in the factory.

However, as Suhas pointed out, the question for me would be: in which scenario do I invite minimum disruption? I would posit that making changes in one location where expected (the factory) might not be too disruptive, and certainly does not disrupt the original code.

Additionally I concur; a transport is needed for the new subclass and factory.

Secondly, it is possible to add new classes without even making changes to the factory, further aligning to the ‘open for extension’ principle; there is a function module – SEO_CLASS_GET_ALL_SUBS – that will return all subclasses for a given superclass:

If you write your factory to use this, it is possible to dynamically determine any new instance of a class that appears in the system and have an instance of it created automatically in the factory method.
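As a sketch of that idea (the exact interface of SEO_CLASS_GET_ALL_SUBS is not shown here; assume the subclass names have already been collected into an internal table):

```abap
" LT_SUBCLASSES is assumed to hold the names of all subclasses of
" ZCL_RULE, gathered beforehand, e.g. via SEO_CLASS_GET_ALL_SUBS
DATA lt_subclasses TYPE STANDARD TABLE OF seoclsname WITH EMPTY KEY.
DATA lo_rule       TYPE REF TO zif_rule.
DATA rt_rules      TYPE STANDARD TABLE OF REF TO zif_rule WITH EMPTY KEY.

LOOP AT lt_subclasses INTO DATA(lv_clsname).
  " Dynamic instantiation: a brand new rule subclass is picked up
  " automatically, with no change to the factory code
  CREATE OBJECT lo_rule TYPE (lv_clsname).
  APPEND lo_rule TO rt_rules.
ENDLOOP.
```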

At the end of the day, I had to make a tradeoff between how much information to include in the blog posting in order to convey the essence of the message, versus the risk of omitting something that may lead to obfuscation. This was my first ever blog on any social media platform, so a bit of fine tuning is no doubt in order.

Finally, apologies if I have posted something that is perceived to be misleading; I hope that the essence of the message came across with the objective of sharing something that I believed to be useful, intact and untarnished.

Thanks for the explanation! I think the blog is perfectly fine other than some slight exaggeration of the virtues. In any case, it’s much better than the vast majority of the SCN blogs these days. Special thanks for sticking up with just one font and color. 🙂

It'd probably be helpful if you started with the more dynamic option, as described in your comment, that does not require the factory changes. It'd just make more sense IMHO. Otherwise some readers might think that if you still need to change the factory then it defies the purpose.

I agree though that development is rarely black and white. We need to know both the tools and when it’s practical to use them.