Real life coding


Articles about separation of concerns generally deal with separating concerns between different classes. However, there is more to it than separating out classes.

What is the point of separation of concerns? It is a good concept to follow because it reduces ‘bleeding’, where code changes (side-effects) in one area affect code in other areas. It also means that to fix a bug or change a feature you only have to make the change in one small place. Finally, it contributes to code reuse, as each piece of code has a single job and can often be used in different contexts.

So, separation of concerns is not really about classes, it’s about code generally. This means you should apply separation of concerns at all levels within a solution.

For example:

Place your framework code (e.g. your generic code) into its own project. You can reuse this generic code in any solution; make it part of your personal toolkit.

Place your data access code in a separate project from your business logic. This means you can transfer to some other data storage technology (like the cloud) without affecting your business logic. Most people already do this in a layered architecture.

Place your data structures in their own project (be they DTOs, EF Entities, or ADO.NET datasets), separate from data access or business logic; if you decide to extend your system or implement an SOA architecture, having the data structures separated will make them easily accessible from multiple places. Again, many people already do this, and if you don’t, you really should.

Separation of concerns within a method

At the method level you want to separate out the ‘ceremony’ code that supports the business logic from the actual business logic itself. Ceremony code includes things like logging and parameter checking; things that you need to have for robust code, but aren’t specifically involved in the business logic.

You will often read SOLID proponents state that you should create a virtual (but not abstract) base class with the business logic, then derive into a class that wraps parameter checking around the methods, then derive a third class from that to wrap logging around that, etc.
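A sketch of that layered-inheritance approach might look like the following (all class and member names here are my own invention, purely to illustrate the shape of the pattern; the log entries are collected in a list so the layering is visible):

```java
import java.util.ArrayList;
import java.util.List;

// Base class: pure business logic only (Java methods are virtual by default).
class LogWriter {
    protected final List<String> sink = new ArrayList<>();

    public void write(String message) {
        sink.add("LOG: " + message);   // the actual business logic
    }

    public List<String> entries() {
        return sink;
    }
}

// First derived class: wraps parameter checking around the method.
class ValidatingLogWriter extends LogWriter {
    @Override
    public void write(String message) {
        if (message == null || message.isEmpty()) {
            return;                    // reject bad input before the real work
        }
        super.write(message);
    }
}

// Second derived class: wraps tracing/logging around that.
class TracingLogWriter extends ValidatingLogWriter {
    @Override
    public void write(String message) {
        sink.add("TRACE: entering write");
        super.write(message);
        sink.add("TRACE: leaving write");
    }
}
```

Three classes, three files, and a fixed wrapping order baked into the inheritance chain — all to decorate one method.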

To me this sounds awfully complicated and long-winded.

Aspect Oriented techniques can be a big help for the general aspects (like logging), and contract-based programming can help with parameter validation (especially Microsoft’s contract library with compile-time checking). However, sometimes there isn’t an exact contract that performs the check you want or you’re just working on legacy code that doesn’t use advanced techniques and there isn’t time to fit them retrospectively.

In such cases I would suggest the following technique.

Example code here is from a simple logging system. It’s probably true that most developers would write it in the form:
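The original code sample isn’t reproduced here, so the following is a reconstruction of that “before” shape under my own assumptions: a logger with an enabled flag and a message parameter, with the business logic nested inside the checks (all names are hypothetical):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Hypothetical "before" version: the write only happens when logging is
// enabled and the message is valid, with the business logic nested inside
// the conditional checks.
class NestedLogger {
    private final PrintWriter out;
    private final boolean enabled;

    NestedLogger(PrintWriter out, boolean enabled) {
        this.out = out;
        this.enabled = enabled;
    }

    public void log(String message) {
        if (enabled) {
            if (message != null && !message.isEmpty()) {
                // the actual business logic, buried two levels deep
                out.println("[LOG] " + message);
                out.flush();
            }
        }
    }
}
```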

What we have here is the basic business logic surrounded by error checking (or, from another point of view, execution optimisation) code. That’s not separate, that’s highly coupled. If there are multiple parameter validation checks to be made, then the conditional statements build up and up and it can become difficult to see which code is actually the business logic. If you’re nesting all those conditional statements, your actual business logic might be indented so far you have to scroll to the right to even see it.

If we invert the if-statement, we can make these two concerns separate blocks within the method:
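Again the names below are assumed rather than taken from the original sample; the point is the shape: each check becomes a guard clause that exits early, leaving the business logic as a flat, separate block:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Hypothetical "after" version: inverted checks as guard clauses, one check
// per if statement, followed by the business logic as a single flat block.
class GuardedLogger {
    private final PrintWriter out;
    private final boolean enabled;

    GuardedLogger(PrintWriter out, boolean enabled) {
        this.out = out;
        this.enabled = enabled;
    }

    public void log(String message) {
        // Ceremony: each guard clause makes one check and exits early.
        if (!enabled) {
            return;
        }
        if (message == null) {
            return;
        }
        if (message.isEmpty()) {
            return;
        }

        // Business logic: no nesting, no scrolling.
        out.println("[LOG] " + message);
        out.flush();
    }
}
```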

So we now have two separate chunks of code – the ceremony code followed by the business logic code. If there are multiple checks to perform, split them out into individual if statements, one after the other, each exiting the method or function if the parameter is bad. Having separate if statements is also a level of separation – each conditional block has one validation to make.

This technique is both quicker and simpler than inheriting into wrapper classes: it is easy to identify which code is which, and changes in one block won’t break the other (although any unit tests will test both). After you are done writing/refactoring the method with separation, you could extract all the conditional parameter checks into a separate method (returning a Boolean) if you really felt the need, or you can leave it intact if you find that easier to read and debug. This method isn’t as cast-iron as inheriting multiple levels of classes, but I’d place it at about 80% of the effect for 20% of the effort, which I think is a good pragmatic trade for projects that need to get finished.
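If you did extract the checks into a Boolean-returning method, it might look like this (again using a hypothetical logger with an enabled flag; the names are mine):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Hypothetical variant with the parameter checks pulled into a single
// Boolean-returning method, so the public method reads as one guard
// followed by the business logic.
class CheckedLogger {
    private final PrintWriter out;
    private final boolean enabled;

    CheckedLogger(PrintWriter out, boolean enabled) {
        this.out = out;
        this.enabled = enabled;
    }

    // All the ceremony in one place; still one check per if statement.
    private boolean canLog(String message) {
        if (!enabled) return false;
        if (message == null) return false;
        if (message.isEmpty()) return false;
        return true;
    }

    public void log(String message) {
        if (!canLog(message)) return;     // ceremony behind a single call

        out.println("[LOG] " + message);  // business logic
        out.flush();
    }
}
```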

One thing that took me a while to figure out is that in business software, code doesn’t have to be perfect; bugs can be left in the system. I originally came from a science background and the sort of code I started out on had to be perfect; a simulation with a bug is worthless and a piece of medical software with a bug can be fatal. Business software is never going to be fatal, so a minor bug or two is not a problem. Leaving known bugs in the system is anathema to many programmers (the stereotype of the programmer as obsessive perfectionist is justified). The problem is that as a developer you can become completely absorbed in the code you are writing and forget that this code is just one cog in the machine. One thing that you need to remember when writing business software:

writing business software is not about writing software, it’s about improving the business

So, does spending two days chasing down that annoying screen update issue help the business, or perhaps could the users put up with that for now? Every bug fix is a business case: is it cheaper to get the developers to re-write code for that data breakage that occurs once every blue moon, or to just manually fix the database when it happens? Working on code has a cost (your wages at the very least), and so that cost must be used to its best advantage. As frustrating as it might seem at times, leaving minor bugs in place is often the best business strategy.

Of course, the other side of the coin is that as these minor annoyances build up, the system moves more and more towards being viewed as ‘bad software’ by the people who actually use it. So we end up walking the line between wasting resources fixing annoyances that don’t really bring business benefit and having the users think that we developers can’t do our jobs properly.