In some projects I work on, we more or less follow git flow. It is awesome! The problem is that somehow the branches have become a technique to control what gets released (a kind of feature toggle). It means that when features are ready, their branches are put on hold until we decide we can release them.

Feature Branching is a poor man’s modular architecture, instead of building systems with the ability to easy swap in and out features at runtime/deploytime they couple themselves to the source control providing this mechanism through manual merging.

Dan Bodart

An example

Let’s say we have feature branches. Reverend Green and Professor Plum start working on their branches, G and P respectively. In the center there is the develop branch D, where branches are merged when ready. At some point Reverend Green’s branch is merged, creating commit D4. Now Professor Plum’s branch is out of date and must be synchronized by merging in from develop; this creates commit P5.

What’s the big deal? The bigger this P5 merge is, the more likely it is that Professor Plum’s branch ends up being completely useless. The problem comes from semantic changes in the code: a method that no longer does what it did before, a class that has been removed, some refactoring (methods extracted, arguments added or removed) and so on. The code might even compile, but the purpose of the branch will be lost.

Merge paranoia

Long-lived feature branches take you to a point where most merges are dangerous. Sometimes because they are too big; other times because the branches that are supposed to be released are way too out of date.

All this merge avoidance is called merge paranoia. It’s a recognized condition of a code base in the early stages of the Software Peter principle. It means that the technical debt has become so big that any tiny little change could bring the whole thing down.

This is a problem, but source control can do absolutely nothing about it.

Lost work

I took a repository, created a PowerShell script and, for every open branch, got its difference with develop, its first and last commit dates, and two time spans: since the last commit, and between the first and last commits. The numbers were alarming.

Just a brief sample:

Branch | Files changed | Days since last commit | Days between first and last commit
-------|---------------|------------------------|-----------------------------------
B01    | 1435          | 665                    | 3
B02    | 1389          | 665                    | 0
B31    | 302           | 133                    | 160
B37    | 207           | 101                    | 77
B38    | 47            | 80                     | 0
B39    | 30            | 65                     | 7
B40    | 8             | 58                     | 0
B41    | 8             | 58                     | 0
B42    | 7             | 57                     | 0
B43    | 1             | 56                     | 0
B44    | 1             | 56                     | 0

There are 44 open branches; the ones missing from the table have been hidden for brevity. The youngest branch is almost 2 months old; the oldest, almost 2 years. Most people don’t remember what they have done after a couple of weeks, and the time span between first and last commits is as big as 5 months. I can safely affirm that from B01 to B39 there is absolutely nothing to do: they are lost… forever. This repository has had 257 branches in total, so that is 15.2% of all the work ever done here.

So…what do we do?

Then the time came: fresh blood and a brand new project. Let’s try the thing as it was defined! For real! It turned out: it works!

We have user stories broken down into tasks. Each task must take less than a day, or it should be broken down further. For each task we create at least one new branch. That branch (or those branches) will exist for a couple of hours. When done, each branch is merged into develop via a pull request: peer reviewed, approved, merged and closed.

Merge conflicts are very common but extremely easy to solve. We have tons of unit tests and static code analysis that tell us when we have broken something or increased the technical debt too much.

The productivity boost has been amazing! But there is still a lot of room for improvement.

This code is not valid in C#. The reason is obvious: it would lead to execution errors. Somehow, though, the same thing was allowed for arrays.
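Array covariance is the classic illustration: the assignment below compiles, but storing the wrong element type fails at runtime.

```csharp
using System;

class ArrayCovarianceDemo
{
    static void Main()
    {
        // Arrays are covariant: a string[] may be assigned to an object[].
        object[] objects = new string[1];

        try
        {
            // Compiles fine, but at runtime the CLR rejects storing
            // a boxed int in what is actually a string[].
            objects[0] = 42;
        }
        catch (ArrayTypeMismatchException)
        {
            Console.WriteLine("ArrayTypeMismatchException at runtime");
        }
    }
}
```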

Formally

An overriding function’s parameters or return type are said to be covariant if they have a more specific type than the ones being overridden. They are said to be contravariant if it is the other way around.

It’s always safe to return a more specific type (covariance) and to receive, in parameters, a less specific one (contravariance).

Covariant delegate return
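A minimal sketch of the idea (names are illustrative): the delegate promises an object, the method returns the more specific string, and the method group conversion accepts it.

```csharp
using System;

class CovariantDelegateDemo
{
    // The delegate promises to return an object…
    delegate object Producer();

    // …and this method returns the more specific string.
    static string MakeGreeting() => "hello";

    static void Main()
    {
        // Covariant return: every string is an object,
        // so no caller of the delegate can be surprised.
        Producer producer = MakeGreeting;
        Console.WriteLine(producer()); // prints "hello"
    }
}
```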

Sum and wrap up

Covariance is safe for outputs, hence the out keyword; contravariance is safe for inputs, hence the in keyword. The compiler will take care of disallowing variance when no modifier is applied to interfaces and delegates, of allowing proper variance with the proper modifiers, and of rejecting the wrong ones.
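Both modifiers can be seen in action with the BCL’s own variant types:

```csharp
using System;
using System.Collections.Generic;

class VarianceDemo
{
    static void Main()
    {
        // IEnumerable<out T> is covariant: T appears only as output,
        // so a sequence of strings can stand in for a sequence of objects.
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings;

        // Action<in T> is contravariant: T appears only as input,
        // so a handler for any object can stand in for a string handler.
        Action<object> printAny = o => Console.WriteLine(o);
        Action<string> printString = printAny;

        foreach (var o in objects) printString((string)o);
    }
}
```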

Intro

I love extensibility. If it comes in an automatic, convention-based form, even better. I love plugins, reflection and everything else that allows my application to grow and change without touching existing code. I know… it’s hard, but I like it too much. So I’ll keep trying… and failing.

A bootstrap is a class that is instantiated only once. It is executed at some point during the initialization sequence and performs some simple initialization from its constructor.
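In its simplest form it could look like this (a sketch; the class name and the particular initialization are illustrative, not from the original post):

```csharp
using System.Globalization;
using System.Threading;

// A bootstrap: constructed exactly once during startup,
// all the one-off initialization happens in the constructor.
public class CultureBootstrap
{
    public CultureBootstrap()
    {
        // hypothetical one-time setup: pin the thread culture
        Thread.CurrentThread.CurrentCulture = CultureInfo.InvariantCulture;
    }
}

// Somewhere in the startup sequence:
// new CultureBootstrap();
```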

It might not look like much, but bear with me a little longer, I will show you some juice.

Real life use case

Let’s take AutoMapper for instance. Before using it, it needs to be configured.

Mapper.CreateMap<Source, Destination>();

This line could be part of our application startup sequence, but as the domain evolves, more of these lines will be necessary. Instead, we can move this kind of code to a bootstrap, and have one of these bootstraps per subdomain. Anytime a new subdomain joins the picture there will be absolutely no need to modify existing code.

Now… there is something about AutoMapper. I really don’t like static things, only very small stateless functions. But since AutoMapper is the best of its kind and there is no simple way around its static-ness, I’d rather abstract it and hide its use away.

Cool, right? The fancy fluent interface is a plus… in real life I might not have the time for it, but now… let’s fly. Behind this IMapping there could be a very simple implementation that just calls the static method. Of course, there could be much more complex mappings, but this ain’t about abstracting AutoMapper, so let’s take that subject up some other time.
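The abstraction could be as small as this sketch; the member names are my guesses based on how IMapping and IMappings are used later in the post:

```csharp
using System;

// Describes one mapping between a source and a destination type.
public interface IMapping
{
    Type SourceType { get; }
    Type DestinationType { get; }
}

// The registry the bootstraps talk to instead of AutoMapper's statics.
public interface IMappings
{
    void Add(IMapping mapping);
}

// A generic convenience implementation of a single mapping.
public class Mapping<TSource, TDestination> : IMapping
{
    public Type SourceType => typeof(TSource);
    public Type DestinationType => typeof(TDestination);
}

// A trivial IMappings implementation would just forward each Add to
// Mapper.CreateMap(mapping.SourceType, mapping.DestinationType).
```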

Testability

Another huge advantage of abstractions is testing. We can provide a fake IMappings and seal the proper behavior with blood over stone (or even better: with unit tests).

This code says that after creating an instance of PersonAutoMapperBootstrap, its dependency IMappings (which in this case is a Mock<IMappings>) will be called with an IMapping with the specified SourceType and DestinationType. Here I’m using xUnit, Moq and AutoFixture. You will find a lot about testing with these tools on Ploeh’s blog.

Initialization sequence

I use behavior chains for my initialization sequences. With time I might need to add more steps, remove old ones… or whatever; this pattern has been very useful for me. For the bootstraps I need 2 steps.

As the application grows, more and more error handlers will join in. We can keep up by just creating more and more bootstraps. There won’t be a need to modify existing code, and yet we’ll be extending the application.

Execution order

Sometimes it will make sense to execute a bootstrap after another (or several others). If the DAG of prerequisites gets really messy, we should create some convention to make sure they are executed in the proper order, e.g. a PriorityAttribute. For the simple case, specifying the prerequisites as dependencies is enough.
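A sketch of the attribute-based convention: PriorityAttribute is named above, but the ordering code and the sample bootstraps are my assumptions.

```csharp
using System;
using System.Linq;

[AttributeUsage(AttributeTargets.Class)]
public class PriorityAttribute : Attribute
{
    public int Value { get; }
    public PriorityAttribute(int value) { Value = value; }
}

[Priority(1)] public class DatabaseBootstrap { }
[Priority(2)] public class CacheBootstrap { } // needs the database first

public static class BootstrapRunner
{
    // Instantiate bootstraps in ascending priority order
    // (unattributed bootstraps default to priority 0).
    public static object[] RunInOrder(params Type[] bootstrapTypes) =>
        bootstrapTypes
            .OrderBy(t => ((PriorityAttribute)Attribute
                .GetCustomAttribute(t, typeof(PriorityAttribute)))?.Value ?? 0)
            .Select(Activator.CreateInstance)
            .ToArray();
}
```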

Outro

Bootstraps simplify extensibility in a decoupled, testable manner. They are a simple technique that solves a not-so-small problem, and they can be used seamlessly along with many technologies, no sweat. We might just require some automatic discoverability (i.e. reflection). They are in use in many commercial real-life projects I have been involved in.

Introduction

A very popular architecture for enterprise applications is the triplet Application, Business Logic Layer (BLL), Data Access Layer (DAL). For some reason, as time goes by, the Business Logic Layer starts getting fatter and fatter, losing its health in the process. Perhaps I was doing it wrong.

Somehow very well designed code gets old and turns into a headless monster. I have run into a couple of these monsters, and I have been able to tame them using FubuMVC’s behavior chains: a pattern designed for web applications that I have found useful for breaking down complex BLL objects into nice maintainable pink ponies.

Paradise beach service

I need an example to make this work, so let’s go to the beach. Spain has some of the best beaches in Europe. Let’s build a web service to search for the dream beach. I want the clients to enter some criteria: province, type of sand, nudist, surf, and some weather conditions, as some people might like sun, others shade, and surfers will certainly want some wind. The service will return the whole matching list.

There will be 2 entry points:

Minimal. Results will contain only beach Ids. Clients must have already downloaded the JSON Beach List.

Detailed. Results will contain all information I have about the beaches.

The weather report will be downloaded from a free online weather service like OpenWeatherMap. All dependencies will be abstract and constructor injected.

This is very simple and might look like good code. But hidden in these few lines is a violation of the single responsibility principle. Here I’m fetching data from a DAL and from an external service, filtering, ordering and finally transforming data. There are five reasons for this code to change. It might look OK today, but problems will come later, as the code ages.

Let’s feed it some junk food

In any actual production scenario, this service will need some additions: logging, to see what is going on and get some nice-looking graphs; caching, to make it more efficient; and some debug information to help us exterminate infestations. Where would all these behaviors go? To the business layer, of course. Nobody likes to put anything that is not database-specific into the DAL, and the web service itself does not have access to what is really going on. So… everything else goes to the BLL. This might look a little exaggerated, but believe me… it’s not.

If you don’t own any code like the previous: bravo, lucky you! I have written way too many BLLs that look just like this one. Now, ask yourself: what, exactly, does a “Paradise beach service” have to do with logging, caching and debugging? Easy answer: absolutely nothing.

Usually there would be nothing wrong with this code. But every application needs maintenance. With time, business requirements will change and I will need to touch it. Then a bug will be found: touch it again. At some point the monster will wake up, and there will be no more good news from that point forward.

Actual business logic

Let’s see what I’m actually doing:

Find candidate beaches. Those in specified province with wanted features and type of sand.

Get weather report about each of the candidates.

Filter out those beaches not matching desired weather.

Order by popularity.

Transform the data into expected output.

This is how you would do it manually with a map and maybe a telephone and a patient operator to get the weather reports. This is exactly what a BLL must do, and nothing else.

I will implement a BLL for each of the previous steps; each will have just one Execute method with one argument and a return value. Each step will have a meaningful, intention-revealing name; it will receive an argument of a type with the same name ending in Input and return a type with the same name ending in Output. Conventions rock!
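Under that convention, the first step could look something like this sketch (the type members and the in-memory stand-in for the DAL are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Beach
{
    public string Name { get; set; }
    public string Province { get; set; }
    public string SandType { get; set; }
}

// Input and output types named after the step, per the convention.
public class FindCandidateBeachesInput
{
    public string Province { get; set; }
    public string SandType { get; set; }
}

public class FindCandidateBeachesOutput
{
    public List<Beach> Candidates { get; set; }
}

public class FindCandidateBeaches
{
    private readonly IEnumerable<Beach> dal; // stands in for the real DAL

    public FindCandidateBeaches(IEnumerable<Beach> dal) { this.dal = dal; }

    // One Execute method, one argument, one return value.
    public FindCandidateBeachesOutput Execute(FindCandidateBeachesInput input) =>
        new FindCandidateBeachesOutput
        {
            Candidates = dal
                .Where(b => b.Province == input.Province
                         && b.SandType == input.SandType)
                .ToList()
        };
}
```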

I know what you’re thinking: I took a 15-lines-of-code (LOC) program and transformed it into one of 100 or more… You are right. But let’s see what I have: five clean and small BLLs, each representing a part of our previous single BLL. Their dependencies are abstract, which also makes it easy to test them thoroughly. Because they are so small, they are very easy to manage, maintain, substitute and even reuse. For instance, you don’t really need to have performed a live weather search to get a list of beaches and weather conditions to filter: you just need to create the input for a particular step and voilà, you can execute that step on its own. At the end I added a step to translate CandidateBeach into BeachMin, which is the response I really need for our original service. I also extracted interfaces for each of the steps; that helps with abstraction and some other things I’ll do later.

What do you know? I’m back to 15 LOC, maybe less. I think this code doesn’t even need explaining: I took our steps and chained them into a behavior chain. From now on we will refer to steps as behaviors. I’m kind of where I started, but now our service depends on external, extensible, reusable and abstract behaviors. Still, it must know them all, which makes adding a new behavior difficult. Another thing: I will have almost identical code for the other entry point. I must do something to improve these two.

Mechanize it

I know… Sarah Connor wouldn’t agree. I have a tool which takes some objects and automatically chains them together into a function, but first let’s see what a service depending on functions would look like.

I’m using ServiceStack as the web framework. Basically, both Any methods in the examples are web service entry points. As you can see, they delegate the actual work to functions injected through the constructor. At some point, which for ServiceStack is the application configuration, I need to create these functions and register them in the IoC container.

Each behavior kind of depends on the previous one, but it doesn’t really know it.

The chain is created from functions which could be instance or static methods, lambda expressions or even functions defined in another language like F#

The ExtracBehaviorFunctions method takes in objects and extracts their Execute method, or throws an exception if there is none. This is my convention; you could define your own.

The Chain method takes in delegates and creates a function by chaining them together. It will throw an exception if incompatible delegates are used.
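For two steps with compatible signatures, the chaining boils down to function composition. A simplified, strongly typed sketch of what Chain produces (the real implementation described above works over untyped delegates via Linq.Expressions):

```csharp
using System;

public static class BehaviorChain
{
    // Compose two behaviors: the output type of the first
    // must match the input type of the second.
    public static Func<TIn, TOut> Chain<TIn, TMid, TOut>(
        Func<TIn, TMid> first, Func<TMid, TOut> second) =>
        input => second(first(input));
}
```

Usage: `BehaviorChain.Chain<int, int, string>(x => x * 2, x => $"value: {x}")` yields a single `Func<int, string>` that runs both behaviors in order.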

Additions

I will enrich our BLLs by means of transparent decorators. Using Castle.DynamicProxy, I will generate types which intercept the calls to our behaviors and add some features; then I will register the decorated instances instead of the originals. I will start with caching and debugging. The cache is a trivial in-memory one with a 10-minute lifetime. More complicated solutions can easily be implemented.

Here I decorated every behavior with debugging, and only findCandidates with caching too. It might be interesting to add some caching to the weather report as well, but since the input might be a very big list of beaches, caching it wouldn’t be correct. Instead I will add caching to both the DAL and the weather service.
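DynamicProxy generates these decorators at runtime; to show the shape of what it generates, here is a hand-rolled equivalent of the debugging decorator for one behavior interface (the interface and its simplified signature are illustrative):

```csharp
using System;

public interface IFindCandidateBeaches
{
    string Execute(string input); // simplified signature for the sketch
}

// What DynamicProxy would generate for us: same interface,
// extra behavior around the call, then delegate to the real instance.
public class DebuggingDecorator : IFindCandidateBeaches
{
    private readonly IFindCandidateBeaches inner;

    public DebuggingDecorator(IFindCandidateBeaches inner) { this.inner = inner; }

    public string Execute(string input)
    {
        Console.WriteLine($"-> Execute({input})");
        var output = inner.Execute(input);
        Console.WriteLine($"<- {output}");
        return output; // the caller never knows a decorator was there
    }
}
```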

By using more powerful IoC containers, like Autofac, you will be able to create more powerful decorators, both automatically generated and manual. You won’t ever have to touch your BLL unless there are business requirement changes or bugs.

When to use

When your BLL is a set of steps that are:

Well defined. The responsibilities are clear and have clear boundaries.

Independent. The steps don’t know each other.

Sequential. The order cannot be changed based on input, and all steps must always be executed.

The behavior chain functions are kind of static: they are not meant to be altered during execution. You can, though, create a new function to replace an existing one based on any logic of your specific problem.

How it works

The generation code isn’t really that interesting: just a lot of hairy statements generating lambda expressions using the wonderful Linq.Expressions. You can still look at it in the source code. Let’s see instead how the generated code works. This is what the generated function looks like, more or less.

This will start the server on the configured port, 52451 by default. Now you need to create a client program. You can manually create a client project using ServiceStack, or any other web framework. You can also use the included LINQPad file at <project_root>\linqpad\search_for_beaches.linq, which basically does as follows:

Conclusions

High code quality is very important if you want a maintainable application with a long lifespan. By choosing the right design patterns and applying some techniques and best practices, any tool will work for us and produce really elegant solutions to our problems. If, on the other hand, you learn just how to use the tools, you are gonna end up programming for the tools and not for the ones that sign your paychecks.

Intro

Before you launch a rocket into space you must warm up your engines, do many other weird things and count down from 10: something like 10…9…8… and so on. Any application must do kind of the same. There are several steps that must be executed before the application is ready to do its work.

I’ve been gentle with myself and hidden the actual dirty tasks behind some static methods. Some of these methods might grow a lot with time; some might die, others might be born. This code will probably change a lot. There will be a lot of common tasks; for instance, third-party libraries, dependency injection and plugins might require some reflection scanning of a set of assemblies. These common tasks will result in weird refactorings or code repetition. In any case, this code’s maintainability will wear off, and since this part does not contain any domain logic, it will not find great favor with developers or management.

Deep in it

Gentleness is over; we need to see what’s inside. Most of this code is hypothetical. The plugin system is trivial: all libraries in the plugin folder will be loaded, and from that point onward they will be part of AllAssemblies, so when dependency injection (Autofac in this case) kicks in, the plugins will be registered.

These initialization steps are well defined and have clear boundaries; they are independent, as they don’t really know each other; they are sequential, and all of them are always executed. This is textbook behavior chains.

Links

I will convert each step into a class, and some steps I will break into very, very small ones. Then I will create inner classes Input and Output for the Run() method, the one that will do the actual work.

This step was not there before. It looks so trivial that you might feel tempted to remove it, but I can tell you it’s very useful, as there are usually many steps that need to scan all the assemblies for types fulfilling some criteria. This will save us some repeated code.
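The assembly-scanning step could be as small as this (the Run() shape follows the Input/Output convention above; how the assemblies are gathered is my assumption):

```csharp
using System;
using System.Reflection;

public class ScanAssemblies
{
    public class Input { }

    public class Output
    {
        public Assembly[] AllAssemblies { get; set; }
    }

    // Collect every loaded assembly once, so later steps
    // don't each repeat the scanning code.
    public Output Run(Input input) =>
        new Output
        {
            AllAssemblies = AppDomain.CurrentDomain.GetAssemblies()
        };
}
```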

So far this just looks like making it longer, right? Yes, but it’s also much more flexible, as each step has been transformed into a pure function. The steps can be reused, reordered, and many other REs. I know… that won’t get me a sale. I have more…

Here I have split the finding of caches from its implementation. We now have two kinds of unrelated steps that we can improve (or break) on their own. By splitting we decouple, and decoupling brings lots of good things.

We have taken a single step and covered it 100% with tests. This means that it doesn’t really matter what the other steps do; this one will do its job… for ever and ever until the end of time. Provided a collection of caches, this guy will call WarmUp() on all of them.
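A sketch of that step and the behavior its tests pin down: WarmUp() and the collection of caches are described above; the interface name and the Input/Output members are illustrative.

```csharp
using System.Collections.Generic;

public interface ICache
{
    void WarmUp();
}

public class WarmUpCaches
{
    public class Input
    {
        public IEnumerable<ICache> Caches { get; set; }
    }

    public class Output { }

    // Provided a collection of caches, call WarmUp() on all of them.
    public Output Run(Input input)
    {
        foreach (var cache in input.Caches)
            cache.WarmUp();
        return new Output();
    }
}
```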

We just need to create tests to ensure the other steps do their work.

Third parties

Integrating with third-party libraries is one of the things that makes our code go bad. Each comes with its own concepts that force us to do things its way, which is not always right.

Here I ran out of imagination. The thing is, any example that popped into my mind required some not very pretty code. So let’s just say I can convert each of the ConfigureLibraryX() calls into a step.

Gluing it together

At some point I have to create the chain with all the steps I have created. I will create a static class just for it. It will build the initialization function in its static constructor and will have only one method: Run().
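A sketch of that class; two trivial placeholder functions stand in for the real steps and the chain-building helper built earlier.

```csharp
using System;

public static class Initialization
{
    // Built once, in the static constructor.
    private static readonly Func<string, string> sequence;

    static Initialization()
    {
        // The real code would chain the real steps; these
        // placeholder behaviors just tag the state as it flows through.
        Func<string, string> warmUp = s => s + " warmed-up";
        Func<string, string> ready = s => s + " ready";
        sequence = s => ready(warmUp(s));
    }

    // The only public entry point.
    public static string Run(string appName) => sequence(appName);
}
```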

If some step is no longer needed: remove it from the step list and delete its test and class. If some step needs another input: modify its Input, make sure some previous step provides it, and remember to update your tests. You could even create a convention-based technique to automatically discover the steps.

Outro

We all have initialization sequences. The best ones I’ve seen are very sophisticated hierarchies. This is just another way to go, of a more functional kind. Yes, you’d need to type more; I don’t know about you, but I don’t mind. With this technique your steps will be modules, with all the advantages that come with modularity. It’s very easy to modify the sequence. I definitely see some ROI that could be sold to management.