Implementing it, though, presented a few interesting issues that were fun to solve and, hopefully, instructive as well. I for one will need to look it up if I spend a few months doing something else – so I'd better write it down :).

In the Scheduler user portal, some controllers derive directly from the MVC4 Controller class, whereas others derive from a custom base controller. For instance, controllers that deal with logged-in interactions derive from TenantController, which provides TenantId and SubscriptionId properties. IOW, a pretty ordinary and commonplace setup.

The controller builder basically keeps track of the different options, always returning this to facilitate chaining. Apart from that, it has a Build method which constructs a Controller object according to the recorded options and returns it. Something like this:
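The original snippet didn't survive, so here's a minimal reconstruction of that first attempt. The MVC types are stubbed out to keep it self-contained, and the With* names are hypothetical:

```csharp
using System;

// Stand-ins for the MVC types, so the sketch is self-contained.
public class Controller { }
public class TenantController : Controller
{
    public Guid TenantId { get; set; }
    public Guid SubscriptionId { get; set; }
}

// A single generic builder over any controller type. Each With* method
// records an option and returns this, so calls can be chained.
public class ControllerBuilder<T> where T : Controller, new()
{
    protected bool HasValidUser;

    public ControllerBuilder<T> WithValidUser()
    {
        HasValidUser = true;
        return this;
    }

    public T Build()
    {
        var controller = new T();
        // ...wire up mocks/context based on the recorded options...
        // Problem: T is only known to be a Controller here, so even when
        // T is a TenantController we can't touch TenantId/SubscriptionId.
        return controller;
    }
}
```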

It takes about 2 seconds to realize that this won't work – since the constraint only specifies that T should be a subclass of Controller, we don't have the TenantId or SubscriptionId properties available in the Build method.

Hmm – so a little refactoring is in order: a base ControllerBuilder that can be used for plain controllers, and a subclass for controllers deriving from TenantController. So let's move the tenantId out of ControllerBuilder.
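The refactored shape, sketched with stand-in types (what we'd like to write in the subclass is shown in the closing comment):

```csharp
using System;

// Stand-ins, so this sketch compiles on its own.
public class Controller { }
public class TenantController : Controller { public Guid TenantId { get; set; } }

public class ControllerBuilder<T> where T : Controller, new()
{
    protected bool HasValidUser;
    public virtual ControllerBuilder<T> WithValidUser() { HasValidUser = true; return this; }
    public virtual T Build() { return new T(); }
}

// Tenant-specific options live in the subclass.
public class TenantControllerBuilder<T> : ControllerBuilder<T>
    where T : TenantController, new()
{
    private Guid _tenantId;

    public TenantControllerBuilder<T> WithTenantId(Guid tenantId)
    {
        _tenantId = tenantId;
        return this;
    }

    public override T Build()
    {
        var controller = new T();
        controller.TenantId = _tenantId;
        return controller;
    }

    // What we'd LIKE: override WithValidUser to return the derived type,
    // so chains like WithValidUser().WithTenantId(...) keep working:
    //
    //   public override TenantControllerBuilder<T> WithValidUser() { ... }
    //
    // That's return type covariance - and C# rejects the override.
}
```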

What we want here is return type covariance – an overridden method returning a more derived type than the base declares. C# doesn't support it: the overriding method must return exactly ControllerBuilder, not TenantControllerBuilder. So the chaining methods inherited from the base keep returning the base builder type, and the tenant-specific options fall off the chain.

But this does muck up our builder API's chainability – and telling clients to call methods in some arbitrary sequence is a no-no. This is where extension methods provide a neat solution. It's in two parts:

1. Keep only state in TenantControllerBuilder.

2. Use an extension class to safely convert from ControllerBuilder to TenantControllerBuilder via the extension API.
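Put together, the two parts look roughly like this (again a self-contained sketch with stand-in types and hypothetical method names):

```csharp
using System;

// Stand-ins so the sketch compiles on its own.
public class Controller { }
public class TenantController : Controller { public Guid TenantId { get; set; } }

public class ControllerBuilder<T> where T : Controller, new()
{
    protected bool HasValidUser;
    public ControllerBuilder<T> WithValidUser() { HasValidUser = true; return this; }
    public virtual T Build() { return new T(); }
}

// Part 1: the subclass keeps ONLY state (plus Build) - no chaining
// methods of its own.
public class TenantControllerBuilder<T> : ControllerBuilder<T>
    where T : TenantController, new()
{
    internal Guid TenantId;

    public override T Build()
    {
        var controller = new T();
        controller.TenantId = TenantId;
        return controller;
    }
}

// Part 2: tenant-specific chaining lives in extension methods on the BASE
// builder type, so they can appear at any point in the chain. The downcast
// recovers the tenant-specific state.
public static class TenantControllerBuilderExtensions
{
    public static TenantControllerBuilder<T> WithTenantId<T>(
        this ControllerBuilder<T> builder, Guid tenantId)
        where T : TenantController, new()
    {
        var tenantBuilder = (TenantControllerBuilder<T>)builder;
        tenantBuilder.TenantId = tenantId;
        return tenantBuilder;
    }
}
```

The generic constraint means WithTenantId is only even offered on builders of tenant controllers, and the cast throws if someone somehow calls it on a plain ControllerBuilder instance – which is exactly the misuse we want to surface.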


Had some fun at work today. The web portal to the Scheduler service is written in ASP.NET MVC4, so we have a lot of controllers and, of course, unit tests that run on those controllers.

Now, while ASP.NET MVC4 apparently did have testability as a goal, it still requires quite a lot of orchestration to test controllers. All this orchestration and mock setup muddies the waters and gets in the way of test readability. By implication, tests become harder to understand and maintain, and eventually harder to trust.
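The snippet that went with this didn't survive, but the shape of the setup was roughly the following reconstruction. Hand-rolled fakes stand in for the mocking framework, and every name here is hypothetical:

```csharp
using System;
using System.Collections.Generic;

// Stand-ins for the services the controller depends on.
public class User { }
public class Subscription { public bool IsActive; }
public interface IUserService { User GetCurrentUser(); }
public interface ISubscriptionService { Subscription GetSubscription(User user); }

// A cut-down controller: reads the deep link from the request values and
// redirects there when the user and subscription check out.
public class AppController
{
    private readonly IUserService _users;
    private readonly ISubscriptionService _subscriptions;
    public Dictionary<string, string> RequestValues = new Dictionary<string, string>();

    public AppController(IUserService users, ISubscriptionService subscriptions)
    {
        _users = users;
        _subscriptions = subscriptions;
    }

    public string Index()
    {
        var user = _users.GetCurrentUser();
        if (user == null) return "Redirect:/login";
        var subscription = _subscriptions.GetSubscription(user);
        if (subscription == null || !subscription.IsActive) return "Redirect:/subscribe";
        return "Redirect:" + RequestValues["returnUrl"];
    }
}

// The per-test ceremony: fakes, wiring, canned data - all before the one
// line that actually matters. (The real setup also created the SUT as a
// partial mock via the mocking framework, adding more noise still.)
public class FakeUserService : IUserService
{
    public User UserToReturn;
    public User GetCurrentUser() { return UserToReturn; }
}
public class FakeSubscriptionService : ISubscriptionService
{
    public Subscription SubscriptionToReturn;
    public Subscription GetSubscription(User user) { return SubscriptionToReturn; }
}
```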

As you can see, we're setting up a couple of dependencies, then creating the SUT (_controller) as a partial mock in the setup. In the test, we're setting up the request value collection and then exercising the SUT to check that we get redirected to a deep link. This works – but the setup is too complicated. Yes, we need to create a partial mock and then set up expectations that correspond to a valid user with a valid subscription – but all of that is lost in the details. As a result, the test setup is hard to understand and hence hard to trust.

Test setups require various objects in different configurations – and that's exactly what a Builder is good at. The icing on the cake is that if we can chain calls to the builder, we move towards a nice DSL for tests. This goes a long way towards improving test readability – tests become DAMP (Descriptive And Meaningful Phrases).

So here’s what the Builder API looks like from the client (the test case):
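The original snippet is missing here, but the call site read roughly like this. A toy reconstruction follows – the method names are hypothetical, and the real Build is where the mocks and MVC context described earlier get assembled:

```csharp
using System;
using System.Collections.Generic;

// Toy stand-ins so the sketch runs on its own.
public class AppController
{
    public bool HasValidUser;
    public bool HasActiveSubscription;
    public Dictionary<string, string> RequestValues = new Dictionary<string, string>();
}

public class AppControllerBuilder
{
    private bool _validUser;
    private bool _activeSubscription;
    private readonly Dictionary<string, string> _requestValues = new Dictionary<string, string>();

    public AppControllerBuilder WithValidUser() { _validUser = true; return this; }
    public AppControllerBuilder WithActiveSubscription() { _activeSubscription = true; return this; }
    public AppControllerBuilder WithRequestValue(string key, string value)
    {
        _requestValues[key] = value;
        return this;
    }

    // In the real builder this is where the partial mock and MVC context
    // get wired up; here we just hand back the configured controller.
    public AppController Build()
    {
        return new AppController
        {
            HasValidUser = _validUser,
            HasActiveSubscription = _activeSubscription,
            RequestValues = new Dictionary<string, string>(_requestValues)
        };
    }
}
```

A test's arrange step then collapses to one chained statement – new AppControllerBuilder().WithValidUser().WithActiveSubscription().WithRequestValue("returnUrl", "/calendar").Build() – which states the preconditions and nothing else.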

While I knew what to expect, it was still immensely satisfying to see that:

We've abstracted away the details – that we're setting up mocks, that we're using a partial mock, even that we're using the MVC mock helper utility behind AppControllerBuilder – leading to simpler code.

The Builder helps the readability of the code – it makes it easy to understand what preconditions we'd like set on the controller. This is important if you'd like to get the test reviewed by someone else.

You might think this is just sleight of hand – after all, haven't we just moved all the complexity into AppControllerBuilder? And since I haven't shown its code, surely something tricky is going on ;)?

Well, not really – the Builder code is straightforward, since it does one thing (build AppControllers) and does it well. It has a few state properties that track the different options, and the Build method uses essentially the same code as the first snippet to construct the object.

Was that all? Not quite – as always, the devil's in the details. The code above isn't real – it's more pseudo-code. And an example in isolation is easier to tackle; IRL (in real life), things are more complicated. We have a controller hierarchy, and writing builders that work with the hierarchy had me wrangling with generics, inheritance and chainability all at once :). I'll post a follow-up covering that.


I've been absent from the blog for a few weeks. Life got taken over by work – I've been deep in the Javascript jungles, and Coffeescript has been a lifesaver.

Based on my earlier peek at Coffeescript, we went all in, and I have to say it has been a pleasant ride for the team. With over 4.7 KLoC of generated Javascript (the Coffeescript source weighing in around 3.7 KLoC including comments), I can now confidently recommend it for any sort of Javascript-heavy development.

I'm going to list the benefits we saw with Coffeescript – hopefully someone else trying to evaluate it will find this useful:

Developers who haven't dived deep into Javascript's prototype-based model find it easier to get up to speed. Yes, once in a while they get tripped up and have to look again at what's going on under the covers – but that's normal. The key point is that it's much, much more productive and enjoyable to use Coffeescript.

The conciseness of Coffeescript definitely goes a long way in improving readability. One of the algorithms we implemented applied a bunch of time-overlap rules. We also used Underscore.js – and between Coffeescript and Underscore.js, the whole routine came in under 20 lines, mostly bug-free and very easy for new folks to pick up and maintain over time. The generated JS was considerably more involved (though Underscore helped hide some of the loop-iteration noise) – and it wouldn't have been much different had we written the JS directly.
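To give a flavour of that kind of code – not our actual rules, just a hypothetical overlap check written here in plain Javascript rather than the Coffeescript-plus-Underscore original:

```javascript
// Two time ranges overlap when each starts before the other ends.
function overlaps(a, b) {
  return a.start < b.end && b.start < a.end;
}

// Collect every pair of conflicting bookings (the real rules had more
// cases - buffers, all-day events, and so on).
function conflicts(bookings) {
  var result = [];
  for (var i = 0; i < bookings.length; i++) {
    for (var j = i + 1; j < bookings.length; j++) {
      if (overlaps(bookings[i], bookings[j])) {
        result.push([bookings[i], bookings[j]]);
      }
    }
  }
  return result;
}
```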

Another benefit was that the easy class-structure syntactic sugar helped us quickly prototype new ideas and then refine them to production quality. With developers who're still shaky on JS, I doubt the same approach would have worked, since they'd have spent cycles trying to wrap their heads around JS's prototype-based model.

Coffeescript also lets you split the code across multiple source files and join them before compiling to JS – this allowed us to keep each source file separate and reduce the merges required during commits.

Finally, performance is a non-issue. You do have to be a little careful, though – you might find yourself allocating function objects and returning values you don't mean to – but this is easily caught in reviews.
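The gotcha in question, sketched: in Coffeescript, the last expression of a function is its return value, so a function ending in a loop quietly builds and returns an array. The JS below is hand-written to mirror the compiler's output; sendEmail and the names are made up:

```javascript
// Stub so the sketch runs on its own (hypothetical).
function sendEmail(user) { return "sent:" + user; }

// Coffeescript source (for reference):
//   notify = (users) ->
//     sendEmail(user) for user in users
//
// The loop is the function's last expression, so the compiler collects
// every result into an array and returns it - roughly:
function notify(users) {
  var results = [];
  for (var i = 0; i < users.length; i++) {
    results.push(sendEmail(users[i]));
  }
  return results; // allocated even though no caller wants it
}

// Adding an explicit trailing `return` in the Coffeescript compiles to a
// plain loop with no accumulator:
function notifyFixed(users) {
  for (var i = 0; i < users.length; i++) {
    sendEmail(users[i]);
  }
}
```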

One latent doubt I had going in was how often we'd have to drop down to the JS level to debug issues. With a larger Coffeescript codebase spread across multiple files, this is a real concern, since error line numbers wouldn't match the source. Luckily, this wasn't a problem at all – whether it's an error in the JS or just inspecting code in the browser, it's easy to map back to the Coffeescript class/function, fix it there and regenerate the JS. Besides, the generated JS is quite readable – so even when investigating issues, it's trivial to drop breakpoints in Chrome and see what's going on.

The one minor irritation: if there was a Coffeescript compile error, then with the files joined the reported line numbers were useless, and you had to compile each file independently to find the error. That's easily automated with a script – so I'm just being nitpicky.

Anyway, if you got here looking for advice on using Coffeescript, you've reached the right place – and maybe this post has helped you make up your mind!


I've just run across Coffeescript… can't believe what sort of a hole I've been living in.

It's a source-to-source compiler (i.e., when you 'compile' a Coffeescript script, you get Javascript source).

So why would you want a source-to-source compiler for Javascript?
Well, as apps become more and more front-end heavy with DHTML/Ajax bling, the Javascript that holds it all together also becomes more and more complex. Sure, you use jQuery (or insert your favourite JS framework) – but that's not even scratching the surface. You're still writing tons of JS code, dealing with its idiosyncrasies and tearing your hair out.

Enter Coffeescript – with clean syntax and elements of style borrowed from Ruby and Python, it's super clean and efficient. You write your code in Coffeescript, which is neat and concise, and what it generates is very idiomatic, readable Javascript.
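A tiny made-up example of the round trip – the Coffeescript is shown in comments, and the Javascript below is hand-written here but close to what the compiler emits:

```javascript
// Coffeescript:
//   square = (x) -> x * x
//
//   class Greeter
//     constructor: (@name) ->
//     greet: -> "Hello, #{@name}!"

// ...compiles to roughly:
var square = function(x) {
  return x * x;
};

var Greeter = (function() {
  function Greeter(name) {
    this.name = name;
  }
  Greeter.prototype.greet = function() {
    return "Hello, " + this.name + "!";
  };
  return Greeter;
})();
```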

The Javascript version is generated from the Coffeescript version above. Head over to the coffeescript.org page – there's an online interpreter where you can try out Coffeescript code and see the equivalent Javascript source it generates.

If you're wowed by that (I am) – and just in case you're saying goodbye to Javascript – here's the rub: since it's a source-to-source compiler, unless you understand what's going on under the covers, you'll hit a wall soon enough when you have to debug something.

So Javascript knowledge isn't optional – but if you have that bit covered, there's no reason to live with the iffy side of Javascript. Take a look at something like Coffeescript and have a little fun along the way.


So my affair with Vim continues – and I seem to have discovered Vim's macro super powers. The obvious next step is to shout from the rooftops, hence this blog post (there's hardly anything original here – apart from the fact that I've just had an 'aha' moment with macros and thought it might help other budding vimmers out there…).

A little primer: macros let you repeat a set of commands. Press q<macro_letter>, where <macro_letter> is a lowercase letter a-z. This starts recording a macro in Vim (you'll see a 'recording' message at the bottom). Now enter the commands you want to repeat, and press q again to finish recording. Vim records all the keystrokes in the register you specified as the macro name. To execute the macro, position the cursor where you want it and hit @<macro_letter> – Vim will faithfully replay your commands.
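A concrete (made-up) run-through: say you want to wrap each line of a list in quotes and add a trailing comma.

```vim
" With the cursor on the first line:
qa          " start recording into register a
I"          " insert a quote at the start of the line (then <Esc>)
A",         " append a quote + comma at the end (then <Esc>)
j           " move down one line, so the macro is repeatable
q           " stop recording
" Replay it on the next 10 lines in one go:
10@a
```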

It's a great time saver – especially for complex editing tasks where search/replace doesn't cut it. But if you're feeling a bit disappointed after coming this far (after all, I promised an 'aha' moment), hang on.

Today's discovery was that you can quite easily edit macros you've recorded and save them back!!! THIS IS HUGE. Why? Because when you record a macro, it's quite normal to jump around a bit or get one or two keystrokes wrong. In fact, it's for this reason that I could never use Emacs's macro facility and failed to just 'get it'. In Vim, however, you can open a scratch buffer and hit "<macro_letter>p – that's double quote, the macro letter, then p – to paste the contents of the register containing your macro. You'll see your macro's keystrokes – so go ahead and edit them, then use "<macro_letter>y<movement> to save your edits back into the register. You can now execute the macro with @<macro_letter> as if it had been recorded that way in the first place.
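In command form, the whole edit-a-macro cycle (using register a as the example):

```vim
" 1. Paste the recorded keystrokes into a scratch buffer:
"ap
" 2. Fix the stray keystrokes right there in the buffer.
" 3. Yank the corrected line back into the same register
"    (from column 0 to end of line, without the trailing newline):
0"ay$
" 4. Replay as usual:
@a
```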

Another obvious tip – you can execute the contents of any register as if it were a macro with @<register>. Not sure when that might be helpful – but knowing it's possible is good.


Now that I feel quite comfy with Vim, over the weekend I needed to quickly edit a config file in my Ubuntu 10.10 Virtualbox machine. Instead of GVim, I opened the file in console Vim. As I hit i to get into insert mode, a bunch of weird character boxes were inserted. That was not good at all 😦 – just when you think you're comfortable with something, it does something totally weird. Backspace was wonky too (same weird characters). For some reason that I fail to understand, Linux makes proper backspace and delete handling such a pain! In any case, I was in too much of a hurry to bother, went back to editing my file with gVim, and figured it's something I've dealt with enough times to know there'd be something on Google.

Later on, I tried to see what all the fuss was about. Googling around, I found :help :fixdel, and that seemed simple enough. Alas, when I tried it out, it didn't fix the issue at all. Also, I was getting weird characters just by pressing i to enter insert mode – and the Vim wiki page didn't have anything about that. Neither did Google turn up anything that seemed related.

So early this morning, on a whim, I read up a little on Vim's terminal handling. I have the following in my .vimrc:

set t_Co=256

Maybe it was the colour escape codes that were coming through – so I checked :echo &term, which returned xterm under gnome-terminal and builtin_gui under gVim. So I've put the following bit in my .vimrc and it seems to have fixed things nicely: