Although I don’t dress up in a fancy black suit and hunt criminals by night, the holiday season means living a double life for me. Most of my friends study in Vienna or Graz and only visit during the holidays, so whenever they are all in town I have to maintain a student life during the night while still working during the day. That usually means getting up at 9am while staying up till 5am.

Believe me, when 6 people won’t take no for an answer and want to play poker at your place, you’d better restock on coffee and make sure you eat light so you don’t ruin your stomach completely. (I’m starting to regret having built my own poker table.)

And still, it’s the holidays, so trying to maintain an 8-hour-per-day pace is almost impossible. Between family obligations at various Christmas celebrations and the usual shopping madness, there isn’t really enough room to focus on something long enough to actually finish it properly.

So today I came back to the office and started filling in the holes I had left in my application over the holidays. Working my way from one //TODO: statement to the next, revisiting the old code, I noticed one thing: code quality doesn’t matter.

When I write code I can’t forget it afterwards. I mean, I suck at remembering syntax or class names (man, I love Google for bringing back my memories over and over again), but if I feel a solution isn’t elegant enough, or that a module should be restructured to make more sense, I’ll think about it whenever I’m not occupied and eventually come up with something better. What would have taken me hours to get right the first time was fixed in a matter of minutes after I had time to think about it. This leads to an interesting conclusion: nothing I write today will actually matter next week. As long as I constantly rethink and rework my code, I’ll end up with high-quality code no matter how bad it was when I first wrote it.

So the most important thing when writing code the first time is to not let implementation details “leak” into other parts of your system, so that reworking one part won’t affect the others. Also, writing tests for one-line methods may seem dumb and repetitive, but once you start juggling code around while a release date is coming at you at alarming speed, those one-line tests will ensure that your app won’t blow up once deployed.

The issue here is that NHibernate maps all string values to nvarchar(255) by default, so inserting something bigger into such a field causes this nasty SQL error. My mapping declaration looked like this:

<property name="Comments" />

After some searching I found Ayende’s post on NHibernate and large text fields gotchas that almost solved the issue, except for one thing: I didn’t know where to put the sql-type attribute. It turns out it’s defined in chapter 15 of the NHibernate documentation (while mapping files are chapter 5). The sql-type="NTEXT" attribute can only reside on a <column> node beneath the <property> node. So the correct mapping looks like this:
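The original listing didn’t survive, but based on the description above (sql-type on a nested <column> node, plus the StringClob type Ayende’s post recommends for long strings) the mapping should be along these lines — the column name is my assumption, matching the property:

```xml
<property name="Comments" type="StringClob">
  <!-- sql-type must sit on the column node, not on the property itself -->
  <column name="Comments" sql-type="NTEXT" />
</property>
```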

I got introduced to LINQ through the famous posts by ScottGu on LINQ to SQL and always thought of LINQ as some really cool language feature that automagically enabled me to write queries within .NET.

All of ScottGu’s samples look like actual SQL queries, with real keywords within the language, like:

var result = from s in strings
             where s.StartsWith("d")
             select s;

So, when I was just briefly trying to get stuff done, I used all those keywords as they seemed to fit and didn’t really try to understand the “deeper” concept behind them (they magically generated SQL queries, after all). MS had introduced new keywords into the language, so be it; I looked them up on MSDN and used them as such with the LINQ to SQL DataContext.

Now that I (and apparently Microsoft) have departed from LINQ to SQL, I somehow forgot about LINQ for quite some time, simply because I had no need for in-memory queries. And to be honest, I also never really thought about applying that strange LINQ syntax to objects in memory (I considered the above-mentioned LINQ query more a “C# strongly typed version of SQL” than an in-memory query method).

So, I was quite amazed by what you can actually do with LINQ if you abandon that strange, undiscoverable SQL syntax and simply use method chaining. The above query can be rewritten without any “keyword magic”, with plain method calls, to look like this:

var result = strings.Where(s => s.StartsWith("d"));

The beauty of it? All the LINQ operators are extension methods on the IEnumerable&lt;T&gt; interface, and most of them return an IEnumerable&lt;T&gt; themselves, so you can “chain” those method calls together like this:
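For example, a hypothetical pipeline over the same strings collection (the operators are the standard ones from System.Linq; the data is made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var strings = new List<string> { "dog", "door", "cat", "dish" };

        // Each operator consumes an IEnumerable<T> and returns one,
        // so the calls compose into a single pipeline.
        var result = strings
            .Where(s => s.StartsWith("d"))   // filter
            .OrderBy(s => s.Length)          // sort by length
            .Select(s => s.ToUpper());       // project

        foreach (var s in result)
            Console.WriteLine(s);
    }
}
```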

And now the whole thing started to make sense. I can easily grasp how this is supposed to work, instead of looking in awe at a “SQL query” that magically works. And that’s where I went wrong the first time.

Instead of learning LINQ to Objects first, I got caught up in the database-centric world of LINQ to SQL, which made me think of LINQ as nothing more than a database query tool.

The common way to learn OOP design and best practices is through books on Java, or books on C# that were translated from Java. And since Java and C# work pretty similarly as far as objects are concerned, I never saw delegates put to good use except for events, where they were automagically generated and used by the WinForms designer. I knew delegates as a method signature that had to be present for an event to work (since an event requires a delegate type), but knew little else about their usage and workings.

So, I was quite amazed when I found out how useful delegates really are outside of events. A delegate is, simply put, a “method signature type”: something like an interface for a method (or you could call it a “function pointer” if you’re an old-school C guy). It is essentially a type that represents a method, allowing you to call or pass around any method that matches that signature as an object.

So imagine the following two classes that share a common signature but no supertype or interface:
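The original listing didn’t survive, but judging by the later snippets it was presumably something like this (the method bodies are my stand-ins):

```csharp
using System;

// Two unrelated classes: same Notify() signature,
// but no common supertype or interface.
public class Class1
{
    public void Notify() { Console.WriteLine("Class1 notified"); }
}

public class Class2
{
    public void Notify() { Console.WriteLine("Class2 notified"); }
}
```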

Now imagine you need to keep those Subject classes in a list, loop through them, and call Notify on each of them. No problem if they share a supertype or an interface, and if they don’t, you could easily extract an interface.

But if you don’t control that code (because it’s from a 3rd party etc.), you’re forced to write some sort of adapter class that provides a common interface if you want to call that one method on all of them in a unified way.

Now C# has solved this problem gracefully by treating methods as values: if you omit the parentheses, the method name refers to the method itself instead of calling it. That method reference can then be stored in a variable of a delegate type that matches the original method’s signature.

So I could define a delegate type that can reference methods in different classes, as long as they match its signature:

public delegate void doNotify();

I could then create a variable of type doNotify that references the Notify method of Class1 and call that variable instead of the concrete method on Class1:

doNotify method1 = new Class1().Notify;
method1();

The real beauty of this is that I can also pass that delegate to other methods that know nothing about the type, only about that one method signature. Therefore I could write code like this:
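A sketch of what that could look like, assuming the two classes from above are Class1 and Class2 (NotifyAll is a hypothetical helper, not from the original post):

```csharp
using System;
using System.Collections.Generic;

public static class Notifier
{
    // This method knows nothing about Class1 or Class2 --
    // only that every entry matches the doNotify signature.
    public static void NotifyAll(List<doNotify> notifiers)
    {
        foreach (doNotify notify in notifiers)
            notify();
    }
}

// Usage: mix methods from unrelated classes in one list.
var notifiers = new List<doNotify>
{
    new Class1().Notify,
    new Class2().Notify
};
Notifier.NotifyAll(notifiers);
```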

Where this came in really handy (and where I found out about it) was a pretty simple list UI with a dropdown field that controlled the sorting of the list. It was pretty standard: I had 3 methods that sorted the list (and I know I should have extracted strategy classes, but that seemed too heavy for the task at hand), and depending on the selected item in the dropdown, one of those 3 should get called.

The usual course of action would be to switch on the value of the dropdown, but that would have created a maintenance nightmare in the long term (inserting and handling the value would be somewhat redundant, and I’d have to update 2 places in the code for one future change). So the simplest solution I came up with was to declare a delegate for the sort methods (public delegate void doSort()) and create a class that took a display text and such a delegate and exposed them as properties. Now I could use the DisplayMember property of my dropdown to display the text for each sort function, and when needed I could just call the sort function through its delegate.
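A rough sketch of that approach — all names here (SortOption, sortDropDown, the SortBy* methods) are illustrative, not the original code:

```csharp
public delegate void doSort();

// Pairs a display text with the sort method it triggers.
public class SortOption
{
    public string Text { get; private set; }
    public doSort Sort { get; private set; }

    public SortOption(string text, doSort sort)
    {
        Text = text;
        Sort = sort;
    }
}

// Fill the dropdown once; no switch statement needed anywhere.
sortDropDown.DisplayMember = "Text";
sortDropDown.Items.Add(new SortOption("By name", SortByName));
sortDropDown.Items.Add(new SortOption("By date", SortByDate));
sortDropDown.Items.Add(new SortOption("By amount", SortByAmount));

// On selection change, just invoke the stored delegate.
((SortOption)sortDropDown.SelectedItem).Sort();
```

Adding a fourth sort order is now a single Items.Add line instead of a new dropdown entry plus a new switch case.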

This then leads me to the apparent lack of generic controls in .NET, which makes all this interface work feel awkward and wrong because you’re casting back and forth all the time, but that’s another story.

That all being said, I can’t say I encourage heavy use of delegates, because the concept can be so easily abused. In most cases it’s better to use interfaces and object composition, because they are not only better known but also more explicit (you don’t spot delegate use as easily as an implemented interface). Use them wisely and you have a very powerful tool at hand; overuse them and you will end up with some pretty hard-to-read code.

There are many things I love about Amazon: free shipping, recommendations, customer reviews, good product pictures, an awesome website, gift lists. I could go on and on; all those things make Amazon the #1 shopping site on the internet, and I love buying my stuff there. They offer free shipping on everything above 20€, and all books ship for free too. So whatever you buy, you never think about shipping, you never think about returns or anything. You always know there’s this big-ass company called Amazon that takes customer satisfaction pretty seriously and won’t cause you any trouble with returns. They always ship on time and they have almost everything.

Now, Amazon decided some time ago that “almost” everything isn’t enough and went for really everything. They opened the Amazon platform up through a program called Marketplace, where other vendors can sell their products through the Amazon.com website. And man, do I hate that feature! I never went to Amazon because they had everything; I went there because they had almost everything completely hassle-free. Now, when I’m searching for something, I end up at 3rd-party vendors 90% of the time (it feels like Amazon has almost stopped selling things themselves), vendors with different shipping policies (almost nobody offers free shipping except Amazon) and different return policies. Hell, I’ve seen 1-man shops surface on Amazon that didn’t even have a proper email address (listing a Hotmail address in your company profile does NOT generate trust!).

Last week I decided to order a Canon EOS 450D DSLR and went to Amazon to shop for accessories like memory, batteries etc. Guess what? It took me almost 30 clicks to get to a product that was actually sold by Amazon! When searching for a SanDisk SDHC Extreme III 8GB card, I found it 4 times, from 4 different 3rd-party vendors, but not from Amazon. I gave up after 30 minutes of searching and ordered another SD card, completely pissed and very unsatisfied. And there is no way to turn the Marketplace off; there is a filter that lets you select vendors, but apparently it doesn’t work when selecting Amazon (you still end up with a listing of products sold by 3rd-party vendors, yikes!).

So why did Amazon destroy one of the most pleasant buying experiences of all time? I go to Amazon to buy their products, not some other guy’s products that he’s selling from his basement, getting charged shipping costs and not knowing how returns or complaints will be handled! Are those people insane? If I were into that kind of thing, I’d be shopping at eBay, not at a respected and well-trusted Amazon!

I can only hope Amazon will change their mind on this and create some “Amazon products only” button, so I can enjoy shopping again. Right now it’s more of a quest to filter 30 products by their vendor.

So, I’m a bit under pressure right now. I don’t blog much because I’m bound to an immutable release date of January 7th. If I don’t get the software out by January 7th, the customer will have to wait another year before he can make the switch from his (pain-in-the-ass) 15-year-old MS-DOS software.

So you see, I’m not just a little bit stressed; I’m looking forward to a Christmas holiday I’ll probably spend locked in some room, finishing the software day and night. As that deadline approaches, I’ve noticed some things in my current development that suddenly don’t seem to fit any more due to time constraints.

I know my business layer needs a major refactoring (a day or so) to keep my domain logic “clean” of clutter. But I also know I can’t spend a day refactoring something that has to ship next week (I want the app user-tested by the 7th). Still, I can think before I code. One thing I see in legacy code all the time is how people stop thinking once they approach deadlines and start mindlessly copy/pasting stuff to make it work somehow.

Here’s one principle you should NEVER forget:

DRY – don’t repeat yourself

Not repeating yourself through simple copy and paste will at least spare you from rewriting the whole thing once a change has to be made. I can retrofit extensive tests and good separation of concerns, but I am doomed once I have to change the same piece of code throughout the whole application.

If I’m so stressed that I have to write crappy code, it’s still my duty to fix it later, and that’s not possible if one business rule (it may be as simple as amount/numberOfRates) is implemented in 20 different places! That leads me to the next point: fixing stuff later requires at least some automated tests (nothing is worse than fixes that have more bugs in them).

Testing isn’t difficult, and it saves you more time than it takes. You may think you’re faster if you just run the app and see if it works, but apps don’t run as fast as unit tests, and once you have to run the app several times to make something work, you would already have saved time by writing a unit test first. And although my tests look ugly and may be incomplete, I have verified that the most important calculations and interactions work (and I can rerun those tests later to verify that stuff still works when I try to clean up the mess).

So, by applying (even incomplete) testing to my app and not violating DRY, I retain the ability to easily extend and change my application in the future. Once I’m done I can easily run NCover and find out what code my tests didn’t cover.

I’m a keyboard addict. I love keyboards and they love me. I’ve never managed to break one in any way, because I’ve never used one long enough to break it. I consider the keyboard a programmer’s most important tool, and that’s why I constantly try to get the best one available.

I guess that’s over; I haven’t bought a new keyboard in over a year now.

My last 3 keyboard purchases have all been the Microsoft Natural Ergonomic Keyboard 4000, so I have one at every workplace I happen to be at. My 2 computers at home have one, my desk at work has one; it’s just that great!

I got introduced to ergonomic keyboards through the Logitech Ergonomic Desktop Freedom Pro (or at least I think that’s what it was called) and used it long enough to really learn how to type properly. It was good compared to what I was using before (10-pound IBM bricks), and it was the first keyboard I actually bought myself (for the ridiculous amount of 280€, as a 16-year-old). Unfortunately Logitech discontinued the ergo series, and Microsoft’s old Natural keyboard was discontinued too, so I switched back to normal keyboards for quite some time (trying out all sorts of fancy keyboards, like the Fingerprint Keyboard etc.) before I found the Microsoft Natural 4000.

At that point I hadn’t used an ergonomic keyboard for some years, so I was hesitant, but I was lured by the price. 40€ is nothing after having spent 100€ on the Logitech G15 (worst keyboard ever), and I was blown away by what I got!

Typing is so convenient on this keyboard, and my wrists feel a lot better after extended periods of work than they did on the (ergonomic nightmare) G15 (we’re talking magnitudes of >300% here). The leather pad I rest my wrists on still feels comfortable and soft even after a year of extensive use.

The keyboard also comes with a tilt attachment that creates a reverse slope. It’s unusable if you want to play games, but for typing it’s very comfortable not to have to bend your wrists to reach the keys.

Also notable is that the keyboard has a standard PgUp/PgDown layout, which once again makes working pretty easy (whoever designed the Dell keyboards should be crucified for the Home/End placement!).

Some shortcomings:

No Next/Previous media keys; only Play/Pause and volume control buttons.