Until recently I thought that was fine; after all, the tests passed every time I ran them.

But what if the passing tests are just a coincidence?
What if I am testing a financial application that only accepts orders between 8am and 6pm on weekdays, and the order date gets initialized to DateTime.Now in my test?

Given the normal working hours in most countries, that code would run just fine for most developers, but when a notorious late-worker like me comes in, the tests start failing for no apparent reason.

Time or place of execution should not have any impact on a test. Given the same code and the same test, the result should always be the same.

So, whenever you initialize a value from your current execution context (time being the prime example here), you create the possibility that the test will break in unexpected ways over time.

So if you really need to fill some DateTime with a value for testing, use a constant (like DateTime.MinValue or DateTime.MaxValue). That way, whenever you re-run the test, all inputs are the same as they were when you wrote it.
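A minimal sketch of the idea, using made-up names (OrderValidator and its business rule are illustrations, not code from any real project): instead of reading DateTime.Now inside the logic, pass the "current" time in as a parameter, so a test can pin it to a constant.

```csharp
using System;

// Hypothetical validator for the weekday/business-hours rule described above.
// Because the time is a parameter, tests control it completely.
public static class OrderValidator
{
    // Orders are only accepted between 8am and 6pm on weekdays.
    public static bool IsWithinBusinessHours(DateTime now)
    {
        bool isWeekday = now.DayOfWeek != DayOfWeek.Saturday
                      && now.DayOfWeek != DayOfWeek.Sunday;
        return isWeekday && now.Hour >= 8 && now.Hour < 18;
    }
}
```

A test can now feed in a fixed Wednesday morning or a fixed Saturday and get the same result on every run, no matter who runs it or when.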

Although I am still actively working for Pixelpoint, going strong on finishing my projects, I also went back to Klagenfurt University to finally finish my computer science bachelor's degree.

The only problem is that (since I am now caught somewhere between work and university) I can't fit any lectures into my schedule. I can barely attend the courses that require attendance, so lecture slides are a rather vital part of my studying right now, and not being able to access them is a real showstopper.

What really bothers me is that a university professor who teaches computer science doesn't get the concept of an open exchange of information and knowledge!

All educational material for courses by the syssec group is accessible through an external website that is password protected, and the username/password combination is only given out during lectures.

Why?

We are talking about slides for a university lecture, by Austrian law a public event that anyone can attend (yes! every person who wishes to can simply sit in there and listen). Anyone who wants to can simply walk into the ÖH office and buy a printed copy of exactly those slides for less than 1€. Heck, everyone can get to those slides anyway, so why are we keeping people out?

I've seen them; there are no state secrets in there. To be honest, I don't even like these slides that much (they were meant to be presented by a professor, mind you). Still, they are a valuable source of information that I believe shouldn't be kept from anyone.

On the other hand, there is O. Univ.-Prof. Dipl.-Ing. Dr. Laszlo Böszörmenyi, who manages to have ALL of his course material publicly (and freely) available to anyone. So I doubt there is any legal reasoning behind locking away the course material, and I demand that the password protection on the lecture slides be removed. A university ought to be a place where knowledge is shared, so why stop at the boundary of a small institution like the University of Klagenfurt?

I have been using generics quite heavily lately, writing decorators for Repository classes that do logging and caching on top of the repository (I'll talk about that another time).

When I implemented an asynchronous cache-clear method I immediately ran into trouble with shared resources like the DB connection, so I figured the whole problem could be solved with a simple lock around the cache fill.
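A minimal sketch of that idea (all names here are made up for illustration; the real decorator is more involved): the refresh builds a new cache under a lock, so a background clear/fill never races other threads over the shared state.

```csharp
using System.Collections.Generic;

// Simplified caching decorator: RefreshCache and Get both take the same
// lock, so a background refresh cannot interleave with readers.
public class CachingRepository
{
    private readonly object cacheLock = new object();
    private Dictionary<int, string> cache = new Dictionary<int, string>();

    public void RefreshCache(IEnumerable<KeyValuePair<int, string>> freshData)
    {
        lock (cacheLock)
        {
            // Build a fresh dictionary and swap it in atomically
            // (in the real code this is where the DB connection is used,
            // which is exactly the resource that must not be shared).
            var newCache = new Dictionary<int, string>();
            foreach (var pair in freshData)
                newCache[pair.Key] = pair.Value;
            cache = newCache;
        }
    }

    public string Get(int id)
    {
        lock (cacheLock)
        {
            string value;
            return cache.TryGetValue(id, out value) ? value : null;
        }
    }
}
```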

I really didn't spend too much time looking at the whole code; most notably, it's not the complete code (the WPF desktop client and the Silverlight client are missing from the repository).

Usually, when confronted with new code, I immediately look at the tests to see what the code is about (since I don't want to build the database on this machine).

Finding the tests is easy; the solution is rather well structured and split into multiple projects to separate concerns. Unfortunately, that's the only good thing about the tests, since there are only 2 classes with tests, both of which I find tragically funny:

This should be a test for the Proxy class, but there are no asserts in there. I mean, if you test something, at least make sure that what you did actually worked. Not getting an exception from your code isn't really a test at all (just wait for the guy who swallows all exceptions with try/catch!).

From what I can judge (and I'm surely in no position to, since my latest code was quite untested too), there isn't one test across the two test projects that actually does something (besides Assert.Inconclusive calls at the end, or no asserts at all), so I wonder why anyone bothered creating those projects.

Also, most code in there uses static Factory classes that I would abandon in favor of dependency injection to facilitate testing.

(You could spare yourself some pain by having two implementations of the Factory class instead of the Dummy branch in every method.)
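To sketch what I mean (all type names here are assumptions for illustration, not the project's real types): rather than an "if dummy" branch inside every factory method, the factory gets an interface with two implementations, one real and one for tests.

```csharp
using System.Collections.Generic;

// Placeholder domain type for the sketch.
public class Order { public int Id; }

public interface IOrderRepository
{
    IList<Order> GetAll();
}

// The factory as an interface: no branching inside the methods.
public interface IDataAccessFactory
{
    IOrderRepository CreateOrderRepository();
}

// The real factory wires up database-backed repositories...
public class SqlDataAccessFactory : IDataAccessFactory
{
    public IOrderRepository CreateOrderRepository()
    {
        // return new SqlOrderRepository(connection); -- omitted in this sketch
        throw new System.NotImplementedException("needs a real database");
    }
}

// ...while the dummy variant can live entirely in the test project.
public class DummyOrderRepository : IOrderRepository
{
    public IList<Order> GetAll()
    {
        return new List<Order> { new Order { Id = 1 } };
    }
}

public class DummyDataAccessFactory : IDataAccessFactory
{
    public IOrderRepository CreateOrderRepository()
    {
        return new DummyOrderRepository();
    }
}
```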

Now, this is hard, I know. Most of the other code I looked at is quite nice: the DataAccessLayer separation is clean, the strict DTO declarations are really cool, and the project structure is a pleasant sight (although I keep missing projects :)). So picking on the tests and the factory feels quite harsh.

But I'm a test and dependency-injection nut, so what matters most to me is what I pick on first. It takes time to come up with good code, and with some refactorings this codebase can really shine (it's well done, after all).

During production nothing changes, but in a unit-test scenario I can pass a fake DataAccessFactory into the static factory and swap out the whole implementation (enabling me to use Rhino.Mocks or whatever mocking framework I like instead of writing TestDummys myself).

This way we can even have the TestDummy class living inside the test assembly instead of littering the production assembly.

As you can see, we now have complete control over the factory during testing, without affecting the rest of the code in any way.
Another side effect is that neither the static DataAccessFactory nor the actual DataAccessFactory implementation needs to change when we make changes to the DummyFactory.
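The setup above can be sketched roughly like this (names and the GetOrders method are assumptions for illustration, not the project's actual API): the static class keeps its public surface, but delegates to a swappable implementation underneath.

```csharp
using System.Collections.Generic;

// What a swappable implementation could look like.
public interface IDataAccessFactory
{
    IList<string> GetOrders();
}

// Production default: in the real project this would talk to the database.
public class DefaultDataAccessFactory : IDataAccessFactory
{
    public IList<string> GetOrders()
    {
        return new List<string>();
    }
}

// A fake that would live in the test assembly.
public class FakeDataAccessFactory : IDataAccessFactory
{
    public IList<string> GetOrders()
    {
        return new List<string> { "test-order" };
    }
}

// The static facade the rest of the code calls; production never touches
// SetImplementation, tests use it to swap in the fake.
public static class DataAccessFactory
{
    private static IDataAccessFactory implementation = new DefaultDataAccessFactory();

    public static void SetImplementation(IDataAccessFactory factory)
    {
        implementation = factory;
    }

    public static IList<string> GetOrders()
    {
        return implementation.GetOrders();
    }
}
```

Callers keep writing DataAccessFactory.GetOrders() everywhere; only the test setup changes.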

Software design is hard. Not so much because it's hard to come up with a design, but because it takes a very long time to really hit the sweet spot where you feel it's good.

Doing this on your own is almost impossible, because you constantly have to switch off your personality and challenge the assumptions you just made while writing something. I ask myself all the time: "Is this module really right here? Should I break it up into smaller modules?" And frankly, I am the wrong person to answer that question, since I made the mistake in the first place. So I constantly force myself into the mindset of another person (be it a user, a tester, etc.) and try to forget what I was just thinking for a minute, to decide whether I'm still on track.

It's like driving alone through unknown terrain: you have to stop all the time and pull out the map to see whether you're still heading in the right direction.

And honestly, it sucks big time. Stopping is always bad. Dropping out of your "zone" for a complete context switch hurts the flow; I feel mentally exhausted very fast, and nothing seems to get done in the long run.

So, on my current project I decided that I can't go on alone. I tried multiple times to get to a good design through lots and lots of whiteboarding, spiking and experimenting. But I never quite nailed it; I always felt about 20% away from the real thing, with a burned bridge in front of me.

So when I called my employer last Sunday I asked one thing: "Do I get some budget to bring in a second pair of eyes to work on this particular problem?" The awesome answer was: "Just do it and spare me the details."

The next day I called Harald Logar, and he agreed to stop by and go through the code with me for a day. I gave him a very brief heads-up on what I was working on (mind you, he's a complete outsider to the project) and what problem I was trying to solve.

When he came in I explained the vision and showed him some tests I had prepared beforehand to demonstrate the "desired" behavior of the system. After that little introduction, we were already implementing like crazy.

It was amazing! Although I was doing most of the typing, Harald was constantly there to challenge my assumptions, answer my questions and throw in his own ideas when necessary.

But what was really a game changer for me that day was that we were not only good, we were faster than I had ever been in the past. We rewrote the complete data access logic of a rather complex system in less than 2 hours (complex meaning dynamic proxies, caching and some non-trivial retrieval logic).

We then spent the rest of the day optimizing the system (performance is critical for this project), and I think we both learned a great deal about the inner workings of .NET collections and their performance characteristics. (We also implemented a very cool cache solution that clears the cache on a background worker thread to avoid downtime.)

So, needless to say I’d do this again any time. Harald was a joy to work with, and I think by the end of the day we were both very proud of what we accomplished.

So, curious as I am, I immediately peeked at the source and found some really cool stuff in there that I may very well use in the future. There are not only things missing from standard LINQ, like list concatenation and generator methods for sequences (great when setting up mock expectations), but also things I had never thought of before that might come in handy, like a Consume method that triggers LINQ execution immediately without consuming memory (you could do that with .ToList(), but that would allocate an IList<T> in memory).
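To illustrate the idea (this is my own sketch of what such an operator looks like, not MoreLinq's actual source), Consume just walks the sequence to force deferred execution, without buffering anything the way ToList() would:

```csharp
using System.Collections.Generic;

public static class ConsumeExtension
{
    // Iterates the sequence purely for its side effects;
    // nothing is stored, so no IList<T> is allocated.
    public static void Consume<T>(this IEnumerable<T> source)
    {
        foreach (T item in source)
        {
            // intentionally empty: we only want the enumeration to happen
        }
    }
}
```

Handy whenever a LINQ pipeline exists only for its side effects (logging, mock expectations) and you just need it to actually run.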

So, don't miss out on the fun in MoreLinq. I'm sure there is something cool in there for everyone. And don't forget to suggest good method names to Jon Skeet if you come up with one.

I recently helped a friend of mine assemble a new kick-ass gaming PC. Once the whole system was put together, he called me the next day saying that the system felt incredibly slow and crashed whenever he started up a 3D game.

I rushed to help, and after some tinkering we found out that the CPU (a Core 2 E8500, 2x3.16 GHz) was operating at almost 80°C. By contrast, my Core 2 E6600 (2x2.4 GHz) reaches 36°C under full load. Since the CPU cooler was spinning, I am quite convinced the problem has something to do with a faulty mainboard or (rather unlikely) a faulty CPU. My initial thought was to try updating the BIOS in case the problem would go away.

So I went to the ASUS download site, selected the appropriate mainboard model (P5Q) and downloaded a zip file containing the newest BIOS. Imagine my excitement when I was presented with a .ROM file inside that zip. Pretty cool, huh? So I went on to download the Afudos BIOS update tool V2.36 that is supposed to install the .ROM BIOS. Started it: sorry, doesn't work on Windows. (WTF?)

Please insert a clean, unformatted disk into A:\ drive and boot the system into DOS mode. In DOS mode, please type in C:\> FORMAT A: /S or click on “My Computer” icon under Windows O/S, right click on drive A:\ and choose “Format”. By using the procedure above, you can create a boot disk without AUTOEXEC.BAT and CONFIG.SYS files.

Drive A:\ ? DOS? Autoexec.bat? Config.sys?

ASUS: Are you out of your mind?

When I needed to flash my Dell XPS M1330 Bios with a new version, I didn’t even have to leave my browser to do so. Some weird ActiveX thingy just started and updated my BIOS revision while I was casually checking my email. And ASUS is really telling me this is the way to go if I want to update my brand new socket 775 P45? C’mon, that’s so 1994 – not even funny any more.

LINQ is by far the most empowering language technology I've seen in years, and in many cases it has really helped me move to a more functional style of programming, enabling clearer syntax and better overall code.

But it also has its pitfalls. Since LINQ attaches a .Count() method to any enumerable, why would you still use IList for read-only collections? It's so damn easy to simply write code like this:
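Something along these lines (an illustrative example of the pattern, not code from any particular project): the parameter is only an IEnumerable, yet we casually ask it for a count.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Report
{
    public static string Summarize(IEnumerable<int> orders)
    {
        // Harmless on a List<int>, but potentially a full O(n)
        // enumeration on a lazy sequence.
        return "orders: " + orders.Count();
    }
}
```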

IEnumerable<T>.Count() would be an O(n) operation if it followed IEnumerable semantics and enumerated all items, since with a plain enumerable the only way to determine the length is to iterate over every item.
And indeed, the implementation (I looked at the Count() method with Reflector) does exactly that:

But since that would always guarantee O(n) execution time and would slow most applications to a crawl (it's just so easy to call .Count() everywhere), Microsoft implemented a little shortcut right before the above code:

That's why calling .Count() doesn't hurt much as long as you are calling it on an IEnumerable that is also an ICollection: all you're doing is a cast and a field read.
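The pattern described above can be sketched like this (my own reconstruction of the idea, not the actual BCL source): try the cheap cast first, and only fall back to enumeration when it fails.

```csharp
using System;
using System.Collections.Generic;

public static class CountSketch
{
    public static int Count<T>(IEnumerable<T> source)
    {
        if (source == null) throw new ArgumentNullException("source");

        // Shortcut: if the sequence is secretly a collection,
        // just read its stored count -- a cast and a field read.
        ICollection<T> collection = source as ICollection<T>;
        if (collection != null)
            return collection.Count;

        // Fallback: the O(n) walk over every element.
        int count = 0;
        using (IEnumerator<T> enumerator = source.GetEnumerator())
            while (enumerator.MoveNext())
                count++;
        return count;
    }
}
```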

That's also why, in my testing, IEnumerable.Count() wasn't much slower than IList.Count: the only thing slowing IEnumerable down was the typecast (I was too lazy to generate data for a non-ICollection IEnumerable with a real O(n) execution time).

Just keep in mind that once you are iterating over a "real" IEnumerable with no collection underneath, you should avoid calling .Count() too often, since it's not just a cast and a read but an iteration over all elements of the list.
Also keep in mind that when working with the extension methods on IEnumerable, you risk performing an O(n) operation, so use them wisely (especially when you don't control the source of your IEnumerable<T>; you could get passed anything).