I received a couple of emails recently asking how to implement an XML-RPC service in an ASP.NET MVC application. In case anyone is interested this is how to do it (this is an expanded version of an earlier post).
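The implementation code isn't reproduced here, but the essence of the approach (using the XML-RPC.NET library; the service and method names below are illustrative) is a service class that acts as an HTTP handler:

```csharp
using CookComputing.XmlRpc;

// XML-RPC.NET's XmlRpcService base class implements IHttpHandler,
// so the service can be registered as a handler in web.config.
// The handler's path also needs excluding from MVC routing, e.g.
// routes.IgnoreRoute("api/{*pathInfo}") in Global.asax.
public class StateNameService : XmlRpcService
{
    [XmlRpcMethod("examples.getStateName")]
    public string GetStateName(int stateNumber)
    {
        // Illustrative lookup only
        return stateNumber == 41 ? "South Dakota" : "Unknown";
    }
}
```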

Check that everything is working by pointing your browser at the URL for the handler, which in this case is something like http://localhost:33821/api/statename when running from Visual Studio. You should then see an automatically generated help page for the service. If this looks OK, point your XML-RPC client at the service and start making calls.

Unfortunately it's difficult to mock DateTime.Now because it's a static property, which leads to more complicated solutions such as injecting an abstract dependency into objects, or using a C# lambda as in Ayende's SystemTime class:
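The code from the original post isn't shown here; the SystemTime idea is roughly this (a sketch based on Ayende's post):

```csharp
using System;

public static class SystemTime
{
    // Production code reads SystemTime.Now() instead of DateTime.Now;
    // a test overwrites the delegate with a fixed value.
    public static Func<DateTime> Now = () => DateTime.Now;
}

// In a test:
// SystemTime.Now = () => new DateTime(2000, 1, 1);
```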

This works, sort of, but isn't a general solution to this type of problem.

Visual Studio 11 Fakes

Some of the comments mention TypeMock Isolator and the Moles project from Microsoft, and it so happens the Visual Studio 11 beta reveals that Moles has been productized into Visual Studio as the Fakes Framework. This can inject two types of dummy implementation into unit tests: stub types for interfaces and overridable methods, and shim types for static and non-overridable methods:

Stub types: Stub types make it easy to test code that consumes interfaces or non-sealed classes with overridable methods. A stub of the type T provides a default implementation of each virtual member of T, that is, any non-sealed virtual or abstract method, property, or event. The default behavior can be dynamically customized for each member by attaching a delegate to a corresponding property of the stub. A stub is realized by a distinct type which is generated by the Fakes Framework. As a result, all stubs are strongly typed.

Although stub types can be generated for interfaces and non-sealed classes with overridable methods, they cannot be used for static or non-overridable methods. To address these cases, the Fakes Framework also generates shim types.

Shim types: Shim types allow detouring of hard-coded dependencies on static or non-overridable methods. A shim of type T can provide an alternative implementation for each non-abstract member of T. The Fakes Framework will redirect method calls to members of T to the alternative shim implementation. The shim types rely on runtime code rewriting that is provided by a custom profiler.

Delegates: Both stub types and shim types allow you to use delegates to dynamically customize the behavior of individual stub members.

Faking DateTime

To test the DateTime code, create a unit test project and right click on one of the referenced assemblies in Solution Explorer. This displays a context menu which has an "Add Fakes Assembly" option. Select this and two more referenced assemblies are automatically added to the project:

Microsoft.QualityTools.Testing.Fakes

Microsoft.VisualStudio.QualityTools.UnitTestFramework.10.0.0.0.Fakes

Visual Studio will automatically generate a file called Microsoft.VisualStudio.QualityTools.UnitTestFramework.fakes in a directory in the project called Fakes. This XML file is used to configure the assembly for which fakes are generated and the namespaces and types that are included. We want to generate a shim type for DateTime so we can change the file to specify the mscorlib assembly:
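The configuration file isn't shown above; pointed at mscorlib it would look something like this (the exact schema namespace and assembly version are what the Visual Studio 11 beta generated for me, and may differ):

```xml
<Fakes xmlns="http://schemas.microsoft.com/fakes/2011/">
  <Assembly Name="mscorlib" Version="4.0.0.0" />
  <ShimGeneration>
    <!-- Restrict generation to DateTime to keep build times down -->
    <Add FullName="System.DateTime"/>
  </ShimGeneration>
</Fakes>
```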

Building the project results in Visual Studio creating an assembly containing the fake types in the FakesAssemblies directory. We then need to add a reference to this assembly so we can use the fake types in our test.
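A test using the generated shim type then looks something like this (ShimDateTime lives in the generated System.Fakes namespace; the project also needs a reference to Microsoft.QualityTools.Testing.Fakes):

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DateTimeTests
{
    [TestMethod]
    public void Test_Uses_Fixed_Date()
    {
        using (ShimsContext.Create())
        {
            // Detour all calls to DateTime.Now for the duration
            // of the using block.
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2012, 1, 1);

            Assert.AreEqual(new DateTime(2012, 1, 1), DateTime.Now);
        }
    }
}
```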

Calling ShimsContext.Create() within the context of a using statement means that the shim will be de-registered before the test function exits. If this is not done the shim will remain active for subsequent tests and might cause them to run in an unexpected way.

Although this doesn't address the limitations of C# that Karl described in his post (assuming you see them as limitations), at least we're not having to modify the original code using DateTime.Now to make it testable. Once the fakes are configured in a test project writing tests using the fakes is straightforward.

Testing Time

While Karl's post was not really about the problem of mocking DateTime.Now in itself, it is a good example of a non-deterministic test that can cause problems. Martin Fowler blogged about this type of test in his post Eradicating Non-Determinism in Tests, in particular the issues associated with testing time related functionality:

Few things are more non-deterministic than a call to the system clock. Each time you call it, you get a new result, and any tests that depend on it can thus change. Ask for all the todos due in the next hour, and you regularly get a different answer.

The most important thing here is to ensure that you always wrap the system clock with routines that can be replaced with a seeded value for testing. A clock stub can be set to a particular time and frozen at that time, allowing your tests to have complete control over its movements. That way you can synchronize your test data to the values in the seeded clock.

Always wrap the system clock, so it can be easily substituted for testing.

One thing to watch with this, is that eventually your test data might start having problems because it's too old, and you get conflicts with other time based factors in your application. In this case you can move the data, and your clock seeds to new values. When you do this, ensure that this is the only thing you do. That way you can be sure that any tests that fail are due to time-movement in the test data.

Another area where time can be a problem is when you rely on other behaviors from the clock. I once saw a system that generated random keys based on clock values. This system started failing when it was moved to a faster machine that could allocate multiple ids within a single clock tick.

I've heard so many problems due to direct calls to the system clock that I'd argue for finding a way to use code analysis to detect any direct calls to the system clock and failing the build right there. Even a simple regex check might save you a frustrating debugging session after a call at an ungodly hour.

While preparing for job interviews after I arrived in the US I rehearsed questions such as — how would you improve your favourite programming language — what new features would you like in your favourite IDE — and so on. As it happened I never got asked any of these questions but browsing through the Visual Studio UserVoice site I noticed that one of my desired Visual Studio enhancements is under consideration but won't make it into Visual Studio 11. This is Function return value in debugger. I've never liked having to modify code to be able to see the value being returned from a function, for example changing code like this:

string Foo()
{
// ...
return Bar();
}

So that a local variable can be used to watch the return value from the call to Bar():

string Foo()
{
// ...
string ret = Bar();
return ret;
}

One of the site admins added a comment:

For those out there who have experience debugging native C++ or VB6 code, you may have used a feature where function return values are provided for you in the Autos window. Unfortunately, this functionality does not exist for managed code. While you can work around this issue by assigning the return values to a local variable, this is not as convenient because it requires modifying your code.

In managed code, it’s a lot trickier to determine the return value of a function you’ve stepped over. We realized that we couldn’t do the right thing consistently here and so we removed the feature rather than give you incorrect results in the debugger. However, we want to bring this back for you and our CLR and Debugger teams are looking at a number of potential solutions to this problem. Unfortunately this will not be part of Visual Studio 11.

Oh well, back to using local variables for the time being, even if they result in code review comments such as "Remove unnecessary variable".

I've come across this problem with .NET code. Often a single exception type will cover several different error conditions and so when writing the corresponding unit tests it's tempting to assert on the exception's Message property. Of course this is bad because it assumes the text of the message won't be changed.

NUnit

NUnit encourages the checking of exception messages when using the ExpectedException attribute, for example
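The example isn't shown above, but with NUnit 2.x it looks something like this (the Calculator class, exception type and message are all illustrative):

```csharp
using System;
using NUnit.Framework;

[Test]
[ExpectedException(typeof(ArgumentException),
    ExpectedMessage = "Value must be positive")]
public void Add_ThrowsOnNegativeValue()
{
    // The test passes only if the thrown exception's Message
    // matches the string above - a fragile coupling.
    new Calculator().Add(-1);
}
```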

It's better to have some way of specifying an error code which can be checked independently of the message, for example say we have an exception base class supporting an error code from which we derive custom exception classes:
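The classes from the original post aren't shown; the idea is along these lines (all names hypothetical):

```csharp
using System;

public abstract class ExceptionBase : Exception
{
    protected ExceptionBase(int errorCode, string message)
        : base(message)
    {
        ErrorCode = errorCode;
    }

    // The error code identifies the condition independently
    // of the (changeable) message text.
    public int ErrorCode { get; private set; }
}

public class WidgetException : ExceptionBase
{
    public const int InvalidSize = 1;

    public WidgetException(int errorCode, string message)
        : base(errorCode, message)
    {
    }
}
```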

Visual Studio

Visual Studio does the right thing and doesn't have a built-in way of checking the exception message. You can even use your own attributes derived from its ExpectedExceptionBaseAttribute class. This allows us to implement an attribute which can be used to check the error code in exception classes derived from the above ExceptionBase class:
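The attribute from the original post isn't reproduced here; a sketch of the idea, overriding Verify() in MSTest's ExpectedExceptionBaseAttribute:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public sealed class ExpectedErrorCodeAttribute : ExpectedExceptionBaseAttribute
{
    private readonly Type exceptionType;
    private readonly int errorCode;

    public ExpectedErrorCodeAttribute(Type exceptionType, int errorCode)
    {
        this.exceptionType = exceptionType;
        this.errorCode = errorCode;
    }

    // Called by the test framework with the exception thrown
    // by the test method.
    protected override void Verify(Exception exception)
    {
        Assert.IsInstanceOfType(exception, exceptionType);
        Assert.AreEqual(errorCode, ((ExceptionBase)exception).ErrorCode);
    }
}
```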

Unfortunately, although this works fine when Add is invoked explicitly, it fails to compile when attempting to use it in a collection initializer. The compiler only recognizes member functions called Add, not extension methods. C# PM Alex Turner wrote on
Connect:

You're right that the spec is ambiguous here! There was no explicit decision in C# 3.0 to prevent this from working; it was simply an implementation limitation we accepted when we discovered it late in the product cycle. We see no design reason not to add this now, but unfortunately, we are starting to lock down on our feature set and won't be able to get to this feature for this release. We'll definitely keep this in mind when planning for our next release!

...I've added a vote for collection initializers binding to Add extension methods to the OneNote notebook we use internally to track C# compiler and language requests. We can't promise if or when we'll get to this feature, but we'll definitely keep it in mind during planning for the next release!

I tried this with the .NET 4.5 Developer Preview and it still fails to build, but of course this may change before the final release.
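To make the limitation concrete, a sketch of the failing case (Stack&lt;int&gt; implements IEnumerable but has Push rather than Add, making it a natural target for an Add extension method):

```csharp
using System.Collections.Generic;

static class StackExtensions
{
    public static void Add(this Stack<int> stack, int value)
    {
        stack.Push(value);
    }
}

// Explicit invocation of the extension method compiles fine:
var a = new Stack<int>();
a.Add(1);

// But the collection initializer form does not - the compiler
// reports that Stack<int> has no accessible Add member:
// var b = new Stack<int> { 1, 2, 3 };
```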

On the other hand, VB does support the use of extension methods called Add in collection initializers but its syntax for lambda expressions rather misses the point of the exercise:

While thinking about which features I would like added to C# I came across an old post by Eric Lippert —
In Foof We Trust: A Dialogue — in which he presents an imaginary dialog between himself and a C# user who would like an operator or operators similar to typeof() but which would take the name of a method, field, or property instead of a type name. I mentioned something similar in old posts
here and
here. This feature would certain reduce the need to hard-code method/property names or use lambda expressions, for example in implementations of INotifyPropertyChanged.PropertyChanged, and would help refactoring tools. It was interesting to read:

I agree, that would be a lovely sugar. This idea has been coming up during the C# design meeting for almost a decade now. We call the proposed operator “infoof”, since it would return arbitrary metadata info. We whimsically insist that it be pronounced “in-foof”, not “info-of”.

He goes on to describe that there are design issues which make this much more costly to implement and test than it appears at first sight and that there are always budgetary constraints on which new features his team can deliver. He sums up:

It’s an awesome feature that pretty much everyone involved in the design process wishes we could do, but there are good practical reasons why we choose not to. If there comes a day when designing it and implementing it is the best way we could spend our limited budget, we’ll do it. Until then, use Reflection.

Looks like that one will remain on the wishlist but it turns out that one other feature I would like has already been implemented for some time now: collection initializers for dictionaries. For example, you can write this:
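The missing example would be along these lines (Dictionary&lt;TKey,TValue&gt; implements IEnumerable and has an Add(key, value) method, so each braced pair maps to an Add call):

```csharp
using System.Collections.Generic;

var stateNames = new Dictionary<string, string>
{
    { "CA", "California" },
    { "WA", "Washington" },
    { "OR", "Oregon" }
};
```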

This is a particular case of using a collection initializer which turns out to be a fairly complicated, not to say contrived, language feature. If your type implements IEnumerable — presumably a sanity check that it represents some sort of collection — and has one or more methods called Add(), the compiler will map the collection initializer expression onto calls to the Add() function(s). Mads Torgersen wrote about this in 2006: What is a collection?

I just came across an interesting exercise in analysing the design of product packaging. Design consultancy
Antrepo took several well-known brands and progressively stripped away the packaging detail on each of them. Along with several of the people commenting on the post I mostly prefer the third versions of each product. These are not as busy as the first and second but still retain something of the brand identity. They might be too simple in the long run though — perhaps the richness of the original versions is required to retain interest when you are exposed to the packaging over and over again. Or maybe I'm simply atypical of the target market.

ArraySegment

I was looking at an algorithm which involves processing segments of an array recursively and I thought the code would be neater if instead of passing the array plus the offset and length of each segment I could use an array data type which provides a view onto a segment of an array. The ArraySegment type appeared in search results and its name sounded promising. I'd not heard of it before but it's been around since .NET 2.0 and is used in various types including Socket and LogRecordSequence. Unfortunately it is essentially just a way of specifying a segment and doesn't provide any methods to access the data within the segment:
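For example, the segment only exposes Array, Offset and Count properties (at least as of .NET 4), so reading the data means indexing back into the original array:

```csharp
using System;

int[] numbers = { 1, 2, 3, 4, 5 };
var segment = new ArraySegment<int>(numbers, 1, 3);

// No indexer or enumerator on ArraySegment itself;
// you have to go back through the underlying array:
for (int i = segment.Offset; i < segment.Offset + segment.Count; i++)
{
    Console.WriteLine(segment.Array[i]);  // 2, 3, 4
}
```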

It also doesn't have a constructor which takes an ArraySegment, making it awkward to use for recursive algorithms, but it's a ready-made class if you need to specify segments of an array without making copies of the array, as in this example.

SubArray

At first I thought it would be nice if SubArray was an array type but I quickly realized you cannot derive from System.Array. According to MSDN:

The Array class is the base class for language implementations that support arrays. However, only the system and compilers can derive explicitly from the Array class. Users should employ the array constructs provided by the language.

So it's not possible to create an array type which represents a sub-array. The solution was to implement a type which derives from IList<T>. This provides methods for indexing and enumerating the sub-array. The Java interface List has a method called subList which provides some prior art for this:

Returns a view of the portion of this list between the specified fromIndex, inclusive, and toIndex, exclusive. (If fromIndex and toIndex are equal, the returned list is empty.) The returned list is backed by this list, so non-structural changes in the returned list are reflected in this list, and vice-versa. The returned list supports all of the optional list operations supported by this list...

...The semantics of the list returned by this method become undefined if the backing list (i.e., this list) is structurally modified in any way other than via the returned list. (Structural modifications are those that change the size of this list, or otherwise perturb it in such a fashion that iterations in progress may yield incorrect results.)

In the case of the new SubArray type we'll assume the underlying collection is an array, i.e. of fixed length, so the methods of IList<T> which change the length of the list will throw NotSupportedException. We will also allow an instance of a SubArray to be created as an offset into an existing SubArray (with the Offset property of the "sub-SubArray" referring to the original array and not to its parent SubArray). Finally, as with ArraySegment, the SubArray type will be a struct. I threw in some extension methods to make it easier to use and came up with this:
SubArray.cs
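The full implementation is in the linked file; a condensed sketch of the shape of the type (member names follow the description above, details may differ from the real code):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public struct SubArray<T> : IList<T>
{
    private readonly T[] array;
    private readonly int offset;
    private readonly int count;

    public SubArray(T[] array, int offset, int count)
    {
        this.array = array;
        this.offset = offset;
        this.count = count;
    }

    // A "sub-SubArray" refers back to the original array,
    // not to its parent SubArray.
    public SubArray(SubArray<T> parent, int offset, int count)
        : this(parent.array, parent.offset + offset, count)
    {
    }

    public int Offset { get { return offset; } }
    public int Count { get { return count; } }
    public bool IsReadOnly { get { return false; } }

    public T this[int index]
    {
        get { return array[offset + index]; }
        set { array[offset + index] = value; }
    }

    public int IndexOf(T item)
    {
        int i = Array.IndexOf(array, item, offset, count);
        return i < 0 ? -1 : i - offset;
    }

    public bool Contains(T item) { return IndexOf(item) >= 0; }

    public void CopyTo(T[] dest, int destIndex)
    {
        Array.Copy(array, offset, dest, destIndex, count);
    }

    public IEnumerator<T> GetEnumerator()
    {
        for (int i = 0; i < count; i++)
            yield return array[offset + i];
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

    // The underlying array is fixed-length, so the IList<T>
    // members which change the length throw NotSupportedException.
    public void Add(T item) { throw new NotSupportedException(); }
    public void Insert(int index, T item) { throw new NotSupportedException(); }
    public bool Remove(T item) { throw new NotSupportedException(); }
    public void RemoveAt(int index) { throw new NotSupportedException(); }
    public void Clear() { throw new NotSupportedException(); }
}
```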

Example Usage

Reconstruct a binary tree from arrays containing the inorder and preorder traversal of the tree (assuming no duplicates).
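A sketch of how that can look with SubArray (the TreeNode class and the exact constructor signatures are illustrative): the first preorder item is the root, and its position in the inorder sequence splits the remaining items into left and right subtrees, each recursed on as a sub-view without copying.

```csharp
// Illustrative node type
class TreeNode
{
    public char Value;
    public TreeNode Left, Right;
}

static TreeNode Build(SubArray<char> preorder, SubArray<char> inorder)
{
    if (preorder.Count == 0)
        return null;

    char root = preorder[0];
    int i = inorder.IndexOf(root);  // size of the left subtree

    return new TreeNode
    {
        Value = root,
        Left = Build(new SubArray<char>(preorder, 1, i),
                     new SubArray<char>(inorder, 0, i)),
        Right = Build(new SubArray<char>(preorder, i + 1, preorder.Count - i - 1),
                      new SubArray<char>(inorder, i + 1, inorder.Count - i - 1))
    };
}
```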

Brian's post is a good overview of how F# type inference works. He illustrates how type inference in F# gives the language an edge over C#:

Though type inference is “just” syntax sugar, it really can matter; there are cases where you’d never write cool functional programming code in C# because you get completely swamped under by the type annotations. As an example see here; one of the C# functions in that example is this monstrosity:

I've had discussions about the use of var in C# where people say that you need the type annotations to be able to understand code when reading it, but I'm not convinced. With type inference it seems like you're seeing the code at a higher level, making it easier to understand without so much clutter from implementation details. That said, I haven't worked on a large-scale project using a language with type inference, so I can't really say for sure one way or the other.

As I sort out things for the move to the US I'm discovering various artefacts from my career as a software developer. These have been stored unseen in boxes in the loft for many years so it is quite a nostalgia trip to see them again. For example, I came across a printed README file for FlexDump, a hexadecimal record viewer/editor I wrote as my first non-trivial Windows application:

FlexDump is a programmer's tool for working on files containing fixed length records. Unlike a typical hex editor where all offsets are from the beginning of the file, FlexDump displays a single record at a time with offsets calculated from the beginning of each record. Thus no more does the programmer have to keep converting field offsets and record numbers into an offset from the start of the file.

This was a good example of writing some software to scratch a personal itch[1] — at work then we were using C-ISAM which stores its data in fixed length records and I needed a way of checking the raw data. At some point we moved to an open-source implementation of C-ISAM called D-ISAM[2] and I spent some time hacking on this to clear up some of its bugs, a useful experience in learning the value of open software. Implementing FlexDump was also a good way of learning an up-and-coming technology — this was the early 1990s when Windows 3 was just beginning to take off.

The README contains a screen snapshot (though I now notice that the file being viewed didn't contain fixed length records, so wasn't a good example; I should have spotted that at the time):

I developed FlexDump on a 386SX[3] machine which cost me around £2000, not much performance for a lot of money in those days. The README mentions the performance:

FlexDump is written in C++ using Borland Turbo C++ version 3.0. However, it does not use Object Windows and could be ported to Microsoft C++ 7.0 without any problems. Turbo C++ has been used because of the speed of its development environment, necessary because of the rather slow hardware used — a 16 MHz 386SX with 2M memory.

It's difficult to imagine using a machine with only 2Mbyte of memory. I think it also only had a 20Mbyte hard disk.

The README also mentions that FlexDump was going to be released as shareware but I demonstrated it in an interview at Uniplex and got a job as a Windows developer working on the onGo Office project, and so moved on to better things.

[1] As in the first of Eric Raymond's guidelines for creating good open source software, described in his essay The Cathedral and the Bazaar:

Every good work of software starts by scratching a developer's personal itch.

[2] D-ISAM seems to have survived in this product, via at least one rewrite.

[3] The 386SX had a 32-bit internal architecture but used a 16-bit data bus to reduce the cost of the circuit board.

I'm working my way through some of the more interesting Build conference sessions and this morning I watched the Future directions for C# and Visual Basic talk by Anders Hejlsberg. He's a very good presenter and the demos all worked unlike those in some of the other sessions I've watched. The section on the Roslyn "compiler as a service" project had some very cool demos involving smart refactoring, writing C# code via an interactive window, complete with intellisense, and pasting VB code into a C# file and seeing the code automatically transformed to C#. Well worth watching as a glimpse of the possibilities that will be opened up for IDE tools in the future.

The section on asynchronous programming was a recap of previous talks over the last year but it inspired me to write my first Metro app to investigate how the new async support in C# 5 will make it easier to make multiple asynchronous calls. The scenario I came across in a recent Silverlight project was having to display a busy indicator (now provided by the Windows Runtime ProgressRing class) while asynchronous calls are being made and always ensure the busy indicator was switched off when the calls finished, regardless of whether one of them failed. The pattern used to implement this resulted in code with something like the following (simplified) structure:

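The original snippet isn't shown here; the shape of the pre-async/await pattern was roughly this (the service, its Begin/End method pairs and DisplayOrders are all illustrative names):

```csharp
// Nested callbacks, with the busy indicator switched off by hand
// in every completion and error path.
busyIndicator.IsActive = true;
service.BeginGetCustomer(ar =>
{
    try
    {
        var customer = service.EndGetCustomer(ar);
        service.BeginGetOrders(customer.Id, ar2 =>
        {
            try
            {
                var orders = service.EndGetOrders(ar2);
                Dispatcher.BeginInvoke(() =>
                {
                    DisplayOrders(orders);
                    busyIndicator.IsActive = false;
                });
            }
            catch (Exception)
            {
                // Easy to forget in one of these paths...
                Dispatcher.BeginInvoke(() => busyIndicator.IsActive = false);
            }
        }, null);
    }
    catch (Exception)
    {
        Dispatcher.BeginInvoke(() => busyIndicator.IsActive = false);
    }
}, null);
```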
The code is ugly and error-prone; for example, a bug which occurred more than once was omitting to switch off the busy indicator when an exception was thrown. The async support in C# 5 transforms this into something much simpler (assuming C# 5-style async versions of the remote calls are available):
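The async/await version isn't reproduced above, but the gist is this (again with illustrative service method names):

```csharp
busyIndicator.IsActive = true;
try
{
    var customer = await service.GetCustomerAsync();
    var orders = await service.GetOrdersAsync(customer.Id);
    DisplayOrders(orders);
}
finally
{
    // Runs whether or not the awaited calls throw, so the busy
    // indicator can never be left spinning.
    busyIndicator.IsActive = false;
}
```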

I noticed a couple of posts recently which discussed the activity of blogging — by Gabriel Weinberg on his
blog, and Rick on
Flip Chart Fairy Tales.
They both mention a key benefit of blogging — that it forces you to understand what you're writing about.

Gabriel:

Blogging forces you to write down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worth it.

You have a lot of opinions. I'm sure some of them you hold strongly. Pick one and write it up in a post -- I'm sure your opinion will change somewhat, or at least become more nuanced.

When you move from your head to "paper," a lot of the hand-waveyness goes away and you are left to really defend your position to yourself.

Rick:

Blogging forces you to put some effort into understanding your material and constructing a reasoned argument. Most bloggers, even the ones who irritate the hell out of me, usually have something interesting and thought-provoking to say, some of the time. The fact that we have to put some thought into our posts acts as a brake on our more idiotic tendencies

I find that even just thinking about writing a post engages me more with the subject matter. In fact I think about writing considerably more posts than I actually write. I know… but I still gain a lot from it. I particularly like it when I start researching something and it turns out to be more interesting than I expected. For example, my last post on
Arrays and Enumerable.Last()
seemed to be about a pretty trivial topic, almost not worth writing about, but when doing the research I discovered an
interesting post
from 2004 about the design of the .NET Framework which had a direct bearing on what I was writing about.

Also, I find that working in an Agile development environment with not a huge amount of design documentation my technical writing skills get a little rusty and some blogging every now and then helps to sharpen them up.

Finally, from a practical point of view, I recently discovered something else about blogging — it helps to have a blogging environment with as little friction as possible (witness the burst of posts here recently). I rewrote my blogging engine as an ASP.NET MVC3 application and in the process added two features which make it considerably easier to write posts. First, I added support for Markdown[1], which is so much better than hand-crafting HTML, which I'd been doing ever since I started in 2001; and second, I implemented accurate preview, which makes proof-reading easier and reduces the risk of typos and other mistakes slipping through to publication (the preview can also be easily copied and pasted into an email in its exact intended final format if I want to give it to someone for review).

Code which determines the index of the last item in an array like this has always irritated me a little, particularly when doing complex array manipulation:

object lastItem = myArray[myArray.Length - 1];

It's obvious enough but it would be nice to have a cleaner way of getting the index of the last item or the last item itself. The latter has a good solution, using the Linq Enumerable.Last() extension method:

object lastItem = myArray.Last();

I initially thought this could be sub-optimal because the implementation might traverse the whole array but according to Ed Maurer at Microsoft, in a
reply on a Connect thread, Last() is optimized for source sequences which implement IList<T>:

Thanks for your investigation of the performance of Enumerable.Last(). I've inspected the implementation, and it has an optimization to deal with cases in which the source sequence implements IList<T> like your array case - cast to IList<T> and use the indexer method. I believe the implementation employs the most practical optimization available to us, and we won't invest further to improve the performance of this method. Thanks again for your comments.

The Mono implementation of Last() applies this optimization (and also illustrates the extra work involved when the optimization doesn't apply and the whole sequence has to be enumerated):
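The original post quoted the Mono source; a sketch along those lines (not the verbatim Mono code):

```csharp
using System;
using System.Collections.Generic;

public static TSource Last<TSource>(this IEnumerable<TSource> source)
{
    if (source == null)
        throw new ArgumentNullException("source");

    // Fast path: use Count and the indexer rather than
    // walking the whole sequence.
    var list = source as IList<TSource>;
    if (list != null)
    {
        if (list.Count == 0)
            throw new InvalidOperationException("Sequence contains no elements");
        return list[list.Count - 1];
    }

    // Slow path: enumerate to the end.
    bool found = false;
    TSource last = default(TSource);
    foreach (var item in source)
    {
        last = item;
        found = true;
    }
    if (!found)
        throw new InvalidOperationException("Sequence contains no elements");
    return last;
}
```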

Back in 2004 Brian Grunkemeyer
blogged
about what makes this possible:

When we were designing our generic collections classes, one of the things that bothered me was how to write a generic algorithm that would work on both arrays and collections. To drive generic programming, of course we must make arrays and generic collections as seamless as possible. It felt that there should be a simple solution to this problem that meant you shouldn't have to write the same code twice, once taking an IList<T> and again taking a T[]. The solution that dawned on me was that arrays needed to implement our generic IList. We made arrays in V1 implement the non-generic IList, which was rather simple due to the lack of strong typing with IList and our base class for all arrays (System.Array). What we needed was to do the same thing in a strongly typed way for IList<T>.

There were some restrictions here though - we didn't want to support multidimensional arrays since IList<T> only provides single dimensional accesses. Also, arrays with non-zero lower bounds are rather strange, and probably wouldn't mesh well with IList<T>, where most people may iterate from 0 to the return from the Count property on that IList. So, instead of making System.Array implement IList<T>, we made T[] implement IList<T>. Here, T[] means a single dimensional array with 0 as its lower bound (often called an SZArray internally, but I think Brad wanted to promote the term "vector" publically at one point in time), and the element type is T. So Int32[] implements IList<Int32>, and String[] implements IList<String>.

He goes on to describe how implementing this was decidedly non-trivial.

Finally going back to the first issue, now to determine the index of the last item in any array, it's possible to use Array.GetUpperBound():

int idx = myArray.GetUpperBound(0);

But this is a bit ugly because it's relying on the fact that an array type of T[] is a special case of System.Array. Perhaps it's better to use an extension method:
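The extension method isn't shown in the original; one possible shape (the name is illustrative):

```csharp
using System;

public static class ArrayExtensions
{
    // Unlike myArray.Length - 1, which silently yields -1 for an
    // empty array, this throws for a zero-length array.
    public static int LastIndex<T>(this T[] array)
    {
        if (array.Length == 0)
            throw new InvalidOperationException("Array is empty");
        return array.Length - 1;
    }
}

// int idx = myArray.LastIndex();
```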

It's slower than using Length - 1 but potentially safer because calling it on a zero-length array will result in InvalidOperationException being thrown. And it's not quite as succinct as Perl's $#array syntax but that's another story.

Earlier today I noticed Miguel de Icaza
(@migueldeicaza)
was tweeting about the private access modifier in C#, including:

I really should make Mono's C# compiler warn every time someone puts "private" in members of a class. One way of teaching the language.

and

If you can't memorize the trivial c# visibility rules, you can't be trusted to write c# with lambdas, oop, generics, iterators and async

and the knockout blow, the WWJD argument:

Steve would not have approved that redundant atrocity that is 'private' had he designed c#

It started me thinking about the use of private. What does it actually give you? One way of looking at this is to consider what harmful effects there are if you don't use it, but I can't think of any. It's the default, so nobody is going to be able to inadvertently access your private-by-default class members from outside the class. If, unaware of the default, they try to do this, the compiler will complain.

You say ugly, I say explicit :) I understand your POV, but I also see the benefits of showing, "I really made this choice."

I disagree about it being more clear and I don't see why you should have to demonstrate you thought about the choice of making the member private. Having private as the default means you don't have to think about it — private is the safe default as I mentioned above.

My conclusion is that using private just gives us a warm fuzzy feeling that we're writing better code but in fact doesn't make the slightest difference, and so, applying Occam's Razor, we shouldn't use private.