When you reference the handlers assembly from your test assembly, and that assembly initializes a bus instance while the endpoint you send messages to is a publisher, you can see more messages being sent to the endpoint than you expected. Inspecting them reveals subscription acknowledgement messages, and you wonder why, as you don't want your test project to subscribe to anything at all! This happens because NServiceBus scans all available assemblies for message handlers and, when it finds them, subscribes to the corresponding events, which results in the messages that you see in the (journal) queues.

Luckily this behavior can be disabled by calling .DoNotAutoSubscribe() during bus initialization as shown in the following example:
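A minimal sketch, assuming the NServiceBus v2/v3 fluent configuration API; the serializer and transport calls are illustrative and may differ in your setup:

```csharp
// Sketch: disable auto-subscription during bus initialization so the test
// assembly's handlers do not cause subscription messages to be sent.
// Assumes the NServiceBus v2/v3 fluent configuration API.
var bus = NServiceBus.Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .MsmqTransport()
    .UnicastBus()
        .DoNotAutoSubscribe() // handlers still run, but no subscriptions are sent
        .LoadMessageHandlers()
    .CreateBus()
    .Start();
```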

Queues are no longer created automatically since NServiceBus v3. They are now created during the installation phase, which you usually do not have in web apps or test apps. You can still get the old behavior by triggering the installation during bus creation. Make the following change to trigger this.
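A hedged sketch of what that change looks like; `ForInstallationOn<Windows>` is the installation environment that shipped with NServiceBus v3, but verify against your version:

```csharp
// Sketch: trigger queue installation during bus creation (NServiceBus v3),
// so queues are created even outside a proper installation phase.
var bus = NServiceBus.Configure.With()
    .DefaultBuilder()
    .UnicastBus()
    .CreateBus()
    .Start(() => NServiceBus.Configure.Instance
        .ForInstallationOn<NServiceBus.Installation.Environments.Windows>()
        .Install());
```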

I have been using git-tfs for a couple of weeks now and ran into the issue that using the checkintool results in one TFS changeset that combines all previous commits. This can often be useful, but in my case I would like to push all individual local git commits separately to TFS. This is exactly what rcheckin does, and this way your git commit history is mirrored to TFS. As a reminder:

git tfs checkin

Performs one check-in containing all previous commits, where you can use either the default commit message (a concatenation of all git commit comments) or a new custom comment.

git tfs checkintool

Same as git tfs checkin, but now you can easily choose which changed files to commit via the TFS check-in dialog. It shows the concatenation of all git commit comments as the commit comment, but this can easily be altered in the GUI.

git tfs rcheckin

Checks in each previous git commit to TFS individually, reusing the git comment for each commit.
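A typical session might look like this; the commit messages are made up and the exact behavior depends on your git-tfs version:

```shell
# Make a few local commits, then mirror them to TFS one changeset at a time.
git commit -am "Fix null check in importer"
git commit -am "Add logging to importer"
git tfs rcheckin   # one TFS changeset per git commit, reusing each comment
```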

My intention was: please cache this file for one hour (SetMaxAge), then check whether the file is still valid (SetETagFromFileDependencies), and if it is, cache it again for one hour.

However, it turns out that MVC thinks differently :-). When the item has expired, MVC returns status code 200 even though the client submits a matching If-None-Match header. In such cases the web server is allowed to respond with a 304 and update the item with the new information passed in the response headers.

It could be that I have forgotten something in the code above, but the following is the workaround I created for this situation. It needs a custom ETag, so you cannot make use of SetETagFromFileDependencies. The method basically does an ETag comparison and responds with a 304 if they match. If they do not match, the method returns null, but it has set the above-mentioned headers.
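Since the snippet itself did not survive here, this is a hedged reconstruction of such a helper; only the name EtagFix comes from the text, the rest is an assumption built on the standard ASP.NET MVC HttpCachePolicy API:

```csharp
// Reconstruction sketch of the workaround, inside a System.Web.Mvc.Controller.
// Compares a custom ETag with the client's If-None-Match header and answers
// 304 on a match; otherwise it sets the caching headers and returns null so
// the caller can produce the real (expensive) ActionResult.
protected ActionResult EtagFix(string etag)
{
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetMaxAge(TimeSpan.FromHours(1)); // cache for one hour
    Response.Cache.SetETag(etag);                    // custom ETag, not file-based

    if (Request.Headers["If-None-Match"] == etag)
        return new HttpStatusCodeResult(304);        // not modified: no body needed

    return null;
}
```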

The reason this is cool is that File(Server.MapPath("~/Content/mylargefile.dat")) will ONLY be called when EtagFix returns null. This avoids creating 'expensive' action result implementations that do work they do not have to do.

Today I got the following Subversion message via TortoiseSVN when I wanted to commit some changes.

is not a working copy

It took me a while to figure this out, but it has to do with the fact that I subst the root of the branch to my S: drive. When I went to my user folder (c:\users\ramon\src) and performed the commit from there, it just worked as expected.

The reason for the substitution is that when I compile the sources, the S: drive paths are embedded into the PDB files. When another developer then attaches the debugger to any of the applications, he only needs to have the same substitution to find all the source code files. Besides that, it does not matter which project I work on, or on which drive or folder it lives: the paths stay the same. This is especially useful as I am a console and keyboard junkie.
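The substitution itself is a single Windows command; the path shown is illustrative:

```shell
:: Map the source root to S: so the paths baked into the PDB files are
:: identical on every developer's machine.
subst S: C:\users\ramon\src
```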

It has been a long time since my previous blog post, as nowadays I often tweet my ramblings, but this one does not fit in a tweet.

Sometimes you work with strong-named assemblies, and when you have unit tests that need access to internals, you have to use the InternalsVisibleTo assembly attribute. To discover the public key token I ran "sn.exe -tp project.publickey", which prints the public key (long) and the public key token (short).

Friend assembly reference is invalid. Strong-name signed assemblies must specify a public key in their InternalsVisibleTo declarations.

Then I pasted the long variant into the InternalsVisibleTo attribute and it compiled, even though I had been 100% sure that the short version should work. After some investigation, there appear to be two ways to pass the required strong-name public key information: you can pass either the whole public key or the public key token.

When both assemblies are signed, you need to pass the full PublicKey.
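A sketch of what the attribute then looks like; the assembly name is made up and the key value is truncated, you would paste the full output of sn.exe:

```csharp
// The friend assembly must be named with the FULL public key (not the token).
// "MyProject.Tests" and the key value below are placeholders.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo(
    "MyProject.Tests, PublicKey=0024000004800000940000000602000000240000...")]
```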

Today I was reminded again that it is sometimes necessary to adjust thread pool settings. This time I was testing some possible connection issues, and I needed to open a number of connections simultaneously and use them in parallel.

The test system is a virtual machine with only one core, so the defaults that .NET uses are based on that. I first thought the problems were caused by NUnit, but pretty soon found out that they had to do with the thread pool. When I queried the current values via ThreadPool.GetMinThreads, it told me that the thread pool minimum was just one thread. Even after raising that with ThreadPool.SetMinThreads to a thread count at which I could test my scenario, I still had issues.
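A hedged sketch of inspecting and raising the minimum; the target of 32 worker threads is an arbitrary test value, not a recommendation:

```csharp
using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        int workers, io;
        ThreadPool.GetMinThreads(out workers, out io);
        Console.WriteLine("min worker threads: {0}, min IO threads: {1}", workers, io);

        // Raise the minimum so the pool ramps up immediately instead of
        // injecting roughly one new thread per 500 ms under load.
        if (!ThreadPool.SetMinThreads(32, io))
            Console.WriteLine("SetMinThreads rejected the requested values");

        ThreadPool.GetMinThreads(out workers, out io);
        Console.WriteLine("min worker threads now: {0}", workers);
    }
}
```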

I am now using a custom Parallel.For which I adjusted so that I can set a different ChunkSize (set to 1) and ThreadCount for testing purposes.

At my job (Company Webcast) we have several APIs for our customers to use. One of those APIs allows webcasts to be created and modified, and that interface has data containing dates. Our platform works with UTC date-times, as we are an internationally operating company, so it is logical for us to store those as UTC.

We use WCF, and yesterday we had a very weird issue where calls failed. After investigating, we found that the cause was how the date-time was supplied to our service in the message.

The following values are valid in an XML message:

<ns1:ScheduledStart>2010-05-26T17:00:00+02:00</ns1:ScheduledStart>

<ns1:ScheduledStart>2010-05-26T15:00:00Z</ns1:ScheduledStart>

A WCF service accepts both values, but it treats them differently, which I did not expect! The first value became 2010-05-26 17:00 with DateTime.Kind set to Local, and the second became 2010-05-26 15:00 with DateTime.Kind set to Utc. This amazed me a bit, as I assumed that both would always result in either a UTC or a Local DateTime.
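The difference can be reproduced outside WCF with XmlConvert; a sketch, assuming round-trip parsing, which is effectively what the data contract serialization does:

```csharp
using System;
using System.Xml;

class DateTimeKindDemo
{
    static void Main()
    {
        // Offset form: converted to the machine's local time, Kind = Local.
        DateTime withOffset = XmlConvert.ToDateTime(
            "2010-05-26T17:00:00+02:00", XmlDateTimeSerializationMode.RoundtripKind);

        // Zulu form: Kind = Utc.
        DateTime zulu = XmlConvert.ToDateTime(
            "2010-05-26T15:00:00Z", XmlDateTimeSerializationMode.RoundtripKind);

        Console.WriteLine(withOffset.Kind); // Local
        Console.WriteLine(zulu.Kind);       // Utc

        // Both represent the same instant, so normalizing to UTC makes them equal.
        Console.WriteLine(withOffset.ToUniversalTime() == zulu); // True
    }
}
```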

The reason it fails is that another argument specifies the time zone in which the live webcast will be held. This is used in combination with the DateTime to convert the value to a local time, to inform the viewer about the conditions of the event. This code assumed that the incoming DateTime value would always be of kind Utc.

So now our front-end APIs convert incoming DateTime values to DateTime values with kind Utc.

This could also be a problem when you persist such a DateTime to, for example, a database and your storage logic does not convert the DateTime from/to UTC or Local depending on your needs. We use NHibernate for storage, and by default it has no way to set UTC/Local on a <property> definition. This can really become a problem when time is part of your business logic, as it is in ours: we use it to schedule tasks, and it is very important to know whether a time is UTC or not, especially when something happens on the other side of the world.

I often hear that NHibernate is not usable for selecting records because it is not possible to perform a not-equal comparison with the criteria API. I must admit that it took me a while before I found out how, but it really is quite logical once you know. Let's take a look at the following example:
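A hedged sketch with the classic criteria API; the Product entity, the Name property, and the value are made-up illustrations:

```csharp
// Not-equal with the NHibernate criteria API: wrap an Eq restriction in Not.
// "Product", "Name" and "Discontinued" are placeholder names.
var products = session.CreateCriteria<Product>()
    .Add(Restrictions.Not(Restrictions.Eq("Name", "Discontinued")))
    .List<Product>();
```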