We can then extend this code to feed the code coverage output into NDepend.

Downloading your dependencies

I’ve already covered this in the previous post I mentioned, so using the same helper method, we can also download our NDepend executables from an HTTP endpoint, and ensure we have the appropriate license key.

I’m using an extension I haven’t yet submitted to F# Make, but you can find the code here, and just reference it at the top of your F# Make script using

#load @"ndepend.fsx"
open Fake.NDepend

After adding a dependency between the NDepend and EnsureDependencies targets, we’re all good to go!
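In FAKE, that dependency is a one-liner. A sketch, assuming your targets are named “EnsureDependencies” and “NDepend” (adjust to whatever your script actually uses):

```fsharp
// Ensure the NDepend executables have been downloaded before analysis runs
"EnsureDependencies"
    ==> "NDepend"
```

FAKE’s `==>` operator declares ordering, so EnsureDependencies always runs first.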

Recording NDepend trends using TeamCity

To take this one step further, and store historical trends with NDepend, we need to persist a metrics folder across analysis runs. This could be a shared network drive, but in our case we actually just “cheat” and use TeamCity’s artifacts mechanism.

Each time our build runs, we store the NDepend output as an artifact – and restore the artifacts from the previous successful build the next time we run. Previously this was a bit of a pain, but as of TeamCity 8.1 you can reference your own artifacts, allowing for incremental-style builds.

In our NDepend configuration in TeamCity, ensure the artifacts path (under general settings for that build configuration) includes the NDepend output. For instance:

artifacts/NDepend/** => NDepend.zip

Go to Dependencies, and add a new artifact dependency. Select the same configuration in the drop-down (so it’s self-referencing), and select “Last Finished Build”. Then add a rule to extract the artifacts and place them in the same location that NDepend uses during the build, for instance:

NDepend.zip!** => artifacts/NDepend

TeamCity Report tabs

Finally, you can configure TeamCity to display an NDepend report tab for the build. Just go to “Report tabs” in the project (not build configuration) settings, and add NDepend using the start page “ndepend.zip!NDependReport.html” (for instance).

We’ve been using F# Make – an awesome cross-platform build automation tool, like make and rake.

As an aside (before you ask): The dotCover support in TeamCity is already excellent – as you’d expect – but if you want to use these coverage files elsewhere (NDepend, say), then you can’t use the out-of-the-box options very easily.

Downloading your dependencies

We’re using NUnit and MSpec to run our tests, and so in order to run said tests, we need to ensure we have the test runners available. Rather than committing them to source control, we can use F# make’s support for restoring NuGet packages.
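As a sketch of what that looks like in a FAKE script (assuming the classic `RestorePackages` helper and default package locations):

```fsharp
// build.fsx – restore NuGet packages (NUnit, MSpec runners) before anything else
#r @"tools\FAKE\tools\FakeLib.dll"
open Fake

// Restores every package referenced by the solution's packages.config files
RestorePackages()
```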

dotCover is a little trickier, as there’s no NuGet package available (the command line exe is bundled with TeamCity). So, we use the following helper and create an F# Make target called “EnsureDependencies” to download our dotCover and NDepend executables from an HTTP endpoint:
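The helper can be as simple as a download-if-missing function. A hypothetical sketch – the URLs, paths and function name here are placeholders, not the actual helper:

```fsharp
open System.Net

// Hypothetical helper: fetch a tool from an HTTP endpoint if it isn't already present
let ensureTool (url : string) (targetPath : string) =
    if not (System.IO.File.Exists targetPath) then
        use client = new WebClient()
        client.DownloadFile(url, targetPath)

Target "EnsureDependencies" (fun _ ->
    ensureTool "http://example.com/tools/dotCover.zip" @"tools\dotCover.zip"
    ensureTool "http://example.com/tools/NDepend.zip" @"tools\NDepend.zip"
)
```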

Next up is creating a target to actually run our tests and generate the coverage reports. We’re using the DotCover extensions in F# Make that I contributed a little while back. As mentioned, we’re using NUnit and MSpec which adds a little more complexity – as we must generate each coverage file separately, and then combine them.

At time of writing, if you replace IE with Chrome on Windows 8 then Chrome installs both a desktop and a Metro version of itself. Personally, as most of my time is spent in the desktop, I’d rather Chrome just always opened there.

Setting up some new infrastructure with a web and separate db tier, I was hit with the usual MSDTC woes.

Error messages progressed bit by bit as I opened things up:

Attempt #1: The partner transaction manager has disabled its support for remote/network transactions.

Attempt #2: Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.

Attempt #3: The MSDTC transaction manager was unable to push the transaction to the destination transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn’t have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers.

I couldn’t get past the final error though. DTCPing is a very useful tool if you’re struggling with this, along with this TechNet article on what settings should be in place. One warning popped up that sent me in the right direction:

WARNING:the CID values for both test machines are the same while this problem won’t stop DTCping test, MSDTC will fail for this

As it happens, both machines were from an identical VM clone, and therefore had identical “CID” values. You can check this by going to HKEY_CLASSES_ROOT\CID. Look for the key that has a description of “MSDTC”.

Having found Brian’s article, where he’d already done the hard work, I was set on my way – essentially you just need to uninstall and reinstall MSDTC on both machines. The following worked for me:
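For reference, the reinstall itself is just a couple of commands from an elevated prompt on each machine (this regenerates the CID values; check the service comes back up cleanly afterwards):

```
msdtc -uninstall
msdtc -install
net start msdtc
```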

If you’re migrating to a new website and need to map old IDs to new IDs, I’ve just discovered that the URL Rewrite module in IIS has a great feature I hadn’t come across before called rewriteMaps. This means that instead of writing a whole bunch of identical-looking rewrite rules, you can write one – and then simply list the ID mappings.

The syntax of the RegEx takes a bit of getting used to, but in our case we needed to map

/(various|folder|names|here)/display.asp?id=[ID]

to a new website url that looked like this:

/show/[NewId]

You can define a rewriteMap very simply – most examples I saw included full URLs here, but we just used the ID maps directly:
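Using the 525 → 114571 pair mentioned below, the map in web.config looks something like this (the map name and keys here are illustrative):

```xml
<rewrite>
  <rewriteMaps>
    <rewriteMap name="IdMap">
      <!-- one entry per old-ID/new-ID pair -->
      <add key="525" value="114571" />
    </rewriteMap>
  </rewriteMaps>
</rewrite>
```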

You can reference a rewriteMap using {MapName:{SomeCapturedValue}}, so if SomeCapturedValue equalled 525 then you’d get back 114571 in the list above.

Because we’re looking to match a querystring based id, and you can’t match queryString parameters in the primary match clause, we needed to add a condition, and then match on that captured condition value instead, using an expression like this:
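A sketch of the full rule, assuming a rewriteMap named `IdMap` as described above (the folder names are from the example URL; the map name is illustrative):

```xml
<rule name="LegacyIdRedirect" stopProcessing="true">
  <match url="^(various|folder|names|here)/display\.asp$" />
  <conditions>
    <!-- the querystring can't be matched in the url attribute, so capture the id here -->
    <add input="{QUERY_STRING}" pattern="(?:^|&amp;)id=(\d+)" />
  </conditions>
  <!-- {C:1} is the captured condition value; {IdMap:...} looks it up in the map -->
  <action type="Redirect" url="/show/{IdMap:{C:1}}" appendQueryString="false" />
</rule>
```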

After encountering a strange deployment issue today, eventually it was tracked down to an x86 assembly being deployed to a x64 process. There’s a tool included with Visual Studio called corflags that was helpful here. Open up a Visual Studio command prompt, type corflags.exe assemblyname.dll and you’ll see something like this:
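The output will look something like this (values vary by assembly – a `32BIT` flag of 1 on an assembly loaded into a 64-bit process is the smoking gun):

```
Version   : v4.0.30319
CLR Header: 2.5
PE        : PE32
CorFlags  : 3
ILONLY    : 1
32BIT     : 1
Signed    : 0
```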

Being new to the world of NServiceBus, I thought I’d share a few gotchas as I experience them.

When everything’s up and running there’s no easy way to see what’s going on as messages appear and disappear from the normal message queue very quickly. You can use an audit queue to log all messages appearing on a queue. To do this, in your app config you simply need to use the ForwardReceivedMessagesTo attribute, like so:
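For example (assuming an audit queue named “audit” – the queue name is up to you):

```xml
<configuration>
  <configSections>
    <section name="UnicastBusConfig"
             type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
  </configSections>
  <!-- every message received on this endpoint is also copied to the audit queue -->
  <UnicastBusConfig ForwardReceivedMessagesTo="audit">
    <MessageEndpointMappings />
  </UnicastBusConfig>
</configuration>
```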

NServiceBus won’t automatically create an audit queue, so you need to do so manually.

You can do this in code using:

NServiceBus.Utils.MsmqUtilities.CreateQueueIfNecessary(QueueName)

Alternatively, you can create it using the admin interface, but you need to ensure it has the same settings and permissions as the NServiceBus queues. Notably, that SYSTEM has permissions on the queue, and that it is transactional (if your queue is) – otherwise your audit queue will remain empty!

Running MsDeploy is awesome for automated deployments of websites, but it’s also possible to use it to deploy other applications to the file system – such as associated Windows services. You just need to jump through a few more hoops to get things up and running.

I’m using TeamCity for our integration server, but the basic steps will work regardless of the system you are using. I tend to set up TeamCity to have a general “Build entire solution” configuration. This builds the entire project in release mode, and performs any config transformations you need (check out my post here if you want to transform app.config files for your service).

Next, for each component and configuration we want to deploy (i.e. website to staging, website to production, services to staging, services to production), I create a new build configuration, with a dependency on the “build entire solution” configuration. This means we can assume that the build has completed successfully.

After the build, there are a few steps that need to complete:

Stop the existing service and uninstall it

Copy over the output from the build to the target deployment server

Install the new service and start it

Stopping and starting the services

For the first and last steps, we can define two simple batch files for each, with a hard coded path of where we’ll install the service on the target server.

These should be saved in source control as part of your project resources (I put them in a Deploy folder), and so accessible from the build server. These are very basic at the moment – they could equally be PowerShell scripts doing far more complicated things or accepting configurable parameters – but this will do us for our example scenario!
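As an illustration, assuming a Windows service called “MyService” installed to C:\Services\MyService (the name and paths are placeholders), the two batch files might look like:

```
REM stopService.bat – stop and uninstall the existing service
net stop MyService
C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe /u "C:\Services\MyService\MyService.exe"

REM startService.bat – install the freshly deployed service and start it
C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe "C:\Services\MyService\MyService.exe"
net start MyService
```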

We will use MsDeploy’s preSync and postSync commands to execute these batch files before and after it performs the synchronization on the file system.

preSync:runCommand – before we perform the deployment, we can pass the path to a batch file that will be streamed to the deployment server and executed. By default, this will be run under a restricted local service account (“The WMSvc uses a Local Service SID account that has fewer privileges than the Local Service account itself.” – from MSDN).

source:dirPath – this sets the path we want to copy files from (we’re using a parametrized build template in TeamCity to pass in the full path to the source directory and the current configuration).

dest:computerName – this is actually several parameters combined. I tried various permutations, and this is what worked best for me. I’m not using NTLM authentication here (so authType=basic) because my staging and production servers are on an external network. The username and password are for an IIS Management Service user that we’ll set up in a minute (and are also parametrized by TeamCity – but you could hard code them here).

allowUntrusted – allows MsDeploy to accept the unsigned certificate from our target server. You don’t need this if you’re using an SSL certificate from a trusted authority.

postSync:runCommand – the command we run after a successful deployment.
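Putting those parameters together, the full command looks something like this – server name, paths, credentials and batch file names are all placeholders:

```
msdeploy.exe -verb:sync ^
  -preSync:runCommand="C:\Build\Deploy\stopService.bat",waitInterval=30000 ^
  -source:dirPath="C:\Build\Output\MyService" ^
  -dest:dirPath="C:\Services\MyService",computerName=https://staging-server:8172/msdeploy.axd,userName=DeployUser,password=secret,authType=basic ^
  -allowUntrusted ^
  -postSync:runCommand="C:\Build\Deploy\startService.bat",waitInterval=30000
```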

There’s one gotcha with the preSync and postSync operations at the moment – if either returns an error code (such as being unable to install or start the service), the whole MsDeploy action still returns success. I haven’t found a nice way around this yet – you’d have to write some PowerShell to parse the output and detect errors. Microsoft knows about the issue, so hopefully it will be fixed in the next release.

Configuring MsDeploy

Before we try and run this command, we need to set up a few things on the target server we are deploying to. I’m assuming you’re already using MsDeploy to deploy websites, and so you can already see IIS Management Service, IIS Manager Permissions, IIS Manager Users, and Management Service Delegation appearing as options under “Management” in your main IIS server configuration screen.

Create a new IIS user from the IIS Manager Users screen. Alternatively, you can create a Windows user and use that instead.

Even though we’re installing a service, we still need a target IIS website to associate our credentials with. This could be a dedicated empty website (it doesn’t need to be running) or an existing one. Make sure you replace “DummyWebSiteName” in the command above with the name of the actual website you choose. The underlying path doesn’t matter, as we override the target path as part of our MsDeploy command.

Go into “IIS Manager Permissions” for the dummy website you are using, click “Allow user” and select either the IIS or Windows user you created above.

Next, go into “Management Service Delegation”. We need to create two permissions – one so we can deploy the files to the file system, and another so we can run the pre/post sync commands. For the first, click “Add Rule”, select “Blank Rule”, then type “contentPath” in the providers field and “*” in the actions, and set the path to the one you are going to deploy the service to. Save that, and add another blank rule.

For this second rule, type “runCommand” in the providers field, “*” in actions, and choose “SpecificUser” under the Run As… Identity Type field. We need to run under elevated permissions in order to stop/start services and install them, so choose a user account that has these privileges.

File and user account permissions

In order for everything to work, we need to ensure that MsDeploy can access the folder we’re deploying to. We also need to extend the Local Service account so that it can impersonate a more elevated user in order to run the console commands necessary to stop/start and install services (note there are security implications to this – see MSDN for more details).

Add read/write access for the Local Service account to the target deployment folder.
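The impersonation part is granted by adding privileges to the Web Management Service with sc – this is my understanding of the standard approach; double-check the privilege list against MSDN before running it:

```
sc privs wmsvc SeChangeNotifyPrivilege/SeImpersonatePrivilege/SeAssignPrimaryTokenPrivilege/SeIncreaseQuotaPrivilege
```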

Finally, you need to restart the Web Management Service for this to take effect.

If all has been set up correctly, you should now be all good to go – services will automatically deploy and get started!

Ignoring/preserving files

In a similar fashion to when deploying websites, you may find you wish to preserve logging folders and similar during deployment. You can do this by adding some additional parameters to the MsDeploy command. For instance:

-skip:objectName=filePath,skipAction=Delete,absolutePath=\\Logs\\.*$

will preserve any files in the Logs directory.

Common error messages & troubleshooting

When starting out with MsDeploy it’s likely you’ll hit a fair number of permission denied errors – without too much more information. Logging is your friend.

Request logging – enabled through the Management Service configuration window in IIS; you will find requests logged to %SystemDrive%\Inetpub\logs\WMSvc

Below I’ve included some common error messages and some possible causes.

“Connected to the destination computer (“xyz”) using the Web Management Service, but could not authorize. Make sure that you are using the correct user name and password, that the site you are connecting to exists, and that the credentials represent a user who has permissions to access the site.”

Probably because the username and password you are using are invalid (they haven’t been set up) or do not have permissions set for the particular “dummy” website you are targeting.

“Could not complete an operation with the specified provider (“runCommand”) when connecting using the Web Management Service. This can occur if the server administrator has not authorized the user for this operation.”

Most likely you have not set up the correct delegated services through the Management Service Delegation window – either no runCommand permissions have been set, or the delegated user doesn’t have permissions to run the command.

“Could not complete an operation with the specified provider (“dirPath”) when connecting using the Web Management Service. This can occur if the server administrator has not authorized the user for this operation.”

Either you haven’t set the dirPath permissions via the Management Service Delegation window, or the Local Service account does not have read/write access to the specified directory.

“Error during ‘-preSync’. An error occurred when the request was processed on the remote computer. The server experienced an issue processing the request. Contact the server administrator for more information.”

This occurred for me when I hadn’t given the Web Management Service permission to impersonate another user using the sc privs command described above, or had done so but hadn’t yet restarted the service.