However, whilst testing we found that any of our builds that used the Typemock build activities failed when the build agent was running interactively, but worked perfectly when it was running as a service. The error was

So the issue was registry access. Irrespective of whether running interactively or as a service, I used the same domain service account, which was a local admin on the build agent. The only thing that changed was the mode of running.

After some thought I focused on UAC as the problem, but disabling it did not seem to fix the issue. I was stuck, or so I thought.

However, Robert Hancock, unknown to me, was suffering a similar problem with a TFS build that included a post-build event that was failing to xcopy a BizTalk custom functoid DLL to 'Program Files'. He kept getting an 'exit code 4 access denied' error when the build agent was running interactively. It turns out the solution he found on Daniel Petri's blog also fixed my issue, as both were UAC/desktop interaction related.

The solution was to create a group policy for the build agent VMs that set the following:

User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode – Set its value to Elevate without prompting.

User Account Control: Detect application installations and prompt for elevation – Set its value to Disabled.

User Account Control: Only elevate UIAccess applications that are installed in secure locations – Set its value to Disabled.

User Account Control: Run all administrators in Admin Approval Mode – Set its value to Disabled.
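For testing on a single agent before creating the GPO, the equivalent local registry values can be set directly. This is only a sketch, based on the standard registry values that back these four policies; a reboot is still required.

# Sketch: local registry equivalents of the four UAC policy settings above
$key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
Set-ItemProperty -Path $key -Name ConsentPromptBehaviorAdmin -Value 0   # Elevate without prompting
Set-ItemProperty -Path $key -Name EnableInstallerDetection -Value 0     # Detect application installations: Disabled
Set-ItemProperty -Path $key -Name EnableSecureUIAPaths -Value 0         # Only elevate UIAccess apps in secure locations: Disabled
Set-ItemProperty -Path $key -Name EnableLUA -Value 0                    # Run all administrators in Admin Approval Mode: Disabled
Restart-Computer                                                        # the changes only take effect after a reboot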

Once this GPO was pushed out to the build agent VMs and they were rebooted, my Typemock-based builds and Robert's BizTalk builds all worked as expected.

I have been trying to script the installation of all the tools and SDKs we need on our TFS build agent VMs. This included BizTalk. A quick check on MSDN showed the setup command line parameter I needed to install the build components was

/ADDLOCAL ProjectBuildComponent
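From a provisioning script the installer might be invoked like this. A sketch only: the drive letter assumes the BizTalk ISO is already mounted, and the /QUIET switch is an assumption based on the setup documentation rather than anything verified here.

# Sketch: silent install of just the BizTalk project build components
# E: is assumed to be the mounted BizTalk installation media
& "E:\Setup.exe" /QUIET /ADDLOCAL ProjectBuildComponent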

So I ran this via my VM's setup PowerShell script and all appeared OK, but when I tried a build I got the error

Had an interesting issue during an upgrade from TFS 2012 to 2013.2 today. The upgrade of the files proceeded as expected and the wizard ran. It picked up the correct data tier, found the tfs_configuration DB and I was able to fill in the service account details.

However, when I got to the reporting section it found the report server URLs, but when it tried to find the tfs_warehouse DB it seemed to lock up, though the test of the SQL instance on the same page worked OK.

In the end I used task manager to kill the config wizard.

I then re-ran the wizard, switching off the reporting. This time it got to the verification step, but seemed to hang again. After a very long wait it came back with an error that the account being used to do the upgrade did not have sysadmin rights on the SQL instance.

On checking, this turned out to be true; the user's rights had been removed by a DBA at some point since the system was originally installed. Once the rights were re-added the upgrade proceeded perfectly, though interestingly the first page, where you confirm the tfs_configuration DB, now also had a check box about AlwaysOn, which it had not before.
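For reference, re-adding the rights is a one-liner if the SQL PowerShell module is available. A sketch, with the instance and account names invented for illustration:

# Sketch: re-add sysadmin rights for the upgrade account (names are examples)
Invoke-Sqlcmd -ServerInstance "sqltier01" -Query "ALTER SERVER ROLE [sysadmin] ADD MEMBER [OURDOMAIN\tfsadmin];"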

So the strange thing was not that it failed, I would expect that, but that any of the wizard worked at all. I would have expected a failure to even find the tfs_configuration DB at the start of the wizard, not having to wait until the verification (or reporting) step.

Today I found I had a problem when trying to associate a Release Management 2013.2 release pipeline with a TFS build. When I tried to select a team project the drop down for the release properties was empty.

The strange thing was this installation of Release Management had been working OK the week before. What had changed?

I suspected an issue connecting to TFS, so in the Release Management client's 'Managing TFS' tab I tried to verify the active TFS server linked to Release Management. As soon as I tried this I got an error saying the TFS server was not available.

I switched the TFS URL from HTTPS to HTTP, retried the verification, and it worked. Going back to my release properties I could now see the build definitions in the drop down again. So I knew I had an SSL issue.

The strange thing was we use SSL as our default connection, and none of our developers were complaining they could not connect via HTTPS.

However, on checking I found there was an issue on some of our build VMs. If on those VMs I tried to connect to TFS in a browser with an HTTPS URL, I got a certificate chain error.
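A quick way to reproduce this outside a browser is to request the TFS URL from PowerShell; a sketch, with our server URL replaced by a placeholder:

# Sketch: on the broken VMs this throws a trust relationship/SSL error rather
# than returning a response, confirming the certificate chain problem
Invoke-WebRequest -Uri "https://tfs.yourdomain.com/tfs" -UseBasicParsing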

But stranger, on my PC, where I was running the Release Management client, I could access TFS over HTTPS from a browser and Visual Studio, but the Release Management verification failed.

The solution

It turns out the issue was an intermediate certificate problem with our TFS server. An older DigiCert intermediate certificate had expired over the weekend, and though the new certificate was in place, and had been for a good few months since we renewed our wildcard certificate, the active wildcard certificate insisted on using the old version of the intermediate certificate on some machines.

As an immediate fix we ended up having to delete the old intermediate certificate manually on the machines showing the error. Once this was done the HTTPS connection worked again.
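The manual deletion can also be scripted. This sketch removes any expired DigiCert intermediate certificates from the machine's Intermediate Certification Authorities store; run it elevated, and check what the filter matches before deleting anything.

# Sketch: find and remove expired DigiCert intermediate certs
Get-ChildItem Cert:\LocalMachine\CA |
    Where-Object { $_.Issuer -like "*DigiCert*" -and $_.NotAfter -lt (Get-Date) } |
    Remove-Item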

It turns out the real culprit was a group policy used to push out intermediate certificates that need to be trusted for some document automation we use. This old group policy was pushing the wrong version of the certificate to some server VMs. Once the policy was updated with the correct certificate and pushed out, it overwrote the problem certificate and the problem went away.

One potentially confusing thing here is that the 'verify the TFS link' check in Release Management verifies that the Release Management server can see the TFS server, not that the PC running the Release Management client can. It was on the Release Management server that I had to delete the dead certificate (and run a gpupdate /force to get the new policy). Hence my confusion over my own PC working for Visual Studio but not for Release Management.

So I suspect an empty drop down is always going to mean the Release Management server cannot see the TFS server for some reason, so check certificates, permissions and basic network connectivity.

Rik recently posted about the work we have done to automatically provision TFS build agent VMs. This has come out of us having about 10 build agents on our TFS server, all doing different jobs, with different SDKs etc. When we needed to increase capacity for a given build type we had problems: could another agent run the build? What exactly was on the agent anyway? An audit of the boxes made for horrible reading; they were very inconsistent.

So Rik automated the provisioning of new VMs and I looked at providing a PowerShell script to install the base tools we need on our build agents, knowing this list is going to change a good deal over time. After some thought, for our first attempt we picked:

TFS itself (to provide the 2013.2 agent)

Visual Studio 2013.2 – you know you always end up installing it in the end to get SSDT, SDK and MSBuild targets etc.

WiX 3.8

Azure SDK 2.3 for Visual Studio 2013.2 – virtually all our current projects need this. This is actually why we have had capacity issues on the old build agents, as it was only installed on one.

Given this basic set of tools we can probably build 70-80% of our solutions. If we use this as the base for all build boxes we can then manually add extra tools if required, but we expect we will just end up adding to the list of items installed on all our build boxes, assuming the cost of installing the extra tools/SDKs is not too high. Also we will try to auto-deploy tools as part of our build templates where possible, again reducing what needs to be placed on any given build agent.

Now the script I ended up with is a bit rough and ready, but it does the job. I think a move to DSC might help this process in the future, but I did not have time to write the custom resources now. I am assuming this script is going to be a constant work in progress as it is modified for new releases and tools. I did make the effort to have every step check whether it needs to be done, allowing the script to be re-run to 'repair' a build agent. All the writing to the event log is to make life easier for Rik when working out what the script is doing, especially useful as the installs from ISOs are a bit slow to run.

# add build service as local admin, not essential but makes life easier for some projects
Add-LocalAdmin -domain "ourdomain" -user "Tfsbuilder"

# Install TFS by mounting the ISO over the network and running the installer.
# The command '& $isodrive + ":\tfs_server.exe" /quiet' is run.
# In the function a while loop watches for the tfsconfig.exe file to appear and assumes the
# installer is then done – dirty but works – which allows me to use write-progress to give
# some indication the install is done.
Write-Output "Installing TFS server"
Add-Tfs "\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_team_foundation_server_2013_with_update_2_x86_x64_dvd_4092433.iso"

Write-Output "Configuring TFS Build"
# clear out any old config – I found this helped avoid errors when re-running the script.
# A System.Diagnostics.ProcessStartInfo object is used to run the tfsconfig command with the
# argument "setup /uninstall:All"; ProcessStartInfo is used so we can capture the error output
# and log it to the event log if required.
Unconfigure-Tfs

# and reconfigure, again using tfsconfig, this time with the argument
# "unattend /configure /unattendfile:config.ini", where the config.ini has been created with
# the tfsconfig unattend /create flag (check MSDN for the details)
Configure-Tfs "\\store\ApplicationInstallers\TFSBuild\configsbuild.ini"

# install VS2013, again by mounting the ISO and running the installer, with a loop to check for a file appearing
Write-Output "Installing Visual Studio"
Add-VisualStudio "\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_premium_2013_with_update_2_x86_dvd_4238022.iso"

# install the Azure SDK using the Web Platform Installer, checking if the Web PI is present
# first and installing it if needed. The Web PI installer lets you ask to reinstall a package;
# if it is already installed it just ignores the request, so you don't need to check if the
# Azure SDK is already present.
Write-Output "Installing Azure SDK"
Add-WebPIPackage "VWDOrVs2013AzurePack"
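The helper functions themselves are not shown here, but as an illustration of the pattern described in the comments, here is a minimal sketch of what something like Add-Tfs could look like. The installed path and polling interval are invented for illustration, not the real function.

# Sketch: mount an ISO, run the installer silently, poll for a known file
function Add-Tfs
{
    param([string]$isoPath)

    # mount the ISO and work out the drive letter it was given
    $drive = (Mount-DiskImage -ImagePath $isoPath -PassThru | Get-Volume).DriveLetter

    # kick off the silent install
    & "$($drive):\tfs_server.exe" /quiet

    # dirty but works: poll until the installed tfsconfig.exe appears
    $tfsconfig = "${env:ProgramFiles}\Microsoft Team Foundation Server 12.0\Tools\tfsconfig.exe"
    while (-not (Test-Path $tfsconfig))
    {
        Write-Progress -Activity "Installing TFS" -Status "Waiting for tfsconfig.exe to appear"
        Start-Sleep -Seconds 30
    }

    Dismount-DiskImage -ImagePath $isoPath
}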

So for a first pass this seems to work. I now need to make sure all our builds can use this cut-down build agent; if they can't, do I need to modify the build template? Do I need to add more tools to our standard install? Or is the build going to need a special agent definition?

Once this is all done, the hope is that when the TFS build agents all need patching for TFS 2013.x we will just redeploy new VMs, or run a modified script to silently do the update. We shall see if this delivers on that promise.

I am currently rebuilding our TFS build infrastructure; we have too many build agents that are just too different, and they don't need to be. So I am looking at a standard set of features for a build agent and the ability to auto-provision new instances to make scaling easier. More on this in a future post…

Anyway, whilst testing a new agent I had a problem. A build that had worked on a previous test agent failed with the error

Could not load file or assembly ‘Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’ or one of its dependencies. The located assembly’s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

The log showed it was failing to even do a get latest of the files to build, or anything on the build agent.

Turns out the issue was that the PowerShell script that installs all the TFS build components and SDKs had failed when trying to install the Azure SDK for VS2013: the Web Platform Installer was not installed, so when the script tried to use its command line installer to add the package it failed.

I fixed the issue with the Web PI tools and re-ran the command line to install the Azure SDK, and all was OK.
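The fix in the script amounts to a guard like the following sketch; the WebpiCmd path and product name match what the script uses, but treat the details as illustrative.

# Sketch: make sure the Web Platform Installer is present before using it
$webpicmd = "${env:ProgramFiles}\Microsoft\Web Platform Installer\WebpiCmd.exe"
if (Test-Path $webpicmd)
{
    & $webpicmd /Install /Products:VWDOrVs2013AzurePack /AcceptEula
}
else
{
    Write-Warning "Web PI is missing - install it before attempting the Azure SDK"
}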

I am not sure why this happened; maybe a missing pre-requisite put on by Web PI itself was the issue. I know older versions had a .NET 3.5 dependency. One to keep an eye on.

If you are doing any work with Azure Cloud Applications there is a very good chance you will want your automated build process to produce the .CSPKG deployment file; you might even want it to do the deployment too.

On our TFS build system, it turns out this is not as straightforward as you might hope. The problem is that the MSBuild publish target that creates the files puts them in the $(build agent working folder)\source\myproject\bin\debug folder, unlike the output of the build target, which goes to the $(build agent working folder)\binaries\ folder that gets copied to the build drops location. Hence, though the files are created, they are not accessible to the team alongside the rest of the built items.

I have battled to sort this for a while, trying to avoid the need to edit our customised TFS build process template. This is something we try to avoid where possible, favouring environment variables and MSBuild arguments where we can get away with it. There is no point denying that editing build process templates is a pain point of TFS.

The solution – editing the process template

Turns out a colleague had fixed the same problem a few projects ago and the functionality was already hidden in our standard TFS build process template. The problem was that it was not documented; a lesson for all of us that it is a very good idea to put customisation information in a searchable location, so others can find customisations that are not immediately obvious. Frankly this is one of the main purposes of this blog: somewhere I can find what I did years ago, as I won't remember the details.

Anyway, the key is to make sure the publish target for MSBuild uses the correct location to create the files. This is done using a pair of MSBuild arguments in the advanced section of the build configuration:

/t:MyCloudApp:Publish – this tells MSBuild to perform the publish action for just the project MyCloudApp. You might be able to use just /t:Publish if only one project in your solution has a Publish target.

/p:PublishDir=$(OutDir) – this is the magic. We pass in the placeholder value $(OutDir). At this point we don't know the target binary location, as it is build agent/instance specific; customisation in the TFS build process template converts this placeholder to the correct path (the combined arguments entry is shown below).
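Put together, the entry in the build definition's advanced MSBuild arguments field looks like this (the project name is illustrative):

/t:MyCloudApp:Publish /p:PublishDir=$(OutDir)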

In the build process template, in the Initialize Variable sequence within Run on Agent, add an If activity.

Set the condition to MSBuildArguments.Contains("$(OutDir)")

Within the true branch add an Assign activity that sets the MSBuildArguments variable to MSBuildArguments.Replace("$(OutDir)", String.Format("{0}\{1}\", BinariesDirectory, "Packages"))

This will swap the $(OutDir) for the correct TFS binaries location within that build.
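To see what the substitution does, here is the same Replace expressed in PowerShell with a typical agent path (the binaries folder shown is just an example, not a fixed location):

# Sketch: what the Assign activity effectively does
$MSBuildArguments = '/t:MyCloudApp:Publish /p:PublishDir=$(OutDir)'
$BinariesDirectory = 'C:\Builds\1\MyTeamProject\MyBuildDef\bin'
$MSBuildArguments.Replace('$(OutDir)', ('{0}\{1}\' -f $BinariesDirectory, 'Packages'))
# -> /t:MyCloudApp:Publish /p:PublishDir=C:\Builds\1\MyTeamProject\MyBuildDef\bin\Packages\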

After that it all just works as expected. The CSPKG file etc. ends up in the drops location.

Other things that did not work (prior to TFS 2013)

I had also looked at running a PowerShell script at the end of the build process, or adding an AfterPublish target within the MSBuild process (by adding it to the project file manually) that did a file copy. Both these methods suffered from the problem that when the MSBuild command ran it did not know the location to drop the files into. Hence the need for the customisation above.

Now I should point out that though we are running TFS 2013, this project was targeting the TFS 2012 build tools, so I had to use the solution outlined above, a process template edit. However, if we had been using the TFS 2013 process template as the base for our customisation then we would have had another way to get around the problem.

All teams have 'Stakeholders', the people who are driving a project forward and want the new system to be able to do their job, but who are often not directly involved in the production/testing of the system. In the past this has been an awkward group to provide TFS access for: if they want to see any detail of the project they need a TFS CAL, which is expensive for the occasional casual viewer. The newly announced 'Stakeholder' license for Visual Studio Online changes this, providing:

Access to the backlog, including adding and updating work items (but no ability to reprioritize the work)

Ability to receive work item alerts

The best news is that it will be a free license, so there is no monthly cost to have as many 'Stakeholders' as you like on your VSO account.

Now most of my clients are using on-premises TFS, so this change does not affect them directly. However, the same post mentions that the 'Work Item Web Access' TFS CAL exemption will be changed in future releases of TFS to bring it into line with the 'Stakeholder' license.

So good news all round: it makes TFS adoption easier and adds more ways for clients to access their ALM information.

Back in January I did a post, 'How long is my TFS 2010 to 2013 upgrade going to take?'. I have now done some more work with one of the clients and have more data. Specifically, the initial trial was 2010 > 2013 RTM on a single-tier test VM; we have now done a test upgrade from 2010 > 2013.2 on the same VM, and also one to a production-quality dual-tier system.

The key lessons are:

There are 150 more steps to go from 2013 RTM to 2013.2, so it takes a good deal longer.

The dual-tier production hardware is nearly twice as fast at doing the upgrade, though the initial steps (such as step 31, moving the source code) are not that much faster. It is the steps after this that are quicker; we put it down to far better SQL throughput.