All posts by rfennell

If you are trying to use Release Management or any deployment tool with a network isolated Lab Management setup you will have authentication issues. Your isolated domain is not part of your production domain, so you have to provide credentials. In the past this meant Shadow Accounts or the simple expedient of running a NET USE at the start of your deployment script to provide a login to the drops location.
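
For example (the server, share and account names here are purely illustrative), a line like the following at the start of the deployment script maps the drops share using production-domain credentials:

net use \\tfsserver\Drops /user:proddomain\deployuser *

The * prompts for the password; in an automated deployment script the password would normally be passed in as a parameter rather than prompted for.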

In Release Management 2013.4 we get a new option to address this issue if you are using DSC-based deployment: 'Deploy from a build drop using a shared UNC path'. In this model the Release Management server copies the contents of the drops folder to a known share and passes credentials to access it down to the DSC client (you set these as parameters on the server).
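
As a very rough sketch of the underlying idea (this is not the Release Management feature's internals; all the paths, names and parameters below are hypothetical), a DSC configuration can copy content from a UNC path using a credential passed in as a parameter:

Configuration CopyFromDrop
{
    param([PSCredential]$DropCredential)   # credential supplied by the caller

    Node "localhost"
    {
        File DropContent
        {
            Ensure          = "Present"
            Type            = "Directory"
            Recurse         = $true
            SourcePath      = "\\rmserver\ReleaseShare\MyApp"   # hypothetical shared UNC path
            DestinationPath = "C:\Deploy\MyApp"
            Credential      = $DropCredential                   # used to access the share
        }
    }
}

(Using credentials in DSC also requires the configuration data to allow, or better encrypt, them; that is omitted here.)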

This is a nice formalisation of the tricks we had to pull by hand in the past, and something I had missed when Update 4 came out.

Whilst helping a Java-based team (part of a larger organisation that used many sets of both Microsoft and non-Microsoft tools) to migrate from Subversion to TFS I had to tackle their Jenkins/Ant based builds.

They could have stayed on Jenkins and switched to the TFS source provider, but they wanted to at least look at how TFS build would better allow them to trace their builds against TFS work items.

All went well; we set up a build controller and agent specifically for their team and installed Java onto it, as well as the TFS build extensions. We were very quickly able to get our test Java project building on the new build system.

One feature that their old Ant scripts used was to store the build name/number into the Manifest of any JAR files created, a good plan as it is always good to know where something came from.

But this did not work; I just saw the literal text ${env.TF_BUILD_BUILDNUMBER} in my manifest. Basically the environment variable could not be resolved.

After a bit more of a think I realised the problem: the Ant/Maven build extensions for TFS are based on TFS 2008 style builds, and the build environment variables are a TFS 2012 and later feature, so of course they are not set.

A quick look in the automatically generated TFSBuild.proj file for the build showed that the MSBuild $(BuildNumber) property was passed into the Ant script as a property, so it could be referenced in the Ant Jar target as ${BuildNumber} (note the brackets change from () to {}).
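
So the manifest entry can be populated along these lines (the target, file names and manifest attribute here are illustrative, not the team's actual build script):

<target name="jar" depends="compile">
  <jar destfile="dist/myapp.jar" basedir="build/classes">
    <manifest>
      <!-- BuildNumber is the property passed in from the TFS build; note the {} syntax -->
      <attribute name="Implementation-Version" value="${BuildNumber}"/>
    </manifest>
  </jar>
</target>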

I think this book is well worth a read for anyone, irrespective of their role in a team; its short chapters (usually a couple of pages per idea) mean it is easy to pick up and put down when you get a few minutes. Perfect for that commute.

From my Hyper-V VMs the virtual router seems to be fine; they all have a single network adaptor linked to the virtual switch that issues IP addresses via DHCP. The issues have been for the host operating system. I wanted to connect this to the internal virtual switch to allow easy access to my VMs (without the management complexity of punching holes in the router firewall), but when I did this I got inconsistent performance (made harder to diagnose due to moving house from a fast Virgin cable based Internet connection to a slow BT ADSL based link whose performance profile varies greatly based on the hour of the day; I was never sure if it was a problem with my router or BT's service).

The main problem I saw was that it seemed the first time I accessed a site it was slow, but then was often OK. So a lookup issue, DNS?

Reaching back into my distant memory as a network engineer (early 90s, some IP but mostly IPX and NETBIOS) I suspected a routing or DNS look-up issue. Routing you can do something about via routing tables and metrics, but DNS is harder to control with multiple network connections.

Note: I also told the virtual switch network adaptor on the host machine not to use the DNS settings provided by the virtual router, but this seemed to have little effect; when using nslookup it still picked the virtual router, until I changed the binding order.
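
A quick way to check which DNS server is actually being consulted is to run nslookup against any host name (this one is just an example):

nslookup www.bing.com

The Server/Address lines at the top of the output show which DNS server answered; if that is still the virtual router (192.168.1.1 in my case) then the adaptor's DNS settings are not being honoured and it is the binding order that needs changing.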

On the routing front, I set the manual metric on IPv4 traffic via the virtual router adaptor to a large number, to make it the least likely route anywhere. Doing this should mean only traffic to the internal 192.168.1.x network uses that adaptor.
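
The metric can be set from the adaptor's IPv4 properties, or with PowerShell along these lines (the interface alias is whatever your virtual switch adaptor is actually called; the one below is just an example):

Set-NetIPInterface -InterfaceAlias "vEthernet (Internal Switch)" -AddressFamily IPv4 -InterfaceMetric 5000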

With these changes in place, the routing table on my host operating system (as shown by route print) looked as expected when the system was working OK.

Outstanding Issues

Routing

I did see some problems if the route via the virtual switch appeared first in the list; this can happen when you change WiFi hotspot. The fix is to delete the unwanted route (0.0.0.0 to 192.168.1.1).

route delete 0.0.0.0 MASK 0.0.0.0 192.168.1.1

But most of the time fixing the binding order seemed enough, so I did not need to do this.

External DHCP Refresh

If you swap networks, going from work to home, your external network will have a different IP address. You do have to restart the router VM (or manually renew DHCP) to get a new address.
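
If the router VM is a Windows guest, the manual renewal is just the usual pair of commands run inside that VM against its external adaptor (the adaptor name here is an example):

ipconfig /release "External"
ipconfig /renew "External"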

The solution I have used is Hyper-V checkpoints on my router VM: one set for DHCP and another with the static IP settings for my home network. Again not great, but workable for me most of the time. I am happier editing the router VM rather than many guest VMs.
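
With the Hyper-V PowerShell module this is easy to script; assuming the VM and checkpoint names below (they are mine, yours will differ), creating and switching between the two states looks like:

# after configuring the external adaptor for DHCP
Checkpoint-VM -Name "Router" -SnapshotName "External via DHCP"
# ...reconfigure the adaptor with the home static settings, then
Checkpoint-VM -Name "Router" -SnapshotName "Home static IP"
# switch back to whichever state is needed
Restore-VMSnapshot -VMName "Router" -Name "External via DHCP" -Confirm:$false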

We use O365 to provide Lync messaging, so when I rebuilt my PC I thought I needed to re-install the client; I logged into the O365 web site and selected the install option. Turns out this was a mistake. I had Office 2013 installed, so I already had the client; I just had not noticed.

If you do install the O365 Lync client (as well as the Office 2013 one) you get file access errors reported against your outlook.ost files. If this occurs, just uninstall the O365 client and use the one in Office 2013; the errors go away.

Whilst getting integration tests running as part of a Release Management pipeline within Lab Management I hit a problem: TCM-triggered tests failed as the tool claimed it could not access the TFS build drops location, and no .TRX (test results) file was being produced. This was strange as it used to work (the RM system had worked when it was 2013.2; it seems to have started to be an issue with 2013.3 and 2013.4, but this might be a coincidence).

The issue was twofold.

Permissions/Path Problems accessing the build drops location

The build drops location is passed into the component using the argument $(PackageLocation). This is pulled from the component properties; it is the TFS provided build drop path with a \ appended on the end.

Note that the \ in the text box is there because the text box cannot be empty; it tells the component to use the root of the drops location. This is the issue: when you are in a network isolated environment and have had to use NET USE to authenticate with the TFS drops share, the trailing \ causes a permissions error (it might occur in other scenarios too, I have not tested it).

Removing the slash, or adding a . (period) after the \, fixes the path issue, so:

\\server\Drops\Services.Release\Services.Release_1.0.227.19779 – works

\\server\Drops\Services.Release\Services.Release_1.0.227.19779\ – fails

\\server\Drops\Services.Release\Services.Release_1.0.227.19779\. – works

So the answer is either to add a . (period) in the pipeline workflow component so the build location is $(PackageLocation). as opposed to $(PackageLocation), or to edit the PS1 file that is run, adding some validation to strip out any trailing \ characters. I chose the latter, making the edit in the PS1 file.
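
The edit itself is trivial; something along these lines near the top of the script (the variable name is illustrative, use whatever the script actually calls its drop location parameter):

# strip any trailing backslashes from the drop location before it is used
$buildDropLocation = $buildDropLocation.TrimEnd('\')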

Cannot find the TRX file even though it is present

Once the tests were running I still had an issue: even though TCM had run the tests, produced a .TRX file and published its contents back to TFS, the script claimed the file did not exist and so could not pass the test results back to Release Management.

The issue was the call being used to check for the file's existence.

[System.IO.File]::Exists($testRunResultsTrxFileName)

As soon as I swapped to the recommended PowerShell way to check for files, Test-Path, the script found the .TRX file and the test results were passed back to Release Management as expected.
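
That is, the check becomes:

Test-Path $testRunResultsTrxFileName

Test-Path resolves relative paths against PowerShell's current location, whereas [System.IO.File]::Exists resolves them against the process working directory ([Environment]::CurrentDirectory), which PowerShell does not keep in sync with its location; that difference is the usual explanation for this sort of mismatch.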

I have a few old Visual Studio Online (VSO) accounts (dating back to TFSPreview.com days). We use them to collaborate with third parties, and it was long overdue that I tidied them up; a problem historically has been that all access to VSO has been via Microsoft Accounts (LiveID, MSA), and these are hard to police, especially if users mix personal and business ones.

The solution is to link your VSO instance to an Azure Active Directory (AAD). This means that only users listed in the AAD can connect to the VSO instance. As this AAD can be federated to an on-prem company AD it means that the VSO users can be either

Company domain users

MSA accounts specifically added to AAD

Either way it gives the AAD administrator an easy way to manage access to VSO. A user with an MSA, even if an administrator in VSO, cannot add any unknown users to VSO. For details see MSDN. All straightforward you would think, but I had a few issues.

The problem was I had set up my VSO accounts using an MSA in the form user@mycompany.co.uk, which was also linked to my MSDN subscription. As part of the VSO/AAD linking process I needed to add the MSA user@mycompany.co.uk to our AAD, but I could not. The AAD was set up for federation of accounts in the mycompany.com domain, so you would have thought I would be OK, but back in our on-prem AD (the one it was federated to) I had user@mycompany.co.uk as an email alias for user@mycompany.com. This blocked the adding of the user to AAD, and hence I could not link VSO to Azure.

The answer was to

Add another MSA account to the VSO instance, one unknown to our AD even as an alias e.g. user@live.co.uk

Whilst repaving my Lenovo W520 I had some issues with video cards. During the initial setup of Windows the PC hung. I rebooted, re-enabled the problematic video card in the BIOS, and thought all was OK. The installation appeared to pick up where it left off. However, I started to get some very strange problems.

My LiveID settings did not sync from my other Windows 8.1 devices

I could not change my profile picture

I could not change my desktop background

I could not change my screen saver

And most importantly Windows Update would not run

I found a few posts that said all of these problems could be seen when Windows was not activated, but that was not the issue for me; it showed as being activated, and changing the product key had no effect.

In the end I re-paved my PC again, making sure my video cards were correctly enabled so there was no hanging, and this time I seem to have a good Windows installation.