At work I use ConEmu for my console; it’s a great console to work with on Windows.
To keep things tidy I have all my code on my X:\ partition.
In ConEmu I have different “Tasks” set up for different configurations of Visual Studio and
pass /Dir X:\ as one of the task parameters so that a new console’s current directory is X:\.

When running “Developer Command Prompt for VS 2017” on my work computer I noticed that the directory
it was opening in wasn’t the current directory that ConEmu was setting, but C:\Dave\Source.

...
@REM Set the current directory that users will be set after the script completes
@REM in the following order:
@REM 1. [VSCMD_START_DIR] will be used if specified in the user environment
@REM 2. [USERPROFILE]\source if it exists
@REM 3. current directory
if "%VSCMD_START_DIR%" NEQ "" (
    cd /d "%VSCMD_START_DIR%"
) else (
    if EXIST "%USERPROFILE%\Source" (
        cd /d "%USERPROFILE%\Source"
    )
)
...

As you can see, it has two chances to pick a different directory before using your current one.

In my case, I had a folder at %USERPROFILE%\Source, which was empty, so I deleted it.

The other alternative is to set the VSCMD_START_DIR environment variable for your user account to your preferred directory.
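If you go that route, one way to set it persistently for your account is with setx from a command prompt; the X:\ path here is just my drive, so use whatever directory you prefer:

```bat
REM Sets VSCMD_START_DIR in the user environment.
REM Only consoles started after this will pick it up.
setx VSCMD_START_DIR X:\
```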

Team Foundation Server 2012 XAML Build Agents do not work with TFS 2015

I discovered this fact the weekend just gone, whilst performing an upgrade to TFS 2015.3 from TFS 2012.4.

The plan was to only upgrade the TFS Server and leave the build infrastructure running on TFS 2012.
This seemed like a sound idea as I know Microsoft care about compatibility, and the upgrade was more complicated
than your usual one. I figured it would just keep working and that I’d upgrade the build agents later. Boy, was I wrong.

I may even have checked the documentation, which does not show compatibility between the two, but it isn’t explicitly called out,
so I could have glossed over it.

Look - No 2012

The problems with TFS 2012 build agents against TFS 2015 manifested as two different errors when I queued a build without a Drop Location.
Queuing a build with a drop location worked just fine.

Error 1 - Build agents not using the FQDN

The build infrastructure runs on a different domain to the Team Foundation Server.

We have tfs-server.corp.com for TFS and build-server.corp-development.com for builds.

The error, which appeared twice, was not very helpful:

An error occurred while copying diagnostic activity logs to the drop location. Details: An error occurred while sending the request.

I eventually debugged this (details later) and found out that the last task on the build agent
was trying to access tfs-server with no DNS suffix of .corp.com to publish some logs.
As a temporary workaround I added an entry to the hosts file to make tfs-server point to the actual IP of the TFS server.
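The hosts file entry was just the short name mapped to the server’s address; something like this, with a made-up IP for illustration:

```text
# C:\Windows\System32\drivers\etc\hosts
10.20.30.40    tfs-server
```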

Error 2 - the bad request

With all the steps of the build now resolving the server name, I came across the second error.

An error occurred while copying diagnostic activity logs to the drop location. Details: An error occurred while sending the request.

My debugging would lead me to see that this was caused by TFS returning an HTTP 400 (Bad Request) for the exact same step as the first error.

It was at this point I figured something was really wrong and started searching for compatibility problems. In my effort to find a KB
or update I re-checked the documentation and noticed the lack of support as well as finding an MSDN forum post from RicRak where they
solved the problem by upgrading their agents off of TFS 2012.

Solution

My solution was to upgrade our entire build infrastructure (some nine or ten servers) to TFS 2015, discovering along the way that you must
also install VS2015 on the servers to get the Test Runner to work.

It took one day of diagnosis and testing to get to the point of knowing TFS 2015 build agents would solve the problem and still build our codebase.
Another half-day was spent upgrading all the servers.

Diagnostics

How do you figure out when something like this goes wrong? TFS diagnostic logging did not provide any more information than minimum logging did.
The error only appeared at the very end of a build; it wasn’t related to a step in the XAML workflow, nor to any variables in the build process.

This was the point I gave up thinking this could be fixed by a configuration change.

Conclusion

I wish I had read something like this before I planned the weekend. I did do testing, but because testing TFS in live is risky I had most of the
test instance network isolated, and that required a lot of configuration; I just assumed this error was configuration based. Lesson well and
truly learned.

It would have been nice to see this called out more explicitly on MSDN. In my opinion these are two bugs that Microsoft decided not to fix
in the TFS 2012 product life-cycle.

On the plus side, I learned some really neat debugging skills I didn’t know before.

Remember, if you’re upgrading from TFS 2012, plan to upgrade your build agents at the same time!

Back in 2014 I wrote a UNC to URI Path Converter using ASP MVC 4 and Visual Studio Team Services with a XAML Build process template to
continuously deploy the changes to an Azure Website. This was my first Azure Website and most of it was just using the default settings from
the New Project dialog in Visual Studio, all very “point and click”.

It worked well and had an average of a few hundred page requests a week and so far, I’ve been happy with everything as it “just worked”. The other
day I wanted to add a small feature and noticed that after pushing and deploying the change that Azure was warning me XAML builds would soon
be deprecated. So, whilst I was making some changes I decided it would be a good opportunity for me to get up to date on a few new technologies
that I have not used in anger.

I planned to setup the following for the website:

Rewrite in .NET Core.

Custom VSTS Build vNext.

Deployment Pipeline using Microsoft Release Management.

Rewrite in .NET Core

My previous .NET Core app at this point was a console application, so I took this as an opportunity to get to grips with setting up a build
and a suite of unit tests using xUnit.net. Getting this working in Visual Studio was straightforward following the xUnit.net documentation, but getting
the build to run on VSTS was a bit hit and miss. I eventually settled on a mix-and-match combination of dotnet command line tools and the
Visual Studio Test Runner.

Using the VS Test step solved the problem of dotnet test not being able to run the xUnit.net tests on the build server. I kept the individual
dotnet restore, dotnet publish (site) and dotnet build (tests) steps as I wanted control over the publish. I also have a suite of deployment
tests based on the full .NET Framework, which I build using VS Build. These were the building blocks of my pipeline.

Custom VSTS Build vNext

By keeping control over dotnet publish I could package the website ready to be pushed to Azure using Microsoft Release Management. I took
the output of dotnet publish, zipped it up into an archive, and published this as a build artifact.

The build process also took the output of the DeploymentTests build, zipped it into a separate archive, and published that too.

I now had a website and a suite of “Deployment Tests” as artifacts from my build.

Deployment Pipeline using Microsoft Release Management

A deployment pipeline is where code goes through various stages and each stage provides increasing confidence, usually at the cost of extra time
(Martin Fowler: DeploymentPipeline). My pipeline was quite simple:

This process meant that the build was fast and only ran isolated fast unit tests against the code. Only then did it deploy onto a Pre-Production
server (another Free Azure Website), and run a set of integration tests against the Website via the API, if these tests passed, then I repeated
the process onto the Live website.

Using Microsoft Release Management, I was able to orchestrate this using a single Release definition, and defining two environments to deploy to.

I considered using Deployment Slots on Azure to do a deploy and then swap to the Slots after the tests passed, but Slots are only available
on the Standard pricing tier and I wanted to keep this free, so I setup another free Website instance and ran the tests on there.

I used a Variable against each Environment in Release Management to store the Azure Website Name.

These variables had two uses. The first was to keep the steps for each environment the same; I only needed to set the variable to a different value.

The second was very cool: because the variables in TFS Build and RM are actually environment variables, I could write the following
method in the code of my deployment tests:

I planned to write some user interface tests using either Coded UI or Selenium, but because the Hosted Build agents do not support Interactive Mode,
which is needed to run user interface tests, I made them conditional so they only run locally in Visual Studio. I do have a plan to get
these running in the future.

The whole process looks like this:

Conclusion

Whilst this is a massively over-engineered solution for such a simple website, it was fun to learn some new tricks and understand
how to put a release pipeline together using the VSTS and Azure platforms. I also used it as an opportunity to tidy up my resources
in Azure and consolidate all my related resources into an Azure RM Resource Group, including the Application Insights instance I use to monitor it.

The main reason for moving is that SSL gives better SEO, and my old blog was served over SSL, so I’m sure there will be some SSL links scattered about the web. It also prevents any silly public networks injecting anything into any of my pages.

I’m using CloudFlare to secure the communications from your browser to them. Thanks to Sheharyar Naseer for his excellent guide that got me up and running in no time, and to DNSimple for their excellent DNS service that made changing my nameservers a piece of cake.

I’ve been building an FSharp Dashboard by following along with this post from Louie Bacaj, which was part of last year’s FSharp Advent calendar. I have to say it’s a great post and got me up and running in no time.

If you want to skip the story and get to the FSharp and SignalR part scroll down to Changing the Hub.

One small problem I noticed was that I could not use any of the features of FSharp Core v4. For example, the new tryXXX functions such as Array.tryLast were not available.

After a bit of digging I happened across the Project Properties which were stuck on 3.1.2.1.

Turns out that the FSharp.Interop.Dynamic package is dependent on FSharp.Core v3.1.2.1.

So this turned into a challenge of how do I use SignalR without Dynamic. After a bit of googling I landed on
this page that showed Strongly Typed Hubs. So I knew it was possible…

Removing Dependencies

The first step to fixing this was to remove the FSharp.Core dependencies I no longer needed; these were:

I then just browsed through the source and removed all the open declarations.

Re-adding FSharp Core

Slight problem now: I no longer had any FSharp.Core references, so I needed to add one in.
I’m not sure if this is the best way to solve this, but I just copied and pasted these lines
from an empty FSharp project I had just created:

Getting the Context

With SignalR you cannot just new up an instance of a Hub; you have to use GlobalHost.ConnectionManager.GetHubContext<THub>. The problem is that this gives you
an IHubContext, which only exposes the dynamic interface again. A bit more googling and I found that you need to pass your interface as a second generic parameter, and you will get an IHubContext<IMetricsHub>.

I’ve recently deployed a new Azure Linux VM for hosting a Discourse instance I run and noticed that it didn’t have a DNS entry on cloudapp.net. Last time I deployed one it was instantly given one in the format server-name.cloudapp.net, but this time it wasn’t, and I had to set it up myself.

Today I’ve just published my first App into the Windows and Windows Phone Store.

You can download using the image below, if you want to check it out. It’s 100% free and no ads.

It is a simple version of the pub game Shut the Box; I have a page here with more information about the game.

This was my first attempt at a Windows application and I’ve really enjoyed the experience of building it.
I tried to use as many things that were new to me as possible, to learn as much as I could through the process. A quick list
of new things I explored whilst working on this:

Git

Visual Studio Online Kanban for planning and tracking work (up until now I’ve only used TFS 2012.4).

Working with the Windows Store was a bit “hit and miss”; for a while I could not get to the “Dashboard”
part of the site “because of my Azure account”, or so I was told. This seemed to resolve itself eventually, but
was very annoying at the time. I was not offered any explanation, only a suggestion that I should create a new Microsoft Account
to publish apps through, which I was not prepared to do.

It took three attempts to get the application through certification. First it failed because I had not run the Application Certification Kit and had a transparent Windows tile, which is not allowed.
The second failure was because Russia, Brazil, Korea and China require certification of anything that is a Game
in the store. I decided not to publish it to those markets at the moment because I wanted it out there, and figuring
out how to complete the certification seemed like too much work. I may look into it again later, but for now I am happy.

This application has been a long time coming, mostly down to my lack of free time and/or willingness to work on
it, but I’m glad it’s finally published, now to try and release some updates and add some more nice features.

If you enjoy the game, please feel free to leave me a good rating / comment in the Store.

Major Update 1-Aug-2015: Changed VisitAttributeList to VisitMethodDeclaration to fix some bugs with the help of Josh Varty.

I’m a big fan of XUnit as a replacement for MSTest and use it extensively in my home projects, but I’m still struggling to find a way to integrate it into my work projects.

This post looks at one of the obstacles I had to overcome, namely the use of [TestCategory("Atomic")] on all tests that are run on TFS as part of the build. The use of this attribute came about because the MSTest test runner did not support a concept of “run all tests without a category”, so we came up with an explicit category called “Atomic” - probably not the best decision in hindsight. The XUnit test runner does not support test categories, so I needed to find a way to remove the TestCategory attribute with the value Atomic from any method. I’m sure I could have used regex to solve this, and I’m equally sure that would have caused more problems.

I found that the syntactic analyser allowed me to feed in some C# source code and, by writing my own CSharpSyntaxRewriter, remove any attributes I didn’t want.

I started by creating some C# that had the TestCategory attribute applied in as many different ways as possible:

namespace P
{
    class Program
    {
        public void NoAttributes() { }

        [TestMethod, TestCategory("Atomic")]
        public void OnOneLine() { }

        [TestMethod]
        [TestCategory("Atomic")]
        public void SeparateAttribute() { }

        // snip... and so on, right down to...

        [TestMethod, TestCategory("Atomic"), TestCategory("Atomic")]
        public void TwoAttributesOneLineAndOneThatDoesntMatch() { }
    }
}

The CSharpSyntaxRewriter took a lot of messing around with to get right, but I eventually figured that by overriding the VisitMethodDeclaration method I could remove attributes from the syntax tree as they were visited.

To get some C# code into a syntax tree, there is the obviously named CSharpSyntaxTree.ParseText(String) method. You can then get a CSharpSyntaxRewriter (in my case my own AttributeRemoverRewriter class) to visit everything by calling Visit(). Because this is all immutable, you need to grab the result, which can now be converted into a string and dumped out.

The interesting part of the AttributeRemoverRewriter class is the VisitMethodDeclaration method which finds and removes attribute nodes that are not needed:

public override SyntaxNode VisitMethodDeclaration(MethodDeclarationSyntax node)
{
    var newAttributes = new SyntaxList<AttributeListSyntax>();

    foreach (var attributeList in node.AttributeLists)
    {
        var nodesToRemove = attributeList.Attributes
            .Where(attribute => AttributeNameMatches(attribute) && HasMatchingAttributeValue(attribute))
            .ToArray();

        // If the lists are the same length, we are removing all attributes
        // and can just avoid populating newAttributes.
        if (nodesToRemove.Length != attributeList.Attributes.Count)
        {
            var newAttribute = (AttributeListSyntax)VisitAttributeList(
                attributeList.RemoveNodes(nodesToRemove, SyntaxRemoveOptions.KeepNoTrivia));
            newAttributes = newAttributes.Add(newAttribute);
        }
    }

    // Get the leading trivia (the newlines and comments)
    var leadTriv = node.GetLeadingTrivia();
    node = node.WithAttributeLists(newAttributes);

    // Append the leading trivia to the method
    node = node.WithLeadingTrivia(leadTriv);
    return node;
}

The AttributeNameMatches method is implemented to find an attribute that starts with TestCategory. This is because attributes in .NET have Attribute at the end of their name, e.g. TestCategoryAttribute, but most people never type it. I figured in this case it was more likely to exist than another attribute starting with TestCategory. I don’t think there is an elegant way to avoid using StartsWith in the syntactic analyser; I would have had to switch to the semantic analyser, and that would have made this a much more complicated solution.

The HasMatchingAttributeValue method pretty much does what it says: it looks for the value of the attribute being just Atomic and nothing else.
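For completeness, here is a sketch of what those two helpers could look like; the original script isn’t reproduced above, so treat these as an approximation rather than the exact implementation:

```csharp
private static bool AttributeNameMatches(AttributeSyntax attribute)
{
    // Matches both TestCategory and TestCategoryAttribute.
    return attribute.Name.ToString().StartsWith("TestCategory");
}

private static bool HasMatchingAttributeValue(AttributeSyntax attribute)
{
    // True only when the attribute has a single argument that is exactly "Atomic".
    return attribute.ArgumentList != null
        && attribute.ArgumentList.Arguments.Count == 1
        && attribute.ArgumentList.Arguments[0].ToString() == "\"Atomic\"";
}
```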

Once the nodes that match are found, it checks if the number of attributes on a method is equal to the number it wants to remove; if so, the newAttributes list is not populated and the method is updated to keep its trivia, but without any attributes. This shouldn’t happen in this specific scenario, because a TestCategory on its own doesn’t make sense.

Remove just the matching attributes

If there are some attributes that do not need removing, then just the matching one should be removed. For example:

[TestMethod, TestCategory("Atomic")]
public void OnOneLine() { }

When the visitor reaches the attributes on this method, it will populate the newAttributes list with just the attributes we want to keep, and then update the method so that it has just the remaining attributes and its trivia.

Conclusion

Using Roslyn was a bit of a steep learning curve to start with, but once I found out what I was doing, I knew I could rely on the Roslyn team to have dealt with all the different ways of implementing attributes in C#. That didn’t stop me from finding what appears to be a bug, causing me to re-write bits of the script and this post, and some more edge cases when I ran it across more than 500 test classes.

However, if I had tried to use regex to find and remove some of the more complicated ones, and deal with the other edge cases, I’d have gone mad by now.

The TFS Global List is a Team Project Collection wide entity and, to the best of my knowledge, requires someone to be a member of the Collection Administrators group to be able to update it – there is no explicit group or permission for “Upload Global List”. This can be quite a problem if there are a number of Lists within your Global List that are updated frequently by the users of your Collection.

Your current options are either:

Ask the Collection Administrators for every little change (and complain if they take too long, they have a holiday, etc.)

Keep adding people/groups to the Collection Administrators group (and hand out way too much power to people who don’t need it).

We went for option #1, then option #2, until neither was sustainable.

Building the Template

To build the template I started by copying the DefaultTemplate.11.1.xaml file that ships with TFS 2012, stripped out all of the activities and process parameters that were no longer required, then added a new activity to invoke the witadmin command line tool to import the Global List.

I won’t go into detail about how I changed the activities because there were quite a lot of steps, but it is quite straightforward. A quick overview: remove anything to do with compiling code, running tests or gated check-ins, then add a new activity to invoke the witadmin command line. It will probably be easier understood by looking at the finished template, available to download at the end. I may write a follow-up post with the exact details.
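The new activity is essentially a wrapper around a single witadmin call; the collection URL and file path below are examples, not the real values from our server:

```bat
witadmin importgloballist /collection:http://tfs-server:8080/tfs/DefaultCollection /f:GlobalList.xml
```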

Using the template

To use the template you need to have the Global Lists file checked into Version Control. You can follow the advice in the Wrox Professional Team Foundation Server 2013 book and create a Team Project for all your Process artefacts, or if you just want to keep it simple:

Check the file into its own folder somewhere in source control; in this example we will use $/TFS/GlobalList/GlobalList.xml (having it in its own folder helps).

Once you have the template downloaded, you need to check it into Version Control, usually $/MyTeamProject/BuildProcessTemplates/.

Create a new build definition.

Fill in the General tab however you like.

In the Trigger tab select Continuous Integration.

In the Source Settings tab select the folder with your GlobalList.xml as Active ($/TFS/GlobalList/).

In the Build Defaults tab, select “This build does not copy output files to a drop folder”.

In the Process tab we need to do a few steps:

To install the template, click Show Details:

Click New… and browse to the template we checked in ($/MyTeamProject/BuildProcessTemplates).

Fill in the sections as follows:

I didn’t know the best way to get the URI of the Team Project Collection, so I made it an argument you need to enter.

If you are not using VS2012 on your build server, you will need to find a way to get witadmin.exe on there and then update the path to the location.

Once the above has been completed you should be able to queue a new build using the new definition and check the output to see if the global list was successfully uploaded. Just open the build and check the summary; if everything went well you should see the following:

If there were any problems, check the “View Log”; the build uses Detailed logging, which should include enough information to figure out what went wrong.

Conclusion

I’ve now stopped worrying about having to update the global list for everyone who needs something new adding, and I’m no longer afraid of having lots of people in the Collection Administrators group who really shouldn’t be. I can just grant check-in permissions to the folder that contains our global list and leave people to it.

Download

This is my last post on WordPress and first post on Jekyll GitHub Pages.

I’ve decided to abandon WordPress running on Azure Web Apps for a simpler static blog, using Jekyll to convert Markdown to static content hosted on GitHub Pages. I’ll go into the process I went through in a future post.

This post is here as a marker of when I moved everything over. I’ve tried to get the permalinks in Jekyll to match the ones in WordPress, though this breaks any that were from my brief stint on DasBlog. As far as I know everything else should just be the same, including the RSS feed on /feed.