Release Management 2015 with Build vNext: Component to Artifact Name Matching and Other Fun Gotchas

I’ve been setting up a couple of VMs in Azure with a TFS demo. Part of the demo is release management, and I finally got to upgrade Release Management to the 2015 release. I wanted to test integrating with the new build vNext engine. I faced some “fun” gotchas along the way. Here are my findings.

Fail: 0 artifact(s) found

After upgrading the Release Management server, I gleefully associated a component with my build vNext build, and was happy when the build vNext builds appeared in the drop-down. I have several folders that I want to use via scripts, so I usually just specify the root of the drop folder – accordingly, I selected “\” as the location of the component.

I then queued the release – and the deployment failed almost instantly. “0 artifact(s) found corresponding to the name ‘FabFiber’ for BuildId: 91”. After a bit of head-scratching and head-to-table-banging, I wondered if the error was hinting at the fact that RM is actually looking for a published artifact named “FabFiber” in my build. Turns out that was correct.

Component Names and Artifact Names

To make a long story short: the component name in Release Management has to match the artifact name in your build vNext “Publish Artifact” task. This may seem like a good idea, but for me it’s a pain, since I usually split my artifacts into Scripts, Sites, DBs etc. and publish each as a separate artifact so that I get a neat folder layout. Since I use PowerShell scripts to deploy, I used to specify the root folder “\” as the component location and then use Scripts\someScript.ps1 as the path to the script. So I had to go back to my build, add a PowerShell script that first gathers all the folders into a single “root” folder, and then use a single “Publish Artifacts” task to publish the neatly laid out folder structure. I looked at this post from my friend Ricci Gian Maria for inspiration!
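To give an idea of what that consolidation script could look like, here is a minimal sketch – the folder names (Scripts, Sites, DBs), the “FabFiber” root and the use of the build variables are illustrative assumptions based on my layout, not a drop-in script:

```powershell
# Sketch only: gather separate build outputs under a single root folder
# so that one "Publish Artifacts" task can publish the whole layout.
# Folder names (Scripts, Sites, DBs) and the FabFiber root are illustrative.
$stagingDir = $env:BUILD_STAGINGDIRECTORY
$root = Join-Path $stagingDir "FabFiber"

New-Item -ItemType Directory -Path $root -Force | Out-Null

foreach ($component in @("Scripts", "Sites", "DBs")) {
    $source = Join-Path $env:BUILD_SOURCESDIRECTORY $component
    if (Test-Path $source) {
        # Each component lands in its own subfolder under the root
        Copy-Item $source -Destination (Join-Path $root $component) -Recurse -Force
    }
}
```

The point is simply that everything ends up under one root, so a single artifact (named to match the RM component) carries the whole folder structure.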

Now I have a couple of PowerShell tasks that copy the binaries (and other files) into the staging directory, which I am using as the root folder for my artifacts. I configure the msbuild arguments to publish the website webdeploy package to $(build.stagingDirectory)\FabFiber, so I don’t need to copy it – it’s already in the staging folder.

For the DB components, I configure the copy scripts to copy my dacpac and publish.xml files, so I need two scripts with the following args respectively:
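As an illustration of the kind of arguments involved (the parameter names and paths here are made-up examples, not my actual values), each script simply takes a source and a target under the staging root:

```
# Example arguments only - parameter names and paths depend on your scripts and solution layout
# Script 1: copy the dacpacs
-SourceDir "$(build.sourcesDirectory)\FabFiber.Database\bin\$(BuildConfiguration)" -TargetDir "$(build.stagingDirectory)\FabFiber\DBs"

# Script 2: copy the publish profiles
-SourceDir "$(build.sourcesDirectory)\FabFiber.Database\PublishProfiles" -TargetDir "$(build.stagingDirectory)\FabFiber\DBs"
```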

Back in Release Management, I made sure I had a component named “FabFiber” (to match the name of the artifact from the Publish Artifact task). I then also supplied “\FabFiber” as the root folder for my components:

That at least cleared up the “cannot find artifact” error.

A bonus of this is that you can now use server drops for releases instead of having to use shared folder drops. Just remember that if you choose to do this, you have to set up a ReleaseManagementShare folder. See this post for more details (point 7). I couldn’t get this to work for some reason, so I reverted to a shared folder drop on the build.

Renaming Components Gotcha

During my experimentation I renamed the component in Release Management that I was using in the release. This caused some strange behavior when trying to create releases: the build version picker was missing:

I had to open the release template and set the component from the drop-down everywhere that it was referenced!

The Parameter is Incorrect

A further error I encountered had to do with the upgrade from RM 2013. At least, I think that was the cause. The deployment would copy the files to the target server, but when the PowerShell task was invoked, I got a failure stating (helpfully – not!), “The parameter is incorrect.”

At first I thought it was an error in my script – turns out that all you have to do to resolve this one is re-enter the password in the credentials for the PowerShell tasks in the release template. All of them. Again. Sigh… Hopefully this is just me and doesn’t happen to you when you upgrade your RM server.

Conclusion

I have to admit that I have a love-hate relationship with Release Management. It’s fast becoming more of a HATE-love relationship though. The single redeeming feature it brings to the table is the approval workflow – otherwise the client is slow, the workflows are clunky and debugging is a pain.

I really can’t wait for the release of Web-based Release Management that will use the same engine as build vNext, which should mean a vastly simpler authoring experience! The reporting and charting features we should see around releases are also going to be great.

For now, the best advice I can give you regarding Release Management is to make sure you invest in agent-less deployments using PowerShell scripts. That way your upgrade path to the new Web-based Release Management will be much smoother and you’ll be able to reuse your investments (i.e. your scripts).

Perhaps your upgrade experiences will be happier than mine – I can only hope, dear reader, I can only hope.

4 Comments

Thanks for the great posts on RM with DSC, Colin. It's helped me a great deal. You're the best resource I've found so far on this topic.

One quick question: do you have any thoughts on how Release Management could work with private-to-private deployments? One company's internal network to another?

I only have high level thoughts on this - a Pull Server in the cloud where I push releases to, while the other private end watches the public pull server for releases. How this would actually be done technically I have no idea at the moment. Do you have any thoughts on this?

As for "public to private" deployments - I would get the build to output the drops to a "public" site, as you mention (such as Azure blob storage). Install and configure Release Management at the private site and create a release that uses an "external build".

Then you can have an Azure queue (or any other notification mechanism) that runs a script or console app at the private site. You could even just have a windows scheduled task check if there is a new drop. The script will download the drop to a local share. Then you just trigger the Release using the command line.
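As a rough sketch of that scheduled-task idea (the share paths and the "latest.txt" marker convention here are made up for illustration, and I've left the actual release-trigger command out since it depends on your RM client):

```powershell
# Sketch: poll a public drop location for a new build and pull it to a local share.
# Paths and the "latest.txt" marker convention are illustrative assumptions.
$publicDrop = "\\publicserver\drops"
$localShare = "\\privateserver\drops"
$marker = Join-Path $localShare "latest.txt"

# Most recently written drop folder on the public side
$latest = Get-ChildItem $publicDrop -Directory |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1

$lastSeen = if (Test-Path $marker) { Get-Content $marker } else { "" }

if ($latest -and $latest.Name -ne $lastSeen) {
    # New drop: copy it down and record that we've seen it
    Copy-Item $latest.FullName -Destination (Join-Path $localShare $latest.Name) -Recurse
    Set-Content $marker $latest.Name
    # ...then trigger the release via the RM command line here
}
```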

@Colin - Makes sense, however there are still some manual steps in there which I'd like to avoid. Ideally it would all be automated in a similar workflow fashion to Release Management.

Also, I'm not sure how your scenario would cover the fact that the first two environments of the Release Workflow are in MY network, and it's potentially only the final two (perhaps Pre-Production and Production) which are on a different private network.

I've begun exploring Octopus Deploy for this, but I'm still really at the early stages of figuring this out.

End goal would be to have a Release Workflow which has the flexibility to deploy over multiple networks without any manual intervention aside from approvals.