Continuous Integration the Microsoft way!!! – level 200

Years of developer experience have produced a lesson the entire industry is paying attention to: leaving integration to the end of a project is a bad idea! It’s been proven a bad idea time and time again. Integrating often keeps errors small and manageable. In fact, many teams (mine included) integrate after every code check-in. The idea of continuous integration is to make integration a normal part of your software construction process. The core of continuous integration (abbreviated “[ci]”) is to feel the pain enough to make it go away. The first pain is that integration takes a LOT of time. Solution: automate the process. There are a lot of steps to integrating all the parts of a software system, so we automate all of them. This yields a living artifact called a “build script”. The build script is executable and runs every time someone commits a change to the codebase. If a change to the codebase produces an integration defect, the build will fail, and the team will become aware of the new defect immediately (not weeks from now). The build script also helps refine the software process by identifying pain points and pushing the team to solve them.

A very popular [ci] combination of tools is [ccnet] for the build server (build reports, stats, and notifications), [nant] (an xml notation for the build script itself), [nunit] (for automated tests that run as part of the build), and the code compiler. (Note, in Java, the parents of all these tools make up the popular combination: [cruisecontrol], [ant], [junit], and the java compiler).
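To make the combination concrete, here is a rough sketch of what a ccnet.config for this stack might look like. All names, paths, and the repository URL are hypothetical, and element details vary by CC.Net version, so treat this as an illustration rather than a drop-in config:

```xml
<cruisecontrol>
  <project name="MyApp">
    <!-- Poll source control; any commit kicks off an integration build. -->
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
      <workingDirectory>C:\builds\myapp</workingDirectory>
    </sourcecontrol>
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- Delegate the real work (compile, NUnit run, etc.) to the NAnt script. -->
      <nant>
        <baseDirectory>C:\builds\myapp</baseDirectory>
        <buildFile>myapp.build</buildFile>
        <targetList>
          <target>ci</target>
        </targetList>
      </nant>
    </tasks>
    <publishers>
      <!-- Merge NUnit's XML results into the build report, then log it. -->
      <merge>
        <files>
          <file>C:\builds\myapp\results\*-results.xml</file>
        </files>
      </merge>
      <xmllogger />
    </publishers>
  </project>
</cruisecontrol>
```

The division of labor is the point: CC.Net only watches the repository and reports, while the NAnt script owns the actual build steps, so the same script runs identically on a developer box and the build server.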

With .Net 2.0, Microsoft has changed the way Visual Studio projects are structured so that they actually form an [msbuild] build file. [msbuild] is Microsoft’s answer to [ant] and [nant]. The basic capabilities are the same (xml declarative build tasks, runnable from the command line). [mstest] is Microsoft’s answer to [nunit]. Unfortunately, [mstest] isn’t available unless you purchase Microsoft’s Team Test IDE. Microsoft doesn’t have an answer for [ccnet].

From reading Bil’s experience report, I am very glad that I saved my team A LOT of wasted time. My team did not dabble with converting from [nant] to [msbuild] or from [nunit] to [mstest]. Here is my thought process:

Our process works ([ccnet], [nant], [nunit], compiler).

[ccnet] stays. I don’t see an alternative that is compelling, and [ccnet] is not causing me pain.

[nant] stays. I see [msbuild] as a comparable alternative but with no compelling advantage over [nant] to force a conversion. [nant] is not causing me pain.

[nunit] stays. I see [mstest] as a subset of NUnit’s functionality; therefore, there is no compelling reason to switch. [nunit] is not causing me pain.

compiler: Switched to [msbuild] from the [nant] <solution/> task out of necessity. The [nant] <solution/> task doesn’t handle Visual Studio 2005 solutions well, and it caused me pain. Moving to an <exec /> task calling msbuild.exe gave me exactly what I needed. It’s been working for 6 months without further attention.
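For reference, the <exec /> workaround looks roughly like this (the framework path and solution name are placeholders; adjust for your machine):

```xml
<!-- Shell out to msbuild.exe instead of using NAnt's <solution/> task,
     which predates the VS 2005 solution format. -->
<target name="compile">
  <exec program="C:\Windows\Microsoft.NET\Framework\v2.0.50727\msbuild.exe">
    <arg value="MyApp.sln" />
    <arg value="/t:Rebuild" />
    <arg value="/p:Configuration=Release" />
  </exec>
</target>
```

Since msbuild.exe already knows how to build the solution exactly as Visual Studio does, delegating the compile step this way sidesteps the limitation entirely.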

For context, here are some of the other practices on my team:

Story wall of post-it index cards (story cards) for user stories as well as bugs.

[ddd].

OR mapping with [nhibernate]

Model-view-controller Winforms UI

Extensive automated test coverage.

Automated configuration management.

Automated database change and upgrades.

In this environment, we don’t have any time to waste. We have to solve problems and move on. We estimate in ideal hours, not days. Anything that causes pain is either fixed or kicked out.

———————————————

After all this, what does it have to do with Continuous Integration the Microsoft way? I propose that this is the way to do it with .Net. It works well, and teams all across the country do it in a very similar way. From reading Bil’s experience report on attempting a mix of tools in the Team Suite stack, all he experienced was pain. This isn’t the way to make tools.

To Microsoft: you have to do better (I know you can). If by some odd chance these tools integrate well and we just don’t know how to do it, please advertise the documentation that shows how to use them seamlessly in an automated, command-line environment on a clean build server. We all learn by example, so if teams within Microsoft are doing interesting things with [ci], please let us know, because right now we don’t know how to get it to work.

To the rest of the industry: Do what works and evangelize it. I’ve spelled out what is working well for my team, but there are plenty of other solutions to the [ci] problem. If your team is doing CI with another mix of tools, please tell us about it!


The only additions we have implemented are NCover to measure code coverage and a custom report that grabs feeds from FogBugz (bug-tracking software) and compares logged bugs to code coverage. Implementing the first is a no-brainer.

The second is intended to provide us with information about the “optimal” code coverage percentage relative to the number of defects found in the software as we do our releases. This is intended to decrease the ambiguity regarding what’s feasible (100% code coverage is obviously not always feasible and suffers from diminishing returns) and what we _should_ be shooting for.

Eric A.
3:42 am on February 12, 2007

My team is currently doing CI with MS tools. Because we’re a Gold Partner, we have Team Foundation Server and more than enough seats for Team Developer Edition IDE to go around. The pain has come with integrating tests into the automated build.
Team Test IS available with the Developer SKU. What’s not available is the ability to edit test lists. So, for the longest time, only one .DLL containing tests could be linked with a build. ICK! Eventually, through the wonders of reflection and an open-source bit of code that we found (thank you, Google), we were able to reverse-engineer the test lists.

GUI testing has been another issue. We finally broke down and forked over the (high) cost to get a Tester Edition SKU. So, we’ll be able to generate Web GUI tests with that. We just got it however, and I can’t report on the Web GUI testing yet. As a side effect, we can now edit test lists.

For CI reporting (build status), we had to write our own tool. We now have a systray icon that reports build status. But, I agree, this isn’t something that we should have had to write.

Overall, I’m not sure that much has been gained by using Team System. The interface for tracking work items is clunky at best. The team has found it to be more of a hindrance than an asset.

“Story wall of post-it index cards (story cards) for user stories as well as bugs.”
So, how are you going to store this vital information about WHY your code is structured the way it is for the people who will have to MAINTAIN the project over the next couple of years, when you have all moved on to other projects?

Is someone typing these all into some system, so that when someone comes next year with “I have this and this bug”, you already know you have fixed it?

Or is that something you don’t care about because ‘the code is the documentation’ ?

What surprises me a lot is how little agilists seem to think about the post-release phase, when ‘change’ isn’t going to be easy, and every information element used to create the system has to be there to make changes without breaking anything already in the field. Post-it storyboards don’t last very long, aren’t searchable, and don’t describe WHY a piece of code inside your system is done the way it is done, so people who have to make changes, for example a year after you left the project after release, can actually MAKE that change properly without wading through mountains of code.
——–
About the tools: you shouldn’t bend tools to match your way of working. You should bend your way of working to the tools you’re using. If that’s not productive, use different tools. So if TFS can’t handle your style of writing software, use something else. Why is that so hard to grasp? TFS isnt’ a TDD toolset. You can perhaps tweak it a little here and there, but in short, it doesn’t have the tools on board to make it a breeze. So why bother with TFS? Live’s too short to mess with tools who aren’t matching the way you want to work.

Frans, every agile project should have a set of unit tests that probe every single dark corner of the application. Using TDD helps the code take shape from the tests, with the tests describing what you expect the code to do. Post-release, these act as a regression suite, so anyone new to the project can make changes and then use the tests as a buffer. If the tests need to describe a new feature, you change the tests so they describe that change, and your TDD cycle starts again. The CI process exists as well to make sure the code complies with the suite, so a commit does not result in a faulty product.

Agile is about focused teams whose developers understand the problems and mindset of the others writing code. Post-release, a product may stay within a team, moving to a new iteration. Any new members of the team will soon get up to speed with the code; backed up by the test suite, they can make changes without the risks you describe.

I agree that a tool needs to fit your needs. Let me also say that while NUnit may meet your needs, it has limitations of its own. Depending on what, how, and where you are testing, you may need other tools instead of, or in addition to, it.

Frans,
Good questions. Let me address them one by one:
“So, how are you going to store this vital information WHY your code is structured the way it is. . .”

I did not talk about much in this post other than tools. My team has settled on using a wiki to store our living documentation. Currently, we’re using VeryQuickWiki, a Java JSP wiki with very simple functionality: http://www.vqwiki.org/

We have tables of design data, digital pictures of whiteboard design drawings, and even regular documentation outlining important information about the project. As we change the software, documentation becomes wrong or obsolete, and we’ve found that a wiki helps keep the information current. I agree with Andy that our test suite does provide a sort of documentation about how we expect small pieces of code to work.

As far as what the software is supposed to do, our product manager maintains a roadmap that turns into a feature list as we progress. We’re currently piloting XPlanner to maintain this information, but for a small team, an Excel spreadsheet might suffice.

“Or is that something you don’t care about. . .”
We do care about it, and we are very disciplined in our process. In fact, if we see two bugs that are related, we look for places in the app (or in our process) that are broken, and we fix them.

“Post-it based storyboards aren’t very good in lasting very long. . .”
Agreed. The storyboard is strictly an artifact for the current iteration. After the iteration has passed, there aren’t any cards left except in the done, done, done column. Maintenance information has to be stored elsewhere. We keep ours on the wiki, although our project is ongoing. When the project ends, we will have to package the documentation more formally and archive it.

sergiopereira
12:57 pm on February 12, 2007

Nice post, and very timely for me in particular. Regarding your note to MS: why do we insist on begging/waiting for an MS solution to all our development problems? The .Net development community (I hate that expression, BTW) will never move forward fast enough if the majority of developers are implementing their process on top of MS tools.
What is the problem with sticking with NUnit, NAnt, NDoc, N*.*? My note to MS would be: “Do not hurt the community tools by shipping half-baked toys (re: NDoc); instead, participate in and nourish these free initiatives.” That would never hurt MS; it can only help.

Sergio,
“. . . why do we insist in begging/waiting for a MS solution for all our development problems. . .”

To answer your rhetorical question: we can’t wait. At least in my shop, we have to solve problems today. We can’t afford to wait for the next release. I also think that relying on a single vendor for development tools is self-limiting.

There is a CI server built to work with TFS called Automaton. You can find it here: http://www.codeplex.com/automation We are just now starting to look at it, but it looks like our CC.Net replacement for our TFS environment.

Raymond,
Thanks for that. Would you mind sharing how you got everything integrated? It sounds like you are using mostly Team Suite tools. Have you integrated MSTest? Did you use other tools before, or did you start CI from scratch with Team Suite?

I use the TFS Test Manager to set up my tests (across multiple projects), and do have to manually check out the build file to add any new test lists, but it is pretty easy.

I also run nightly builds, and built a tool to automatically delete any code drops that are more than 10 days old (I don’t have to do this; it’s just nice so I don’t have to manually delete old builds from TFS as they build up over time… pardon the pun).

Have you tried JetBrains’ ReSharper? It’s probably the only thing that makes me as productive developing in .Net as I am in Java using IntelliJ or Eclipse – it introduces many, many more refactorings, much tighter integration with NUnit, and dozens more features. Joe White is currently writing about it on his blog: http://excastle.com/blog/archive/2007/01/31/13141.aspx

I’d also recommend you introduce a second stage to your CI build that runs SharpRobo (for WinForms) or Selenium/Sahi (for Web apps) functional tests once all the NUnit tests are done. You can probably dedicate a separate CI box to this, since it takes much longer to run than the unit test suite.

The way we usually do this on our projects is that the QAs have a functional test ready and waiting when a dev pair completes a story. Essentially, besides all the unit tests, we also have one functional test script per story.

Our builds usually have four stages:
– the pre-checkin local build on the developer box (5-7 mins max: all unit tests + basic smoke tests using a functional testing framework)
– the stage 1 CI build on a CI server box (5-7 mins max: same as the local build, but run in as close to a production environment as possible)
– the stage 2 CI build (30-40 mins max: all unit tests, all happy-path functional tests)
– the stage 3 CI build (3 hours upward: all functional tests, assuming one script per story)

These stages are structured so that it becomes less and less likely that something might have broken the build as you progress from stage to stage. The durations I’ve mentioned are the average times for each of the stages across all ThoughtWorks’ projects. Of course, you may not start with all four stages – you can add them on as the project progresses and the number of tests pile up.
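One way to wire up the later stages, assuming CC.NET as the build server, is to chain projects with a projectTrigger so each stage fires only when the previous one succeeds. The project and target names here are made up for illustration:

```xml
<!-- Stage 2 runs only after a successful stage 1 build of the same codebase. -->
<project name="myapp-stage2">
  <triggers>
    <projectTrigger project="myapp-stage1">
      <triggerStatus>Success</triggerStatus>
    </projectTrigger>
  </triggers>
  <tasks>
    <nant>
      <buildFile>myapp.build</buildFile>
      <targetList>
        <!-- Longer-running happy-path functional tests live in their own target. -->
        <target>functional-tests-happy-path</target>
      </targetList>
    </nant>
  </tasks>
</project>
```

Because each stage is its own project, a failure in the slow functional suite doesn’t block the fast feedback loop of the earlier stages.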

Although I understand why you wouldn’t convert a bunch of existing NAnt scripts to MSBuild “just because”, people just starting with CI would probably be better off working with MSBuild directly (that’s what we’ve done for the last 2+ years, along with CC.NET and NUnit). There’s not much reason to introduce an additional tool unless you really have to.

Kurt Christensen
5:27 pm on February 12, 2007

In my opinion, Microsoft continually suffocates their developer community by always feeling the need to come out with their own versions of perfectly good community tools (e.g., NAnt and NUnit). If Visual Studio had chosen to *incorporate* support for NUnit and NAnt instead of *replacing* them, MS would have been giving a pat on the back to their community, encouraging similar efforts. What do they do instead? “Hey community, build whatever you like – we don’t give a crap. In fact, if you try to do something worthwhile, we’ll intentionally squash it, just because we can.” Nice job.

You guys should know that the build side of Team System is getting a complete overhaul in Orcas (next version of Visual Studio). It’s the *only* part of Team System with breaking changes in Orcas, in fact.

So yeah, the build process is a little rough in the current release of Team System. You can do a lot of stuff with it (it is MSBuild, after all), but it takes quite a bit of hand-rolled customization to get it where you need it to be.

If you’re hoping that microsoft will magically adopt nUnit and nAnt overnight, then, well.. hope springs eternal, I guess. 🙂 In the meantime, I’d say get used to MSBuild and the microsoft flavor of test classes if you want to use the Microsoft “stack”.

My team is using TFS with the modified CI code from Vertigo. We are using MSTest, but instead of using Test Lists (which require too much maintenance) we are using a modified version of TFS Build that can run tests without a test list, found here:

Jeffrey, I’ve been using a CC.Net, NAnt, MbUnit, NDepend, FxCop, NDoc, NCover setup for several years now. I switched to a new client, and they want to maximize their investment in TFS, so anything we can get done in TFS, they want to do.

Right now everything is looking “ok”. We are just now getting into the actual setup process. I’m going to miss MbUnit, and I argued against MSTest but went unheard. Oh, did I mention this shop has zero experience with unit testing and CI? I tried and tried to push the open-source route, but they wanted nothing to do with it. As soon as we get our TFS stuff set up, I’ll report on how it all went.

Yes, they’re starting their CI processes from scratch using TFS. Luckily, MSBuild seems a good replacement for NAnt, so I should be able to accomplish everything I want from there. I’m not too sure about reporting and integration from that standpoint; it’s still a question mark. Also, because TFS maintains its own proj file for builds, you can’t simulate the build process on a developer machine unless you create your own separate build file, which is a major pain.

Again, it’s certainly more difficult (not to mention the loss of functionality from several tools) than setting up with CC.Net and NAnt, but I have no choice but to work with these guys and make it work, so we’ll see how it goes and I’ll let you know.

Nice post, and I like the comments people are posting. While you can do a complete MS solution using MS tools, I think it’s just more pain than pleasure. MSTest doesn’t produce anywhere near the results NUnit does with regard to output (at least I haven’t seen anything), MSBuild is a nightmare (just Google how to, say, copy a file), and TFS is slow and unruly when it comes to building systems.

I don’t think MSBuild is a good replacement for NAnt, at least until MSBuild stops asking you to jump through hoops to do simple things. I can show someone the NAnt task list, and within minutes (without any prior NAnt knowledge) they can create scripts that copy files, publish ClickOnce Smart Clients and Websites, and update the contents of files. The most people can get from MSBuild is to run:

MSBuild solutionname.sln

To get the entire solution to build.

It doesn’t have to be this hard or complex. Kudos to those who got it working and are happy with it. For those who have a setup that works with CC.NET/NAnt/etc., stick with it. For those who are starting out: unless you really know MSBuild and how TFS works, don’t go down that path. I can set up a brand new project on Cruise in under an hour; TFS and MSBuild and all that takes half a day. And I’ve been doing this for 5 years now.
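To be fair to MSBuild, a recursive file copy is doable once you know the item-group idiom; it is just more ceremony than NAnt’s one-liner. A sketch, with hypothetical paths:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Deploy">
  <ItemGroup>
    <!-- Gather everything under bin\Release into an item list. -->
    <DeployFiles Include="bin\Release\**\*.*" />
  </ItemGroup>
  <Target Name="Deploy">
    <!-- %(RecursiveDir) preserves the subfolder structure at the destination. -->
    <Copy SourceFiles="@(DeployFiles)"
          DestinationFolder="C:\deploy\myapp\%(RecursiveDir)" />
  </Target>
</Project>
```

The item/metadata model is powerful, but as the comment above notes, discovering idioms like %(RecursiveDir) takes a trip to the docs, whereas NAnt’s <copy> task is self-evident from its task reference.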

I’m going to follow this up with a blog post of my own, as I’ve learned a few new tricks and there’s been some improvement on the deployment/automation front (especially when it comes to deploying ClickOnce apps, as I have a 4-hour process down to 5 minutes now).

I agree with Kevin – if you already have NAnt scripts, great, don’t change them. However, if you are just starting a new build script, you might as well use MSBuild. Since it’s effectively part of the OS, your project has one less external dependency. I’m not sure what difficulties Bil was referring to (other than that it’s different from NAnt – granted); my MSBuild scripts do much more than just compile the solution.

However, I would agree that the rest of the Team System ecosystem is sorely lacking. Even though we have access to the entire Team System, my builds still use CC.NET, MSBuild, and NUnit, pulling from TFS source control. All of that works painlessly. My attempts at working with Team Build and MSTest were far too frustrating to bother.

My team is using the entire MS suite and has successfully migrated from a home-grown CC using NAnt to MS Build Server & MSBuild scripts. While it was not trivial to set up, as we started in beta, we have been able to write the tasks we need to get the job done. We have hourly builds that integrate 6 teams, and everyone gets an email notifying the offending team if their code broke the build. All in all, the process works well. We even have our MSTests running in the build. Next time I see you, I can give you more info if you need.

Koob
2:50 pm on March 12, 2007

Well, FinalBuilder is surely a product to look into when doing builds, and it integrates with CruiseControl.Net. Best practice is to let FinalBuilder handle all the external tools, e.g. NUnit and FxCop, and use CC.Net for polling source control, calling FinalBuilder, and integrating the XML logs. This produces powerful builds that are easy to debug.

It is built using .NET and can automate IE/FF using the same API. It will save you lots of time. Thought I’d share…

Josh
2:18 pm on March 29, 2007

Sounds like much of what you wish you could do is possible in AccuRev. I have even read about people using AccuRev with Team Foundation Server on cmcrossroads.com. I guess people using TFS still want the best version control system.