We have several common libraries that we’ve moved into a private, local NuGet repository on our network. It’s really helped deal with the dependency and version nightmares between projects and developers.

The other day, I checked the first project that uses full package restore and our new local repositories into our CI server, TeamCity, and noticed that the Package Restore feature couldn’t find the packages stored in our local repository.

At first, I thought there was a snag (permissions, network, general unhappiness) with our NuGet share, but all seemed well. To my surprise, repository locations are not stored in that swanky .nuget directory, but as part of the current user profile. %appdata%\NuGet\NuGet.Config to be precise.

Well, that’s nice on my workstation, but NETWORK SERVICE doesn’t have a profile and the All Users AppData directory didn’t seem to take effect.

The solution:

For TeamCity, at least, the solution was to set the TeamCity build agent services to run as a specific user (I chose a domain user on our network; you could use a local user as well). Once you have a profile, go into %drive%:\users\{your service name}\appdata\roaming\nuget and modify the nuget.config file.
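For reference, a NuGet.Config with a custom package source looks roughly like this (the share path and source names here are hypothetical; substitute your own repository location):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Our internal network share -->
    <add key="Internal" value="\\server\share\NuGet" />
    <!-- The official public feed -->
    <add key="NuGet official" value="https://www.nuget.org/api/v2/" />
  </packageSources>
</configuration>
```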

I already have TeamCity configured to dump out the results from our Machine.Specifications tests and PartCover code coverage (simple HTML and XML transforms), so another HTML file shouldn’t be difficult. When we’re done, we’ll have an HTML file, so we just need to set up a custom report to look for the file name. For more information on setting up custom reports in TeamCity, check out this post (step 3 in particular).

Let’s Dig In!

The code! The current working version of the code is available via this gist.

The code is designed to be placed in a file of its own. Mine’s named Get-TODO.ps1.

Note: If you want to include this in another file (such as your PowerShell profile), be sure to change the param() block into function Get-TODO( {all the params} ).

The script parameters make a few assumptions based on my own coding standards; change as necessary:

My projects are named {SolutionName}.Web/.Specifications/.Core and are inside a directory called {SolutionName},

By default, it includes several file extensions: .spark, .cs, .coffee, .js, and .rb,

By default, it excludes several directories: fubu-content, packages, build, and release.

It also includes a couple of flags for Html output–we’ll get to that in a bit.

param(
    # This is appalling; there has to be a better way to get the raw name of "this" directory.
    [string]$DirectoryMask = (get-location),
    [array]$Include = @("*.spark","*.cs","*.js","*.coffee","*.rb"),
    [array]$Exclude = @("fubu-content","packages","build","release"),
    [switch]$Html,
    [string]$FileName = "todoList.html")

Fetching Our Files

We need to grab a collection of all of our solution files. This will include the file extensions from $Include and use the directories from $DirectoryMask (which defaults to everything from the current location).

$originalFiles = gci -Recurse -Include $Include -Path $DirectoryMask;

Unfortunately, the -Exclude flag is…well, unhappy, in PowerShell as it doesn’t seem to work with -Recurse (or if it does, there’s voodoo involved).

Breathe, it’s okay. That snippet is mostly code comments explaining what each line is doing. It dynamically creates a regular expression string from our $Exclude parameter, then compares the current file in the iteration against the regular expression and returns the files that do not match. If $Exclude is empty, it simply returns all files.
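Stripped of its comments, the filtering looks roughly like this (a reconstruction; the variable names follow the script’s conventions):

```powershell
# Build a regex that matches any excluded directory name,
# e.g. "fubu-content|packages|build|release".
$excludePattern = [string]::Join("|", $Exclude)

# Keep only files whose full path does not contain an excluded
# directory; if $Exclude is empty, everything passes through.
$files = $originalFiles | Where-Object {
    $excludePattern -eq "" -or $_.FullName -notmatch $excludePattern
}
```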

Finding our TODOs

Select-String provides us with a quick way to find patterns inside of text files and gives us helpful output such as the line of text, line number, and file name (Select-String is PowerShell’s answer to grep).

Let’s take all of our remaining files, look at them as STRINGs and find the pattern "TODO:". This returns an array of MatchInfo objects. Since these have a few ancillary properties we don’t want, let’s simply grab the LineNumber, FileName, and the Line (of the match) for our reporting.

Now that we have our list, let’s pretty it up a bit. PowerShell’s select-object has the ability to reformat objects on the fly using expressions. I could have combined this in the last command; however, it’s a bit clearer to fetch our results THEN format them.

Here’s the big change: "Line" isn’t very descriptive, so let’s set Name='TODO' for that column. Second, let’s trim the whitespace off the line AND replace the string "// TODO: " with an empty string (removing it). This cleans things up a bit.
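Put together, the search and the reshaping look something like this (a sketch; Select-Object’s calculated-property syntax handles the renaming and cleanup):

```powershell
# Search the remaining files for the "TODO:" marker; Select-String
# returns MatchInfo objects carrying FileName, LineNumber, and Line.
$todoItems = $files | Select-String -Pattern "TODO:"

# Reshape the results: keep FileName and LineNumber, and rename the
# cleaned-up Line property to "TODO" via a calculated property.
$formattedOutput = $todoItems | Select-Object FileName, LineNumber,
    @{Name = "TODO"; Expression = { $_.Line.Trim() -replace "// TODO: ", "" }}
```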

Reporting It Out

At this point, we could simply return $formattedOutput | ft -a and have a handy table that looks like this:

Good for scripts, not so good for reporting and presenting to the boss/team (well, not my boss or team…). I added the -Html and -FileName flags to our parameters at the beginning just for this. I’ve pre-slugged the filename as todoList.html so I can set TeamCity to pick up the report automagically. But how do we build the report?

PowerShell contains a fantastic built-in cmdlet called ConvertTo-Html that you can pipe pretty much any tabular content into and it turns it into a table.

Our HTML <head> tag needs some style, and simple CSS addresses that problem. I even added a few fonts in there for kicks and placed it all in $head.

As I mentioned before, my solutions are usually the name of the project–which is what I want as the $title and $header. The $solutionName is ugly right now–if anyone out there has a BETTER way to get the STRING name of the current directory (not the Path), I’m all ears.

We take our $formattedOutput, $header, and $title and pass them into ConvertTo-Html. It looks a bit strange that we’re setting our ‘body’ as the header; however, whatever is piped into the cmdlet is appended to what is passed into –Body. It then writes our content out to the $FileName specified.
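A sketch of that final step (the CSS in $head is purely illustrative; note that ConvertTo-Html ignores -Title when -Head is supplied, so the title tag goes into $head directly):

```powershell
# The directory name doubles as the solution name for the title/header.
$solutionName = Split-Path -Leaf (Get-Location)
$title  = "$solutionName TODO List"
$header = "<h1>$title</h1>"
$head   = "<title>$title</title><style>body { font-family: 'Segoe UI', sans-serif; }</style>"

if ($Html) {
    # The piped-in table is appended after the -Body content.
    $formattedOutput | ConvertTo-Html -Head $head -Body $header | Out-File $FileName
}
```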

Knapsack… *cough*… I mean Cassette is a fantastic javascript/css/coffeescript resource manager from Andrew Davey. It did, however, take a bit to figure out why it wouldn’t work with Spark View Engine. Thankfully, blogs exist to remind me of this at a later date. 🙂

Namely, because I’d never tried to use anything that returned void before. Helpers tend to always return either HTML or a value.

I finally stumbled on a section in the Spark View Engine documentation for inline expressions.

Sometimes, when push comes to shove, you have a situation where you’re not writing output and there isn’t a markup construct for what you want to do. As a last resort you can produce code directly in-place in the generated class.

Well then, that sounds like what I want.

So our void methods, Html.ReferenceScript and Html.ReferenceStylesheet, should be written as:
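Something like this (a sketch; the file paths are hypothetical). In Spark, a line beginning with # is emitted as raw code in the generated view class, so nothing is written to the output:

```spark
<head>
  # Html.ReferenceScript("scripts/app.coffee");
  # Html.ReferenceStylesheet("styles/site.css");
</head>
```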

As part of our psake build process on a couple of framework libraries, I wanted to add in updating our internal NuGet repository. The actual .nuspec file is laid out quite simply (details); however, the version number is hard-coded.

For packages that have a static ‘major’ version, that’s not a bad deal; however, I wanted to keep our package up to date with the latest and greatest versions, so I needed to update that version element.

Since I have the full power of PowerShell at my disposal, modifying an XML file is a cakewalk. Here’s how I went about it.

GetBuildNumber is an existing psake function I use to snag the AssemblyVersion from \Properties\AssemblyInfo.cs (and return for things like TeamCity). $nuget and $nuget_spec are variables configured in psake that point to the nuget executable and the specification file used by the build script.
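The task itself looks roughly like this (a sketch built on those psake variables; the element names follow the standard .nuspec layout):

```powershell
task Update-NuGetPackage {
    # GetBuildNumber pulls the AssemblyVersion out of AssemblyInfo.cs.
    $version = GetBuildNumber

    # Load the spec as XML, swap the version element, and save it back.
    [xml]$spec = Get-Content $nuget_spec
    $spec.package.metadata.version = "$version"
    $spec.Save((Resolve-Path $nuget_spec))

    # Repackage with the updated version number.
    & $nuget pack $nuget_spec
}
```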

While NuGet alone is a pretty spectacular package management system, one of the hottest features involves the underlying PowerShell-based Package Manager Console (we’ll call it PM from here on out). It may look fancy and have a funky PM> prompt, but it’s still PowerShell.

Considering I use psake and live in PowerShell most of the day, I wanted to do a bit of customizing. Out of the box… urr… package, PM is barebones PowerShell.

Well, that’s pretty boring!

The question is, how would I customize things?

The answer? Setting up a profile!

Step 1: Finding the NuGet Profile

Just like a normal PowerShell installation, the $profile variable exists in PM.

Okay, that was easy. If you try to edit the file, however, it’s empty. You can either use the touch command to create an empty file, then edit it with Notepad, or simply run Notepad with the $profile path–it’ll ask you to create the file. 🙂

For an example, we’ll just pipe a bit of text into our profile and see what happens.
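For example, something like this (the greeting text is arbitrary):

```powershell
# Create the profile file if it doesn't exist yet, then add a greeting
# that will run every time the Package Manager Console starts.
if (-not (Test-Path $profile)) {
    New-Item -ItemType File -Path $profile -Force | Out-Null
}
Add-Content $profile 'Write-Host "Welcome to the Package Manager console!"'
```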

Now, close Visual Studio (PM only seems to load the profile when Visual Studio first starts) and relaunch it. PM should now welcome you!

Step 2: Customize, customize, customize!

Now we’re ready to add variables, setup custom paths and scripts, add some git-tastic support, and add modules (like psake). There are quite a few posts on the blog about customizing PowerShell’s environment, click here to check them out.

Remember: Since we’re using a separate PowerShell profile, be frugal with your commands and keep them "development centric". For example, I don’t load HyperV modules, Active Directory management commands, and other "non-Solution" things into the PM. Visual Studio is slow enough–don’t bog it down. :)

This is also a great opportunity to trim your profile down and break it into modular pieces (whether that be scripts or modules). Keep those profiles as DRY as possible.

A few caveats…

There do seem to be a few caveats while in the PM environment:

1. Execution, whether it’s an actual executable, a script, or piping something to more, seems to be tossed into another process, executed, and then the results returned to the PM console window. Not terrible, but something to be aware of if it seems like it’s not doing anything.

2. I can’t find a good way to manipulate the boring PM> prompt. It seems that $Host.UI is pretty much locked down. I’m hopeful that will change with further releases because not KNOWING where I am in the directory structure is PAINFUL.

I’m a proud psake user and love the flexibility of PowerShell during my build process. I recently had a project that I really wanted the build number to show up in TeamCity rather than the standard incrementing number.

On my local machine, I have a spiffy “gav” (get-assembly-version) command that uses Reflection to grab the assembly version. Unfortunately, since I don’t want to rely on the local version of .NET and I’ve already set the build number in AssemblyInfo.cs as part of my build process, I just want to fetch what’s in that file.

Regular expressions to the rescue!

Here’s the final psake task. I call it as part of my build/release tasks and it generates the right meta output that TeamCity needs for the build version. Here’s a gist of the source: http://gist.github.com/440646.
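The heart of that task looks something like this (a sketch of the approach; the gist above has the real thing):

```powershell
task Get-BuildNumber {
    # Grab the AssemblyVersion out of AssemblyInfo.cs with a regex
    # instead of loading the compiled assembly through Reflection.
    $line = Select-String -Path ".\Properties\AssemblyInfo.cs" `
        -Pattern 'AssemblyVersion\("([0-9]+(\.[0-9]+)+)"\)'
    $version = $line.Matches[0].Groups[1].Value

    # TeamCity watches stdout for this service message and adopts
    # the value as the build number.
    Write-Host "##teamcity[buildNumber '$version']"
}
```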

After a bit of tinkering, I finally managed to find the sweet spot for getting xUnit 1.5 to run (without errors) with projects targeted at the new .NET 4.0 Framework.

After initial solution conversion, if you run your tests with xunit.console.x86.exe (or the 64-bit version, I’m assuming), you’ll face the following helpful error:

System.BadImageFormatException: Could not load file or assembly 'Assembly.Test.dll'
or one of its dependencies. This assembly is built by a runtime newer than the
currently loaded runtime and cannot be loaded.
File name: 'J:\projects\Framework\build\Assembly.Test.dll'
at System.Reflection.AssemblyName.nGetFileInformation(String s)
at System.Reflection.AssemblyName.GetAssemblyName(String assemblyFile)
at Xunit.Sdk.Executor..ctor(String assemblyFilename)
at Xunit.ExecutorWrapper.RethrowWithNoStackTraceLoss(Exception ex)
at Xunit.ExecutorWrapper.CreateObject(String typeName, Object[] args)
at Xunit.ExecutorWrapper..ctor(String assemblyFilename, String configFilename,
Boolean shadowCopy)
at Xunit.ConsoleClient.Program.Main(String[] args)

What? BadImageFormatException? But the bitness didn’t change!

@markhneedham blogged last year about how to fix the problem: updating the config files for xUnit to “support” the new version. That worked then, but the version numbers have changed.

The useLegacyV2RuntimeActivationPolicy attribute ensures that the latest supported runtimes are loaded. For my projects, this seems to keep Oracle.DataAccess.dll and System.Data.Sqlite.dll (both x86 libraries) happy.

The supportedRuntime element denotes the current version of .NET 4.0 (30319 is the RTM build).
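Putting both together, the runner’s config (xunit.console.x86.exe.config, and likewise for the other console runners) ends up looking like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <!-- 30319 is the .NET 4.0 RTM build -->
    <supportedRuntime version="v4.0.30319" />
  </startup>
</configuration>
```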