Tuesday, November 19, 2013

To date, development progress on NuGet.PackageNPublish has been pretty erratic. Fundamentally, this is because it's my pet project, and changes / fixes / enhancements have only been forthcoming when and if I personally need them. As we approach 13,000 downloads of the tooling (thanks, all), I figure I ought to come up with some kind of roadmap - so here are my thoughts.

v0.8

This will bring the tooling up to date and use v0.8.0.2 of the NuGet.PackageNPublish NuGet package, fixing the following issues:

I'm basically there with v0.8 - just final tidy-ups and the release notes to do - tho' I've been promising that release for a good month now - sorry.

v0.9

This is the time to shake things up a bit - specifically, I'm planning on dropping support for the VS2010-flavour VSIX build project, replacing it with a VS2013-flavour one. This will mean that the tooling can only be built on VS2013 and above, but I don't think that's too big an ask.

Also, I have a mad plan to add two new project templates to the tooling:

Octopus Deploy Package

This project template provides basically the equivalent functionality to adding OctoPack to a web project, but broken out into its own packaging project.

Octopus Deploy Azure Package

This project template is what I demonstrated (manually) at DDD East Anglia and DDD North 2013 this year - a packaging project that's designed to take the output from an Azure cloud project and package it for OctopusDeploy.

v1.0

This is the big one - I plan on finally fixing the single most troublesome issue with NuGet.PackageNPublish:

Monday, October 21, 2013

So it's the morning after the DDDNorth before, and for a change I'm going to write my retrospective with the event fresh in my mind.

Update & Admission: It took more than that morning to finish this post!!

I'd flown up to Newcastle the night before, and stayed the night in The Roker Hotel along with a few of the usual suspects, so in spite of a fairly late pre-DDD night in the bar, a bracing early morning run along the front and a good Full-English breakfast (with proper Black Pudding) ensured I was refreshed and ready for the day.

Catching a lift over (thanks Dave) got me to the venue bang on time.

Early Morning on a Saturday

The venue for DDDNorth was the University of Sunderland campus at St Peters - using the library and its cafe as the central meeting point, sponsor showcase and general chat area, with sessions in that building, the David Goldman Informatics Centre and the Sir Tom Cowie Lecture Theatre.

The speaker room was in the Informatics Centre, so that was where I headed first, just in time to hear the speaker briefing and grab my (very nice, and Sage-sponsored) purple speaker's t-shirt. Pleasantries over, it was straight over to the library building for my first session. This time, I was much more comfortable with my presentation and equipment, and was on after lunch, so could properly enjoy DDDNorth as a delegate as well as a speaker.

First up was Phillip Trelford.

F# Eye for the C# Guy

Given that F# was added to VS2010, I'm amazed I've not even written "Hello World" with the language, so this introduction was always going to be new information to me. Phil's presentation style is one full of energy, and I think he bagged the first ponies of the day in about the first 5 minutes, making the point that whilst it's often considered a language for "Financial / City" types, the 'F' is actually for FUN.

"F# is a statically typed, functional first, object orientated, open source .Net language, based on OCAML and available in VS and Xamarin studio."

Phil then went on to compare a 40-50 line C# implementation of an immutable POCO with first a 12-line and then a ONE-line F# implementation. He then followed this up with examples of unit testing F# using effectively plain-English tests via the TickSpec package, and "metaprogramming" samples using the F# quoting syntax ( <@ some code @> ) and the Unquote NuGet package.
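To give a flavour of that comparison, the one-line version is an F# record - the compiler gives you everything the hand-written C# class had to spell out. (This example is my own invention, not Phil's actual demo code.)

```fsharp
// One line: the compiler generates the constructor, read-only properties,
// structural equality/comparison and a sensible ToString - everything a
// 40-50 line immutable C# POCO would implement by hand.
type Person = { Name : string; Age : int }

let alice = { Name = "Alice"; Age = 30 }
```

The C# equivalent needs a constructor, getter-only properties, and overrides of Equals and GetHashCode to get the same behaviour.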

"F# code is consistently shorter, easier to read, easier to refactor and contains fewer bugs than its C# equivalent."

I was fairly skeptical when Phil stated that 30,000 lines of C++ code (for an unnamed financial system) had been replaced by only 200-300 lines of F# - until he demonstrated a fully functional spreadsheet written in less than that!

Finally, he showed some of the "even funner" aspects of F# - including Pacman and Mario written in F# and cross-compiled to Javascript using Funscript. I think that the F# eco-system is one to watch, and am very glad to have had such an engaging introduction from Phil - even without his "F# language jobs pay more" slide!

Scaling Systems: Architectures that Grow

Kendal Miller's session was a no-code, all-content affair that was entertaining, enlightening, and provided a 4-point guide to the issues surrounding scaling enterprise-level systems. The fact that I ended up with over 6 pages of notes is testament to how well he engages his audience.

First up was a mantra to code by:

"Time you spend making your software scale is time you're NOT spending delivering functionality."

Kendal then described how most applications (web or otherwise) will fail to scale out of the box, but when considering scalability you need to target the lowest practical numbers for response and loading, as "scaling costs REAL money". The techniques usually applied to the problems of scalability he described as only "tactics" - understanding the key principles first is more important.

Kendal then described the four key factors to scaling - three that enable you to scale, and one that throws a spanner in the works of the other three - Kendal's ACD/C factors:

A - Asynchronicity

Do work - just NOT in your critical path. Defer it to later, or do it ahead of time.

C - Caching

Don't do any work you don't have to - the fastest query you can ever run is the one you only ever have to run once.

D - Distribution

Share the work between as many workers as you can - this is the easy route.

and the kicker

C - Consistency

Agreeing on the level of consistency REQUIRED is the compromise that has to be made.

Kendal then went on to explain these all in much more detail, with great anecdotes to support his opinions - including describing how Amazon do basically NO work at all at the point of you placing an order, and how he once lost an entire tractor in spite of what his inventory system said.
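The caching principle - never doing work you don't have to - is the easiest of the four to sketch in code. This is my own toy illustration of the idea, not anything Kendal showed:

```javascript
// Toy memoising wrapper illustrating "C - Caching": the expensive function
// runs once per distinct input; repeat calls are served from memory.
function memoize(fn) {
  var cache = {};
  return function (key) {
    if (!(key in cache)) {
      cache[key] = fn(key); // do the work exactly once per key
    }
    return cache[key];
  };
}

var calls = 0;
var slowSquare = function (n) { calls++; return n * n; };
var fastSquare = memoize(slowSquare);

fastSquare(4); // computed - calls is now 1
fastSquare(4); // cached   - calls is still 1
console.log(fastSquare(4), calls); // prints "16 1"
```

The same shape scales all the way up from an in-process dictionary to a distributed cache - the principle is identical.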

Automation is not the end of the Story

Richard started by setting the scene with what he considers the minimum for build automation:

Continuous Integration

Unit Tests

Nightly FULL builds

Static analysis

FxCop, CAP.Net, SpCop, StyleCop

Signing & Obfuscation

Deployment packaging

Manual release builds

He then went on to discuss what "deployment workflow" considerations should be made - in particular who can sign off a build as acceptable and who can promote between environments.

"Your build should be a bridge to the operations team - a shared language"

Finally, Richard gave a whistle-stop tour of some of the tools that are available when taking your build process beyond the desktop, including

ALM Rangers TFS Build Best Practice

Lab Management

OctopusDeploy

MS Virtual Machine Manager 2012

ALM Rangers VM Factory

Puppet

Chef

DevOps Workbench (Beta)

Richard's talk was informative and thought-provoking, and gave pointers to some tools I wasn't aware of and will definitely investigate - and it was a good precursor to my talk on OctopusDeploy.

Lunch was the usual brown bag sandwich affair - with very nice sandwiches too - which I spent in the Library Cafe catching up with Eric Nelson and others. I had to make my excuses and leave a bit early because I wanted to check out the equipment in my room, as I was up next with

An Introduction to Octopus Deployment

This was my talk, in which I covered the basics of WHY you need automated, repeatable, controlled deployments, WHAT OctopusDeploy is, and HOW it provides a very nice out-of-the-box solution. I demonstrated deploying a web app using OctoPack to package it and the Tentacle deployment method for "local" servers, alongside the SFTP method for "remote" servers (in my case an Azure WebSite). Finally, I showed how Azure deployments require a specific flavour of OctopusDeploy package, and how my NuGet.PackageNPublish tooling could be tweaked to create it. All the resources, projects, slide decks, etc. can be found in the GitHub repository.

I must admit to a certain amount of trepidation - amongst the 30 or so audience were the inimitable Liam Westley (who kindly acted as my room monitor) and Richard Fennel. Given how attentive (and non-heckling) both were, I think it went well - and it was lovely to receive a tweet the next day from another attendee saying that they would be using OctopusDeploy soon because of my talk.

Update: The feedback scores are in - and very positive. I promise to talk louder next time tho'!

Finally, I moved just into the next room to hear Liam Westley talk about

Event Store

Event Store was created by Greg Young (of CQRS fame), and is an open-source document database written in C# with an embedded Javascript v8 engine that can be installed in single-node or High-Availability modes.

Liam quoted Jeff Atwood, saying

"OR Mapping was the Vietnam of Computer Science"

and went on to describe how metadata within a software system can often be as important as, or even more important than, the data on which the system operates - and yet it is most often lumped together with that data in some kind of relational store.

Event Store addresses this by providing a tighter focus - it's create & read ONLY; there are NO updates and NO deletes - the data is immutable in perpetuity. It provides a simple, performant ReSTful API over HTTP or TCP using AtomPub as the representation. As such it's designed to be cached, and provides automatic versioning and an implicit CQRS / message queueing architecture. "Indexes" (and you have to use that description lightly) are provided with data projections implemented using a Javascript-derived domain language.

Liam then went on to demonstrate how to set up Event Store (including the oh-so-important

netsh http add urlacl url=http://*:2113/ user=<serviceUser>

to enable access to the Event Store service). Next up were demonstrations of projections against the DDDNorth agenda data that showcased how data can be transformed into "streams" of filtered, sorted or aggregated derived data - which is where the power of the software is really manifest. And not forgetting to start Event Store with the --run-projections=ALL option.

All in all, Liam managed to pack a lot of demos of a new and exciting addition to the software developer's toolkit into a short time - yet more to investigate.
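At their heart, Event Store projections (the fromStream(...).when({...}) DSL) are just a fold over an immutable event stream into derived state. A real projection runs inside Event Store itself, but the pattern can be simulated in plain Javascript - the event shapes below are invented for illustration, not Liam's actual demo data:

```javascript
// Simplified stand-in for Event Store's projection model: fold a stream
// of immutable events into a derived, aggregated state.
function project(events, handlers, initialState) {
  return events.reduce(function (state, event) {
    var handler = handlers[event.type];
    return handler ? handler(state, event) : state; // ignore unhandled types
  }, initialState);
}

// Hypothetical agenda events, aggregated into per-room session counts.
var events = [
  { type: "SessionScheduled", room: "Library" },
  { type: "SessionScheduled", room: "Informatics" },
  { type: "SessionScheduled", room: "Library" }
];

var state = project(events, {
  SessionScheduled: function (s, e) {
    s[e.room] = (s[e.room] || 0) + 1;
    return s;
  }
}, {});

console.log(state); // prints { Library: 2, Informatics: 1 }
```

Because the events are never updated or deleted, the derived state can always be rebuilt from scratch by replaying the stream.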

Close

The day ended with the usual thanks, swag and farewells - including Sage giving away a Surface Pro, an NDCLondon Golden Ticket (won by Dave!) and Microsoft giving an MSDN license. The young lad who won the latter was a student at Sunderland, and was initially nonplussed by the small package. When he asked "So what's this worth?" and was given the answer, it was a joy to see his reaction - first shock at the monetary value, then what looked like abject terror at the value of the associated licenses, and then finally the slow realisation that he'd been given the tools to really make the most of his calling to software development. I could think of no greater illustration of how powerful the DDD movement can be - bringing together devs from all backgrounds and levels to learn and share in our community.

Andy Westgarth had once again provided a perfect day, marshalling a great array of sponsors, a great venue and a great line-up of speakers. The crowd rounded it off by giving him a rousing ovation - absolutely deserved. Roll on next year - somewhere in the North West.

Tuesday, October 08, 2013

TLDR: NuGet.TfsBuild is a new NuGet package that works with NuGet Package Restore so that private (protected) package repositories can be used with TF Build Services.

A couple of months ago, as an experiment at work, we set up a real project on the cloud-hosted TFService. The goal was to see whether the entire project could be run in the cloud - from work item management, source code control, builds and automated testing.

Because this was a "REAL" project, we were leveraging NuGet packages from our internal NuGet server - ones that contain proprietary code and couldn't just be shoved onto a public NuGet server. So the packages had to be sourced from a private, protected NuGet server - specifically a private MyGet feed created for the purpose.

But that's where we hit a bit of a snag - we wanted to use TF Build Services to avoid having any specific build server (virtual or physical), but that means not being able to configure additional NuGet package sources. You'd think that you could just add the package source to the nuget.exe.config that gets checked in when Package Restore is enabled - the problem is that the credentials for a private package source are encrypted in the file using a key that's specific to the specific user on the specific machine that's doing the build. And that would never work with TF Build.

Our solution - a NuGet package that adds an additional build step to the project that reads the package source and credentials from the MSBuild parameters - i.e. from the build DEFINITION rather than either the source or machine configuration. The upshot - it just works. The private package source is configured at the start of the build, just before Package Restore kicks in for the first project being built (usually your "primary" project).

So today I'm very pleased to announce that Landmark Information Group (http://www.landmark.co.uk / http://twitter.com/LandmarkUK) have released this little helper package on NuGet.org, with the source code released under the Apache 2.0 license on GitHub.
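The gist of the approach - an extra build target that registers the feed before package restore runs, taking the feed URL and credentials from MSBuild properties supplied by the build definition - looks something like this. (This is an illustrative sketch only: the property and target names here are my own inventions, not those used by the actual NuGet.TfsBuild package.)

```xml
<!-- Hypothetical sketch of the technique. The build definition supplies
     PrivateFeedUrl / PrivateFeedUser / PrivateFeedPassword as MSBuild
     arguments, so no credentials live in source or machine config. -->
<Target Name="AddPrivatePackageSource" BeforeTargets="RestorePackages"
        Condition="'$(PrivateFeedUrl)' != ''">
  <Exec Command="&quot;$(NuGetExePath)&quot; sources add -Name Private -Source &quot;$(PrivateFeedUrl)&quot; -UserName &quot;$(PrivateFeedUser)&quot; -Password &quot;$(PrivateFeedPassword)&quot;" />
</Target>
```

Because the values come from the build definition, the same source tree builds unchanged on a developer machine (where the feed is already configured) and on the hosted build agent.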

Wednesday, July 31, 2013

This one caught me today... When loading a crashdump into Visual Studio, the IDE can get its knickers in a twist and cache all the symbol files in the same folder as devenv.exe. That'd be fine, but the symbol files are cached in folders with the same name as the DLL - so the IDE creates a FOLDER named System.dll (etc.) in its own folder - and next time you try and start Visual Studio you get this...

Unpleasant, for sure. Fortunately, I found a Connect discussion that showed the problem, and it's easy enough to fix by deleting all the (obvious) cache folders from C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE. Just annoying, and I'm sure I've still got more clart to clear from the IDE directory. (sigh)

Wednesday, July 10, 2013

With no DDD South West this year, the first DDD event of 2013 was the inaugural DDD East Anglia. Held in The Hauser Forum on Cambridge University's West Cambridge site, the location was excellent for me as I have family living close by. So on a bright Saturday morning, I skirted around William Gates to park up behind Roger Needham (the buildings, that is), before joining a trickle of other early risers heading for the event.

Sign in was painless - Phil Pursglove & the team had used EventBrite to manage the ticketing, and had pre-ticketed the speakers - and then it was into the Speaker's room to drop off my bag before finding that essential first cup of coffee and a couple of very nice Danishes to start my day. Catching up with old DDD friends Chris Alcock and Michaela Murray meant I missed the speaker briefing (sorry Phil!), and had to sneak into the back of the "Welcome & Housekeeping" introduction.

For the first 10 minutes or so, it was a place of quiet work and contemplation - until Mark Rendle arrived (following a hair-raising left-rear Pirelli blow-out on the motorway the night before), and re-kindled what was clearly an on-going wind-up session with Rob Ashton that at one point had me practically in tears with laughter.

Session 2

With my presentation ready (and that one last-minute important mega-slide added to the deck), I headed into the first of the three seminar rooms to hear Chris O'Dell talk about how they have embraced continuous delivery and Kanban at 7Digital.

Chris described their development process, and how they've moved from what was once a painful waterfall process to a highly agile, fast-sprint, multi-track development process where live releases happen almost all day, every day (except after 4pm and on a Friday).

The key take-aways were:

To ensure that Tech Debt is highly visible.

To visualise tasks as soon as they are identified

To move Product Owners PHYSICALLY nearer to (or even better embedded within) the development team

To radiate metrics and visualise them - the more the better - displaying them on monitors all around the office, and continually updated to the minute.

To analyse cycle time for changes per feature and the number of features completed per sprint or per month as the prime KPIs.

Chris described how adopting this process doesn't happen overnight, in fact it took many months to bed in at 7 Digital. Pair programming and TDD with LOTS of unit tests and plenty of acceptance tests accelerated the process - and advocated inverting the "Testing Triangle", so that your process relies more on unit tests and automated integration tests than end-to-end tests.

At 7 Digital, the teams are cross-functional, but focused on a fairly small cross-section of the overall component set (7 Digital use small-API service based components), but there's a fair amount of movement between teams - allowing staff to dip into any component they find interesting or to get a broader view of the whole.

Their deployment process is entirely automated - a small fix can go from analysis to deployed in a couple of hours, and Chris emphasised that the ability to do a scripted "roll-back" is essential (although like many companies, roll-back is actually a roll-forward of a previous release at 7 Digital).

They have a blue/green flip-flop deployment process - similar to the Azure Production / Staging system - and use feature switches (where features are enabled in the code-base via configuration) rather than having code branches in source control. A vital part of the process is running smoke tests on the deployed code to verify basic functionality immediately, rather than waiting for a user to be impacted by a fault.
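A feature switch in its simplest form is just a configuration-driven branch in code instead of a branch in source control. 7digital's actual implementation wasn't shown; this is a minimal sketch of the idea in plain Javascript, with invented names:

```javascript
// Minimal feature-switch sketch: features ship in the code-base but are
// toggled via configuration, so unfinished work can go live disabled.
var featureConfig = { newCheckout: false, recommendations: true };

function isEnabled(feature) {
  return featureConfig[feature] === true;
}

function renderBasket() {
  // The switch decides at runtime which code path users see.
  return isEnabled("newCheckout") ? "new checkout flow" : "classic checkout flow";
}

console.log(renderBasket()); // prints "classic checkout flow"
```

Flipping the switch in configuration (rather than merging a branch and redeploying) is what makes the roll-forward style of "rollback" cheap.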

Their acceptance tests are all Cucumber and browser based, and are the fundamental quality gate - breaking an acceptance test for a new or existing feature is guaranteed to raise red-flags.

I really liked Chris' presentation - it provided an insightful from-the-trenches view of how Kanban and continuous deployment can be made to work well, enabling a continuous flow of features being released.

Session 3

Session 3 was Rob Ashton's "Testing MVC" presentation - fast-paced to the point of almost being manic, funny, thoroughly enjoyable, and containing a good number of ponies to boot, Rob was effectively a proponent of exactly the opposite of Chris's methods, arguing against inverting the testing pyramid, and of doing "barely enough" to ship a workable product. All the same, Rob still gave a plethora of hints and tips, including:

Always write a statement of intent (feature / scenario) first.

The most important things about testing MVC are speed + feedback

PhantomJS is (currently) cool, but DO use WebDriver to run it. Selenium (currently) sucks.

Next month may be different

Avoid duplications in UI tests at ALL COSTS.

Coypu is the Capybara equivalent for .Net

For writing UI automation and abstracting away the client UI

There is no excuse for UI tests to be slow

Don't bother trying to abstract RavenDB

Actions on your model are good

Sometimes simple is absolutely good enough.

If there's logic in the controller, try pushing it into the model, or a service

With simple, facade controllers, you don't need controller tests!

Controller actions should never have more than ONE "if" statement

All your logic should be abstracted from the MVC framework for testability.

I'd not actually been to one of Rob's talks before, but he's so entertaining, and presents with such enthusiasm that I can see why he's such a popular speaker.

Grok Talks

After Rob, was lunch - the standard fare of a sandwich bag, but a very nice sandwich bag all the same - and Grok Talks.

I'm hoping he'll submit a full talk for a DDD event, as in the 10 minutes he hinted at a development pace few companies can aspire to, with data sets and data rates that would make most devs blanch, even without 6 hour feature deadlines when races are back to back.

Session 4

My final session (as an attendee as opposed to a speaker) was Ben Hall's "Startups and Minimal Viable Products". This was a no-code, all slide discussion of taking the lean startup mentality as far as it can go.

Ben is "Hacker-in-Residence" at a startup incubator, and brought a "lot of experience of failing" to his presentation. As such, it was very much about "Quicker, faster, leaner" and getting your idea off the ground... and about knowing when to can the idea without remorse.

He described the mantra of "lean startups" - namely that of going from Idea to Build to Release as quickly as possible. Until the product has been built (or at least convincingly faked), then there's no way to measure its impact and acceptance. And without measuring the success (or failure) of the product, you can't learn or make decisions about it.

With that process in place, it then becomes good to fail - but you need to fail FAST. It's no good hanging onto a failing concept just because YOU like it if it's never going to succeed.

Success, however is not about WHAT you do, but WHY you do it. Ben drew a comparison between Apple who design beautiful products (including beautiful laptops) as opposed to Dell who "just make laptops". The difference is the belief in the vision, not in the implementation.

Part of the success (or immediate failure) of a startup concept can be driven from a "Business Assumption Exercise", which provides a framework for the basic "Go/No-Go" decision. Ben talked about developing 4 or 5 initial concepts to the wireframe stage before lunch, canning four immediately, and then developing the last concept in the afternoon to decide whether it had legs or not.

This kind of velocity is frankly mind-boggling, but I can see how it gets results - the wheat is separated from the chaff at a VERY early stage, and little effort is expended unless there's a very good chance of return on the investment.

After Ben's brain-baking session it was on to the last session of the day... mine.

Session 5

My session - "An Introduction to Octopus Deploy" - was designed as a very light look at an interesting new product that makes deployment of (particularly, but not only) web apps to server clusters easy.

I've uploaded the slide deck here, but hope that the main take-away for the attendees was that

Controlled deployment is essential

Rollback matters

Octopus Deploy is a pretty good turnkey solution for deployment

Packaging your software for deployment is pretty easy with NuGet, especially with

Closing

After my talk, there was a few minutes of re-organisation of the 3 seminar rooms into one before the closing swag-out. Phil used the DDDSW method of picking feedback sheets - though not as quickly as Guy Smith-Ferrier has done in the past, so there was less of a scrum around the swag table. Redgate again gave away some great software bundles, but the mega-swag of a pair of iPad minis was definitely the icing on the cake in terms of swag.

DDD East Anglia was a great success in my opinion - good location, great talks, fabulous organisation - Phil and the team should be justifiably proud of their achievement.

Thursday, June 27, 2013

Unlike previous methods, with the latest release of the BootCamp software, installing drivers for the Apple Keyboard, Wireless Magic Mouse, and Wireless Magic Trackpad couldn't be easier, as all the driver installers are nicely placed in a folder and are ready to go.

The downside (if you consider it as such!) is that Boot Camp 5 only supports 64-bit Windows. For 32-bit Windows installations you'll need to revert to an older Boot Camp version and a more complex installation process.

The OTHER downside is that the "stuttering mouse" bug still appears to be there for me at least. :(