Pages

Friday, 24 February 2017

There are two types of stroke: a bleed or a blockage. He had a bleed from inside the vein that made it collapse in on itself (think of one tube inside another), thus creating a blockage.

It happened in his cerebellum. That's the part of the brain right at the top of the spine, in the middle. They can pinpoint the exact spot.

Fourteen student doctors came to study him because some of his symptoms were all wrong: he had strength when he shouldn't have. Two days after getting home from rehab he moved a tonne of dirt and built a shed in the backyard.

Since then he's built a second carport...So he's doing pretty well considering it was a major stroke.

Why write about this? Glad you asked.

A few months before Dad's stroke he was complaining of difficulty keeping his balance. It started small so we thought nothing much of it, but it slowly got more pronounced.

We thought it was just an ear infection...

...it wasn't...

Like a pipe that starts leaking, only a drip at first, one day it burst next to his cerebellum when he was fixing the clothes horse by bashing it with his palms. The shockwave up his arms burst the pipe.

The cerebellum controls balance.

We didn't see the warning signs. We didn't have a clue.

To the 3,500+ of you who read this blog every month: take note of balance issues, or talk of ear infections, in your friends and relatives. Just be aware.

In Australia, the stroke fridge magnets talk about the F.A.S.T. indicator signs. There is no mention of loss of balance as a long leading indicator.

Hopefully, this information will help someone out there.

Another side effect of frying the cerebellum: After ~50 years Dad doesn't like smoking anymore. The cravings are fried too.

Friday, 4 November 2016

This week I completed the extension of our web app deployment capability from our Jenkins master (running in the Test domain) directly to the Production Support environment (running in the production domain), demonstrating that this set of mechanisms will also work for the Production environment (also in the production domain).

The domains have made things tricky.

The trick was:

1. Build on the master and restore all NuGet packages (from both nuget.org and our internal NuGet server).

2. Use the "Archive for Clone Workspace SCM" post-build step to archive the entire workspace (**/*.*). See the Clone Workspace plugin.

3. On success, kick off a downstream job tied to the slave (running on the Production Support IIS box).

4. In the first step of the downstream job (running on the slave), select "Clone Workspace" from the list of possible SCMs and select the parent project.

I also have a step that automates backing up the web application files (just in case the deployment fails for some reason)
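The backup step can be sketched like this; it is only an illustration (the directory names, archive format and the use of a POSIX shell are all assumptions — the real step is a build step on the Windows slave):

```shell
#!/bin/sh
# Illustrative pre-deployment backup: archive the current web application
# files so a failed deployment can be rolled back.
# SITE_DIR and BACKUP_DIR are hypothetical paths.
SITE_DIR="${SITE_DIR:-./site}"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
mkdir -p "$SITE_DIR" "$BACKUP_DIR"
STAMP=$(date +%Y%m%d_%H%M%S)
ARCHIVE="$BACKUP_DIR/site_$STAMP.tar.gz"
tar -czf "$ARCHIVE" -C "$SITE_DIR" .
echo "Backed up $SITE_DIR to $ARCHIVE"
```

Timestamping the archive name means each deployment keeps its own restore point.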

Tuesday, 10 May 2016

We have a lot to cover in this post. I decided against splitting this material into multiple posts as it logically belongs together, and I want the reading experience to be as simple as possible.

It may surprise some of my readers but until last week I had not used parameterised builds before.

Well I'm here to let you know that they are awesome and will rock your world, baby!

Today's post is a case of needs must.

Our application at work is growing along a number of axes:

To deploy the full solution means deploying two independent systems. In a future release this is expected to grow to three.

We are also about to deploy the current release to production with a road-map of at least three future releases.

There are three pre-production test environments that Release Candidates can be deployed to before being deployed to production.

All this spells an explosion in the number of jobs, making them difficult to manage. This won't do. Parameterised builds are the solution to this problem.

Our updated taxonomy has:

Jobs per independent system:

- Polling the repository and building every commit
- Deploying a specified labelled version (master branch, first test environment)

Jobs per independent system & release branch:

- Deploying a specified labelled version to the specified pre-production environment
- Deploying a specified labelled version to production

That's it. We now support multiple systems, environments and releases with fewer total jobs than before. Boom! With the CloudBees Folders plugin... Everything. Becomes. Clear! (Tip: if you have an existing job that has builds and is holding a workspace, you might find that you need to wipe out the workspace after moving it into a folder to get it to build correctly again. I had to do this with the TFS plugin.)

Note that we have decided that the master branch will always represent the latest and greatest; we spawn release branches at the time we first deploy to the first of two UAT environments. This allows the team to keep working on the next release and commit UAT bug fixes to the release branch. This approach borrows ideas from the GitFlow approach to branching (BTW: we are using TFS for source control only) and seems to be working well for us.

In that context, let's look at each of these job groups in more detail.

Group 1A - Build every commit

One job per branch (i.e. the master branch and each release branch) that polls its branch, runs the unit test suite and static analysis (StyleCop and FxCop). If all the unit tests pass and static analysis is within tolerances, it labels the repository ({JOBNAME}_{BUILDNUMBER}).
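For the curious, the label is simply the two standard Jenkins environment variables joined with an underscore. A tiny sketch (the example values are made up; in a real build step they would come from Jenkins's $JOB_NAME and $BUILD_NUMBER):

```shell
#!/bin/sh
# Compose the repository label the way the 1A jobs do.
# Example values stand in for Jenkins's JOB_NAME and BUILD_NUMBER.
JOB_NAME_EXAMPLE="WebApp_Master"
BUILD_NUMBER_EXAMPLE="42"
LABEL="${JOB_NAME_EXAMPLE}_${BUILD_NUMBER_EXAMPLE}"
echo "$LABEL"
```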

Post Build Notification actions for this group are:

XUnit Unit Test publication

Code coverage with OpenCover and ReportGenerator

Email notification sent to the development and test teams, including unit test results and the console log, via the Extended Email plugin. Emails are sent on Failure-Any, Unstable-TestFailures and Success

Group 1B - Deploy master branch

This is the first of our parameterised jobs. It uses two awesome plugins: the excellent Active Choices plugin and the Scriptler plugin (Scriptler is Groovy! Sorry, I couldn't resist.). Together they enable you to deploy a labelled version of a branch to an environment with two Groovy scripts. We'll look at those scripts later.

The purpose of this group of jobs is to deploy a labelled version (remember a labelled version is one that has passed the unit tests and static analysis in a 1A job) of the master branch to the test team's environment (we simply call it TEST). This is the first test environment after development and is where story validation occurs.

These jobs give development and test team members the ability to deploy to the TEST environment and no further.

Post Build Notification actions for this group are:

Email notification same as Group 1A

Group 2A - Deploy Release branches

The remaining two pre-production environments are End to End (E2E) and User Acceptance Test (UAT)

Each job in this group is essentially a copy of Group 1B, except that it:

- operates on a release branch
- parameterises the Groovy scripts to pick up the labels (created by the corresponding 1A job) on the release branch
- targets the E2E and UAT environments.

In the same way that test team members can "pull" versions through to the TEST environment they are testing in, the E2E and UAT Testers can "pull" versions through to the E2E and UAT environments they are testing in. It is the responsibility of the development team to commit fixes to the appropriate branch.

Post Build Notification actions for this group are:

Email notification same as Group 1A

Group 2B - Production Deployments

Each job in this group is essentially a copy of Group 2A except that the only available target environment is Production. I did this so that only a small group of people can deploy to production and a wider group cannot accidentally do so. At a later date I plan to consolidate Groups 2A and 2B by incorporating this (Thanks Bruno!).

The $JOB_NAME parameter is the Jenkins job name (should be self-explanatory) and $FULL_JOB_NAME includes the CloudBees folder name, like so: <FOLDER_NAME>/<JOB_NAME>. This script needs to set a parameter called VERSION_SPEC so that the TFS plugin knows to check out by label.
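I won't reproduce the Groovy scripts that live in Scriptler here; this shell sketch only shows the convention the job relies on. The chosen label is a made-up example, and the "L" prefix is the standard TFS versionspec syntax for checking out by label:

```shell
#!/bin/sh
# Derive the VERSION_SPEC value from the label the user picked.
# CHOSEN_LABEL would come from the Active Choices parameter in the real job.
CHOSEN_LABEL="WebApp_Release1_17"
VERSION_SPEC="L${CHOSEN_LABEL}"
echo "VERSION_SPEC=$VERSION_SPEC"
```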

Thursday, 25 February 2016

This is a follow-up to the last post about giving team members 1-button deployments to test environments.

Generally speaking, the deployments have been working extremely well. What was previously a 30-minute manual (and therefore error-prone) deployment that the development team had to perform (which in and of itself reduced iteration capacity) has been reduced to a 2-minute automated process that just works.

Since the last post there have been some major improvements and a minor (incremental) one. I'll talk about the incremental one first.

We decided to move the step that runs any database updates checked in since the last build to the front. We found that it is the step most likely to fail, and we therefore don't want to deploy the web application if it does. By making this simple ordering change the whole build is more transactional in nature.
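The effect of the ordering change can be sketched in a few lines (the step names are illustrative; the real steps are Jenkins build steps):

```shell
#!/bin/sh
# Fail-fast ordering: with `set -e`, a failed database update stops the
# script before the web application deployment step is ever reached.
set -e
STEPS=""
echo "applying database updates"    # the step most likely to fail runs first
STEPS="db"
echo "deploying web application"    # only reached if the update succeeded
STEPS="$STEPS,web"
```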

Is it ok to do a deployment now?

So we are running static analysis, unit tests and code coverage as part of the job that runs on every commit. Everyone thought that was great. Also, we could deploy to any nominated environment. Everyone thought that was awesome.

One small wrinkle: the unit tests were running as part of the run-on-every-commit job but were not a gatekeeper to the deployment jobs. As a result the Test team had to keep asking: "Is it ok to deploy now?"

This was an issue and needed to be resolved.

Come with me on the journey of how I spent the last two days (thankfully the start of the iteration) solving this issue before we got back into coding and required a deployment "service". Take heart, it is possible with the right mix of plugins.

Firstly some key points about the environment we are operating in:

Using Web Deploy for deployments means that all the knowledge of the remote target IIS is kept with the solution in publish profiles.

The Test team are required to be able to press a button to execute a deployment.

Due to item 1, an artifact repository and the promoted builds plugin are not much help because Web Deploy wants the workspace that has been tested as input, not the compiled binaries.

The Web Deploy mechanism (which I execute through the /deploy parameter to msbuild) works so well, I wanted to keep this in place unchanged. The problem was therefore reduced to: "How do I hand a successfully unit-tested workspace to a deploy job?"

Job 1 Overview

This is the job on which I granted users in the DEV and TEST teams Read and Build permissions. Remember to grant yourself all permissions. Its main purpose is to act as gatekeeper to Job 2, which actually does the deploy. Job 1 does this by running the unit tests.

Job 1 Configuration

Block build if certain jobs are running: On (Build Blocker Plugin)

As this job runs the XUnit tests, if it is kicked off while the main build job is running (due to a developer commit) it will fail to execute the XUnit tests and fail due to an empty results file. So we block this job until the main job finishes. Deployers just have to be a little more patient and wait the extra (up to) 10 minutes for the main job to complete.

Discard old builds

We're going to be archiving the workspace. No matter how much disk space you have, you're going to want to limit how much you keep. I currently have this set to 10, although I am thinking of reducing it to 5.

Permission to copy artifacts (Copy Artifacts Plugin)

Specify the name of job 2 as a project allowed to copy artifacts

Check out the source code from TFS

Use Update: off. This is really important for our Angular TypeScript application. If one of the developers moves or otherwise alters a .ts file in the solution, you really don't want any old .js and .js.map files hanging around on the filesystem. Get a fresh copy of the entire workspace every time.

Execute other projects

Once we have got to this point we are good to go. Execute Job 2 by specifying its name. I call mine DoDeploy__ (I actually have 3 flavours of deploy: DoDeploy_TEST_RELEASE, DoDeploy_TEST_DEBUG and DoDeploy_PILOT_RELEASE)

Job 2 Overview

This is where the rubber hits the road, where we have a green light to deploy.

Job 2 Configuration

No Source Code checkout

We want to get the archived workspace that Job 1 has deemed ok to be deployed. It's the workspace that just got unit tested successfully.

Delete workspace before build starts (Workspace Cleanup Plugin): On

To be sure, to be sure

Copy artifacts from another project

Project Name: Name of Job 1. I call mine: DeployTPS__ because this is what the user wants it to do.

Which build: Upstream job that triggered this build

Artifacts to copy: blank (We want everything baby)

Artifacts not to copy: blank

Target directory: blank (workspace is default)

Parameter filters: blank

Ready to deploy

At this point we have secured for ourselves a workspace that has been successfully unit tested and is ready for deployment. QED.

It would make life easier if Jenkins had a plugin that allowed me to archive the workspace more easily, because that's what Web Deploy prefers. It would be easier to do all these steps in one job rather than having to split them over two.

Some more observations

If you find yourself with the same environment limitations, the above setup does work. However, it is not perfect. Be aware of the following things you can and can't do.

You can:

Deploy the latest successfully unit tested code to a given environment.

You can't:

Deploy artifacts to an artifact repository (because you don't have one) and deploy particular versions of your application to a given environment. This is less than ideal and will be where I'll be working next to enhance our capability.

Thanks for reading and hopefully you have found this helpful. As always, if you have any questions, feedback or comments leave them in the comments section below. Let me know if there is a better way to skin this cat.

Wednesday, 11 November 2015

Today I'd like to talk about how I set up auto-deployment of the web application to our internal corporate test environment, as it is an important building block in the continuous delivery pipeline I have implemented.

We added some nice features to the environment that are worth telling you about. To get it running smoothly required dealing with some gotchas.

In this environment the target IIS and Jenkins are on the same machine. This is only because we were having firewall issues in a previous environment. The below approach should still work across machines as long as network permissions are in place.

Let's start by outlining what some of these nice features are and then we'll get in to the detail:

Everyone on the development team, including the testers, has the ability to build a Jenkins job that deploys the web application to the Test environment

The application IIS content directory is completely expunged every time the deployment job is executed

Redeploying with WebPublish and MSBuild

The minor assembly version number (Properties/AssemblyInfo.cs) is incremented and automatically committed back to TFS source control every time the deployment job is executed

I'll be referring to scripts that I have open-sourced here. This is where I will continue to add new scripts and make improvements.

1. Everyone can deploy

I used Jenkins' own database to create user accounts and simple passwords for each team member, and then used the Project-based Matrix Authorization Strategy (part of the Matrix Authorization Strategy plugin) to control access (Overall Read permission at the global level).

I then created a job called "DeployToTestEnv" and gave users Read and Build permissions in the job config.

While this does mean that the testers still need to ask the developers whether the code is stable to deploy each and every time they think about deploying, this minor pestering is far outweighed by the benefit of being able to DEPLOY AT WILL!

In a perfect world, the DeployToTestEnv job would only ever run after the main job that runs all the tests on each commit had run successfully, but I was overruled and told the testers must be able to "press a button to deploy".

2. Expunging everything on every deploy

Now we are starting to get into what the job actually does. To ensure that you have deleted ALL the files and folders recursively from the target IIS, I use the IISController.bat script to stop the website and also stop the application pool. I found that if I didn't stop the application pool, some files and folders remained locked. As soon as I stopped both, everything could be removed with DeleteSiteFiles.bat.
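A rough rendering of what IISController.bat does (the site and application pool names are hypothetical, and the commands are printed here rather than executed):

```shell
#!/bin/sh
# IIS ships appcmd.exe for exactly this. Stopping the website alone is
# not enough: the application pool must be stopped too, or some files
# stay locked.
SITE="TestEnvSite"
POOL="TestEnvPool"
CMDS="appcmd stop site /site.name:$SITE
appcmd stop apppool /apppool.name:$POOL"
echo "$CMDS"
```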

3. Redeploying with WebPublish and MSBuild

Redeploying is easy with the Deploy.bat script. I pass in 3 parameters: the solution file name (path relative to the workspace root), the configuration (usually Release) and the publish profile name (previously set up and connection-tested with a user that has management rights on the target IIS). The web.config file should probably update the connection string (at a minimum). Have a read of this and this.
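As a sketch, the Deploy.bat invocation boils down to an MSBuild command along these lines. The file and profile names are made up, DeployOnBuild/PublishProfile are the standard MSBuild properties for driving Web Deploy, and the command is printed rather than run here:

```shell
#!/bin/sh
# Assemble the MSBuild/Web Deploy command line; all argument values are
# illustrative stand-ins for the 3 parameters passed to Deploy.bat.
SOLUTION="MyApp.sln"
CONFIG="Release"
PROFILE="TestEnv"
CMD="msbuild $SOLUTION /p:DeployOnBuild=true /p:Configuration=$CONFIG /p:PublishProfile=$PROFILE"
echo "$CMD"
```

Keeping the target-server details in the publish profile is what lets the same command line work for every environment.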

In the past I have stopped here and felt pretty good about myself. In our environment, the testers are sitting in an office away from the rest of the team. It's only 5 paces away, but that glass wall and door are a barrier. As a result, the testers were raising bugs against unfinished stories.

4. Auto-increment of minor version number

To combat this, and to know what version of the application the testers are testing and the dev team is developing, we instituted automatic incrementing of the minor version number in the Properties/AssemblyInfo.cs file. This was achieved in two parts. Firstly, I downloaded and installed this and added it as a project in the solution. Secondly, I run the IncrementMinorVersion.ps1 script as a PowerShell build step in Jenkins. This script will check out the file, increment the minor version number and reliably check the change back in to TFS source control. PowerShell is cool! The effect is that the minor version number increments by 1 AFTER each deployment to test, so that the development team is immediately working on the next release.
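The core of the increment, minus the TFS checkout/check-in that the real PowerShell script handles, is just this (the starting version is a made-up example):

```shell
#!/bin/sh
# Bump the minor component of an AssemblyVersion-style string.
# The real IncrementMinorVersion.ps1 also checks the file out of,
# and back in to, TFS source control.
VERSION="1.4.0.0"
MAJOR=$(echo "$VERSION" | cut -d. -f1)
MINOR=$(echo "$VERSION" | cut -d. -f2)
NEXT="$MAJOR.$((MINOR + 1)).0.0"
echo "$NEXT"
```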

Thanks for reading and hopefully you have found this helpful. As always, if you have any questions, feedback or comments leave them in the comments section below.

Friday, 25 September 2015

Welcome to the latest JenkinsHeaven post!
I'm extremely pleased to be able to write this post for you. Running Javascript tests as part of your Jenkins build with Jasmine 2.2.0 is actually, and refreshingly, pretty easy.