There's a common pattern in IT. In many cases, it ultimately leads to a new buzzword. It's not complicated. In fact it's fairly obvious. But understanding what you can do with it can help immensely when it comes to pushing change in your organisation.

Agile, DevOps, Microservices, Containerization, whatever the next buzzword is - all of these "revolutions" are really a result of these three steps.

Awareness

Acknowledgement

Adjustment

Examples:

Let's look at a couple of examples to solidify what I'm talking about. I'm simplifying for brevity, but consider Agile and Microservices:

As a "politically powerless developer" in an organisation, it can be difficult to instigate change, even when you know it's the right thing to do. In short, you probably don't have the power to implement the Adjustment part of this pattern.

But you can definitely affect Awareness. It's easy - start measuring.

Measure how long it takes for a change you commit to make it to production. What happens in that time? Is there a lot of waiting?

Measure how long it takes for a user-reported issue to reach your backlog. Then add some production monitoring, and measure how much time you save by learning about issues first-hand instead.

Measure the time it takes to write tests, and compare that to the time you take fixing bugs for code that doesn't have tests around it. Can you prove that writing tests is beneficial overall?

Once you have information, you can give it to your boss or pass it up the chain. You've already acknowledged there's a better way, so you'll be ready with solutions once management acknowledges there's a problem!

Summary

To really instigate change and adjust how your organisation delivers software, there needs to be an acknowledgement that there's a problem. That won't happen unless there's awareness.

My first real-world exposure to version control was in the good old Visual SourceSafe days. Back then (and continuing into server-side Team Foundation Version Control), it was common to have exclusive locks on files - i.e. only one person could work on a file at a time. If someone changed a file, you had to "Get Latest" before you could start working on it.

In practice, big changes required many files being locked by one person, and everyone was blocked until it was done.

This was unproductive.

So we worked on branches. We knew that the merge at the end was going to be horrible, but that was a problem for the future. Not worth worrying about now.

Then exclusive locks went away, and we started working on the same files, merging as we went. We could even work offline with distributed solutions like Mercurial and Git, and make changes in tiny branches (rather than branching the whole codebase), which meant smaller merges and less pain.

But big changes and refactoring were still a problem. You still had to merge your changes with everyone else's, and with everyone working on the same files, it became easier to work on your own branch (or a team branch) and worry about merging later. You could even push broken code to the server without bothering everyone else.

So we deferred merging.

Again, we knew the merge at the end was going to be horrible, so for big changes, our situation hadn't improved much.

Continuous Integration

Continuous Integration means merging all of your changes into everyone else's changes on a continuous basis in order to make sure problems are caught early. You can ensure that the code compiles, all the tests pass, and even run analysis tools to check for code quality or security vulnerabilities.

There are obvious advantages to continuous integration. It's common knowledge that problems are cheaper to fix the earlier they're identified. Whether that's a bug or a merge conflict, if you can resolve it immediately, it will save time in the long run. You want to know straight away whether you've "broken the build". Not a week later when another team member pulls your changes and can't compile, and certainly not a few months later when it's time to push to production.

If everyone on the team is working on the same code, and they're all pushing their changes regularly, continuous integration will identify problems quickly.

If a team is working with long-lived branches, they're not doing continuous integration. Instead, they're doing continuous isolation.

By working on a long-running branch, changes become more and more isolated from the other branches, including trunk (or master). CI builds can still exist against that branch, but they're only integrating changes within that branch, not the code that the other teams are committing. They might make the team feel good, but they're just deferring the pain.

Trunk-based development

The solution seems obvious - don't work on branches! Or if you do, use short-lived branches and merge back into the trunk as soon as you can.

I promote this way of working as much as I possibly can, but in practice, it's not always that simple. Not every change can be knocked over in a few days.

trunkbaseddevelopment.com is a fantastic website that goes through this idea in detail, including the practicalities of doing it in real teams. I'd highly recommend reading through it.

The other technique that can help is feature toggles (or any of a bunch of names that mean the same thing). The idea is that even unfinished code can be merged into the trunk, as long as it's hidden behind a flag. As long as it a) compiles, and b) doesn't break anything, you can deploy it all the way to production with the flag turned off. The code won't run, but you can still continuously integrate your code.
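As a minimal sketch of the idea (the flag and function names here are invented for illustration - real projects often use a feature-flag library or service instead of raw environment variables):

```python
import os

# Unfinished code ships all the way to production, but only runs when the flag is on.
# Here the flag comes from an environment variable; a config file, database, or
# feature-flag service would work just as well.
def is_enabled(flag_name):
    return os.environ.get(f"FEATURE_{flag_name}", "off") == "on"

def render_dashboard():
    if is_enabled("NEW_DASHBOARD"):
        return "shiny new dashboard"   # still under construction
    return "trusty old dashboard"      # what everyone sees today

print(render_dashboard())  # "trusty old dashboard" unless the flag is turned on
```

The new code is merged, compiled, and deployed continuously - it just doesn't execute until you flip the flag.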

Feature toggles deserve their own blog post, so I won't expand further here.

Summary

While long-running branches may feel more productive day to day, realise that you're just deferring the merge pain. That conflict you introduced is going to need attention at some point. The sooner you address it, the easier it will be to resolve.

When you start something new, first on your agenda should be a website! Right?

One of the first things Donovan set up for the new "League of Extraordinary Cloud DevOps Advocates" team was a new LoeCDA.com website. The "Tour Dates" section will be extremely useful if you'd like to see any of us at an event near you!

He used the extremely useful yo team Yeoman generator and had a new ASP.NET Core website, complete with a CI/CD pipeline, running on Docker in Azure in just a few minutes. This still blows my mind, but I digress - that's not the point of this post.

Our dev process

Even though this is a very simple website, we agreed to make all changes via pull requests (PRs). This helps us in two ways:

We can exercise the PR process in Team Services and Visual Studio

It helps us learn from each other

Exercising features that real dev teams will be using is really important. Half of the Cloud Dev Advocate role is bringing feedback back to the engineering teams. What better way to do that than using the features?

Learning from each other is an often-overlooked benefit of working with pull requests. It forces code reviews, which is a great way to learn about new techniques or better ways to implement features.

So every change happens on a feature branch (or topic branch): we submit a PR, another team member reviews and approves it, and merges in the changes. Of course once that happens, CI/CD ensures it gets to production with almost no more effort!

Finding out about PRs

We use Microsoft Teams for day-to-day communication, so what better way to learn about new pull requests than through a chat client that's always open?

Teams has a great feature called Connectors, which allows you to integrate with third-party applications. Thankfully, there's a connector for VSTS.

Configuring the VSTS connector

The first (optional) step is to set up a new channel in Teams. You can easily add a connector to an existing channel, but having a specific channel for PRs reduces the noise in your other channels.

To do this, just click on the ellipsis next to your team, and choose Add Channel.

Now click on the ellipsis next to the new channel, and choose Connectors.

Find the Visual Studio Team Services connector. If you already have some VSTS integration, you'll see a Configure button. If not, it will be Add.

If you're adding a VSTS connector for the first time, you'll need to walk through the steps to connect to VSTS. Once you've done so, you'll be able to choose your connection (or "Profile") for subsequent VSTS connector configurations.

The next part is easy - just go through the fields and choose the appropriate entries from the dropdowns. You'll need to choose a VSTS Account and Project, and you can choose to be alerted by events for individual VSTS teams, or for all teams.

You'll then be asked for an event type. There are a lot of VSTS events you can choose from here.

In our case, we want to hear about both "Pull request created" events, and "Pull request merge commit created" events - i.e. when there's a new PR, and when one has been merged. Unfortunately you can't choose both at the same time, so we had to go through this configuration twice. Not a big deal - it's a once-off configuration and only takes a couple of minutes.

Once done, going back to Connectors (from the channel ellipsis) allows us to see the connectors we've configured.

The end result

When a new PR is submitted by anyone in the team, there's a new message in the channel - complete with a direct link to view the PR!

I'm excited to announce that I'm joining Microsoft as a Sr. Cloud Developer Advocate in the Cloud and Enterprise group!

In a seriously ego-boosting Twitter DM, Donovan Brown asked whether I'd be interested in the role and encouraged me to apply. I went through the process, and I'll be starting on his team next week, on the 7th of June.

What about Octopus?

I joined Octopus Deploy early in 2015 as a Solution Architect. When I joined, I was one of only six in the company, but since then the company has grown to about 25 and my role started revolving more around the community. I also wrote our extension for Visual Studio Team Services.

I love Octopus, and leaving wasn't in the plan at all, but this new opportunity was far too good to pass up. I consider myself extremely lucky to have worked with such a dedicated and smart group of people for over 2 years. I have no doubt that Octopus will continue pushing out awesome software.

While Octopus occupies a commanding position in the world of deployment automation, Microsoft (through VSTS and Azure) has a set of tools and services for the whole DevOps lifecycle. I'm looking forward to talking to a broader audience about everything from requirements gathering to monitoring in production.

However... you can still expect me to recommend Octopus as part of that process. :)

What will I be doing?

It's been made clear to me that my goal at Microsoft is to help people succeed. Not to help them succeed with a Microsoft-only stack. Not even to help them succeed with a mainly Microsoft stack. If Microsoft tech can be used anywhere in the pipeline, I'll try to help out.

This is really important to me, and is another example of Microsoft putting their money where their mouth is. Gone are the days of Microsoft lock-in - today it's all about competing on merit, enabling any platform, and embracing open source. The Microsoft of ten years ago didn't excite me, but that's changed dramatically.

As for my day to day, you can expect to see me at more conferences and events. I'll be creating useful content to help people succeed, and hopefully visiting a few customers directly. It'll take me a little while to settle in, but feel free to reach out on Twitter if there's something I can help with.

It's easy to decide you want to get away from long, painful, weekend deployments, but it takes much more than just a decision. For legacy code (whatever your definition of legacy might be), there are likely some things that prevent you from deploying fast and frequently.

In my previous Brownfields posts I talked about DevOps as a Culture and changing the dev process. In this post, I'll talk about some practical changes you can make to your codebase and how you work with it. These changes will make it easier to deploy regularly and without fuss.

One codebase

An application should only have a single codebase, and only one branch that you deploy to production from. There are two common practices that get in the way of this:

Separate copies of code for different scenarios

Long-lived branches

Code for different scenarios

This is a surprisingly common practice. I've worked with organisations in the past that had different, almost-the-same codebases for different flavours of the application. They could be separated based on country, set of features, or individual customers.

The common thread is that at some point there was a divergence based on two seemingly irreconcilable differences. Rather than a larger effort to refactor, two versions emerged. The result is that it's incredibly difficult to apply changes and bug fixes to all versions consistently and safely.

Long-lived branches

It's also common to separate versions of code into branches. This can be a way of implementing the previous practice (code for different scenarios) that gives the feeling that it's the same codebase. It can make changes a little easier to propagate across versions, provided the versions are similar.

It's also common to have separate long-lived branches for different environments. The team writes code on a dev branch which then gets merged into staging for a test deployment, then to production for production deployment.

In my opinion, this is a mistake. Each time you need to deploy, you're merging and ending up with a brand new codebase. That means when you deploy, it's the first time you've ever deployed that code.

If a successful deployment relies on a clean merge, that should scare you!

So how do you get to a single codebase? Well, it's likely to require (sometimes significant) refactoring.

Identify and isolate variations

It's important to find these differences and isolate them. Rather than if statements scattered throughout your code, properly independent classes or assemblies make it much easier and safer to make changes to your software without unexpected side-effects.

Dependency injection is great for this, but of course you don't want to have to change the code that resolves your dependencies each time you compile for a different scenario. You need to be able to change your configuration without recompiling.
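As a rough sketch of what that looks like (the class names and config key are invented for illustration), the variation is selected once at startup from configuration, and the rest of the code stays oblivious to which implementation it got:

```python
# Each variation lives in its own class behind a shared interface,
# instead of if-statements scattered through the codebase.
class StandardPricing:
    def price(self, base):
        return base

class DiscountPricing:
    def price(self, base):
        return base * 0.9

# The composition root reads configuration and wires up the right
# implementation once - no recompile needed to switch scenarios.
PRICING_STRATEGIES = {"standard": StandardPricing, "discount": DiscountPricing}

def build_pricing(config):
    return PRICING_STRATEGIES[config["pricing"]]()

pricing = build_pricing({"pricing": "discount"})
print(pricing.price(100))  # 90.0
```

The same compiled code handles every scenario; only the configuration value changes per deployment.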

Externalise configuration

To avoid having to recompile for each different deployment scenario, it's important that your configuration is outside the compiled application.

That means you can deploy the same binaries and compiled code, but change the configuration based on the environment you're in, or the customer you're deploying to.

It's really up to you where you keep your configuration. .NET developers tend to keep config settings in .config files, but environment variables or even external databases are reasonable alternatives. As long as you don't have to recompile to deploy to a new location!
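For example, here's a minimal sketch in Python of reading settings from environment variables with sensible local defaults (the variable names are just examples - use whatever convention your team prefers):

```python
import os

# The same compiled artifact runs everywhere; only the environment differs.
def load_config():
    return {
        "connection_string": os.environ.get("APP_DB_CONNECTION", "Server=localhost;Database=dev"),
        "api_base_url": os.environ.get("APP_API_URL", "http://localhost:5000"),
    }

config = load_config()
print(config["api_base_url"])  # falls back to the local default if the variable is unset
```

Your deployment tooling sets the variables per environment (or per customer), and the binaries never change.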

Get your versioning right

Finally, it's extremely important that you can easily distinguish between different compiled versions of your software. The usual way to do this is with versioning.

Because there's only a single codebase, and we only deploy from the master branch, we only need one version number. At least for production candidates.

Following Semantic Versioning is an awesome way to manage your versions, but if you can't do that, you just need to follow two rules:

Each build has a new version number

New code has a higher version number

These two rules mean that given any two builds, you can easily determine whether they're the same, and if not, which is newer. Remember, we only have one branch, so we're comparing like with like!
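Those two rules are enough to compare any two builds. A minimal sketch, assuming simple dotted numeric versions:

```python
def parse_version(version):
    # "1.4.2" -> (1, 4, 2); tuples compare element by element,
    # so (1, 10, 0) correctly sorts after (1, 9, 3).
    return tuple(int(part) for part in version.split("."))

def newer(a, b):
    """Return the newer of two version strings (either one, if equal)."""
    return a if parse_version(a) >= parse_version(b) else b

print(newer("1.10.0", "1.9.3"))  # 1.10.0 - numeric comparison, not alphabetical
```

Note that comparing the strings directly would get this wrong ("1.10.0" sorts before "1.9.3" alphabetically), which is exactly why the parsing step matters.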

Summary

There are changes you can make to your code and the way you work with it that will make deployments easier.

Get to one codebase

Identify and isolate variations

Externalise your configuration

Get your versioning right

Once deployments are easier, you can feel confident doing them more often, which will ultimately lead to a better DevOps story!

There are a lot of options for connecting with people online these days, and I use many of them. To differing degrees, the main ones I use are Twitter, LinkedIn, Facebook, Email, Skype, and Slack.

So which do I use for what purpose, and why did I only supply links for two of them?!

Over the past few years, I've started being more careful about the contacts I have through these channels. If you've asked me to "add you" on a particular channel, I might have ignored you or refused. This post explains why.

Note: There's a chance I'll come across like an arsehole in this post. My intent is to explain, not offend. But if you think I'm doing it wrong... uh, I'm sorry you feel that way.

Twitter

I use Twitter a lot. My profile and tweets are public, and I tweet a mix of personal and work-related stuff. I'm really happy for anyone to follow me. Seriously, go do it.

I respond to most @ tweets, but fair warning - if an account is abusive or a bit too spammy, I won't hesitate to block and report.

Twitter is almost certainly the best way to connect with me.

The accounts I follow are extremely varied as well. There are a lot of tech people, but also family, sports people, companies, parody accounts - basically anything I want to read. If I don't follow you and you think I should, let me know! I might! But I also might not. Please don't be offended.

LinkedIn

I rarely use LinkedIn. I'll keep my profile more or less up to date, but blog posts happen on my blog, and I'll post interesting links to Twitter.

I do have a lot of LinkedIn contacts, but I'm pretty strict on who I "connect with".

If I've annoyed you by "not connecting", there's a good chance it was on LinkedIn.

As a rule, I don't connect with recruiters. No, I'm not interested in the job you think I'm perfect for because my profile matched the keywords in your search.

I also don't connect with people I haven't met. It's great that you have a similar job as me, or you like the same things, but I treat LinkedIn as a place to keep track of people I actually know.

Facebook

I use Facebook frequently, but only for friends and family. Luckily Facebook friend requests are few and far between. I'll let you read into that what you will.

The overlap between business connections and Facebook friends is small, and I intend to keep it that way.

Occasionally, work-related stuff will creep onto Facebook, but I'm conscious that most of my Facebook friends, well, couldn't care less. So it doesn't happen a lot.

Email

I welcome emails! So why didn't I provide a link to my email above?

It's not hard to find an email address for me, and I'd prefer if you did a little bit of work. It indicates that you actually wanted to contact me (and not that you spammed an address you found somewhere). It's a coarse-grained filter. Of course, if I meet you in person and you ask, there's a good chance I'll give it to you.

If I haven't given you my email address, it's not hard to find if you actually want to contact me.

I've set up a lot of email addresses. Ultimately they filter into the same couple of accounts, but into different folders. Some I care about deeply, some not so much. Some only exist so I can see who sold my details.

For that reason, I'll check the publicly-available ones a lot less than the others.

Of course, just because I welcome emails, doesn't mean I'm great at returning them. By its very nature, email is asynchronous, so I might not reply to you today. Or tomorrow. Or this month.

Skype

Skype is great for voice and video conversations. That means if we're connected on Skype, it's almost certainly because we needed to see and/or hear each other at some point in the past.

My Skype contact list consists almost exclusively of people I previously needed to have a conversation with.

Unless we need to speak remotely, I won't add anyone on Skype. For me, it has one purpose, and making new friends isn't it.

Slack

At time of writing, I'm up to eleven Slack Teams. Yes, it feels like a lot, and yes, they're sometimes a pain to manage. They're all tech-related, and I'm very unlikely to join a Slack Team unless it has relevance to me right now.

Each Slack Team has a purpose, but if we're on the same Slack Team, let's chat!

I find them extremely useful. For example, the Octopus Slack Team is how I communicate with the rest of the company about 99% of the time.

Interestingly, I'm not in any public teams. All are invitation-only, and I only manage one of them myself.

Summary

So that's how I treat my social media accounts, and why I might have ignored your "friend request" or "network connection". Don't take it personally!

In my last post, I claimed that while DevOps was all about culture, that's not very helpful when it comes to getting started. You can't force a culture change.

However, there are things you can do - even as a relatively powerless developer - that can set you on the right path.

What are we trying to do?

Ultimately "doing DevOps" comes down to one key goal:

Improve your cycle time

A faster cycle time means changes spend less time in the pipeline. This has a few advantages. First, if a bug is introduced, it's very likely that the team that worked on that code did so recently. It's fresh in their minds and therefore easier to fix. Second, it provides far more agility. Code that's waiting to be deployed for months may not be relevant by the time it gets to production. And if there's a new opportunity, code can be written and deployed with very little delay.

A faster cycle time also means smaller changes being released to production more frequently. Smaller changes means less risk, and if something does go wrong, it's far easier to pinpoint and fix the problem. In many cases, it's faster and safer to fix the issue and deploy than it is to roll back - which is a good thing.

The faster these steps can occur, the shorter your cycle time will be.

The team should focus on items 1, 2, and 7. Deciding what to create and creating it is what people are good at.

Items 3-6 should happen automatically. Repetitive, precise tasks are not what people are good at. People are slow, and they make mistakes. You'll find you simply can't be fast without effective tooling that does the work for you.

What are the practical steps?

Let's look at items 3-6 one by one, starting with item 3 - building and running tests.

Set up a Build Server

You absolutely, 100%, without a doubt need a build server.

Configuring a build server is the easiest and most beneficial thing you can do to improve your process.

I personally love the new Team Build available (for free) in Visual Studio Team Services - it'll build anything you can throw at it in any language, and you don't have to use VSTS for anything else if you don't want to!

Once you have a build set up, configure it to build on every commit - a Continuous Integration build. A build server that compiles your code and runs tests on every code change provides two main advantages:

You'll know immediately if a code change has broken your software

You'll always have a deployable build ready to go

Deployment

Steps 4 and 5. Deployment.

If you want to improve your cycle time, you'll need tooling for this. Going through a printed document checking off items as you manually deploy to multiple machines just isn't going to cut it!

Again, VSTS has an option in Release Management, but I'm confident in calling this one for Octopus Deploy. I'm naturally biased because I work there (Disclaimer!), but the reason I work there is because I love the product. Find an Octopus user, and ask them how they feel about it.

Conveniently, it works really well with the VSTS build system mentioned above. Just add and configure the Octopus steps available in the marketplace extension.

Deploy the same bits to pre-prod and production, and deploy them the same way.

It's really, really important that each time you deploy a release candidate to an environment, you deploy the same artifact built by your build server above (don't rebuild), and deploy it in exactly the same way.

Doing so gives you two advantages:

You're testing the same bits that will go into production

You're testing the method by which those bits get to production

Importantly, this means externalising all your configuration. You shouldn't have environment-specific settings in compiled code - rather, keep it in config files or databases or even external services. I'll elaborate on this in a future blog post.

Monitor Production

If you're deploying to the web, there's no longer a reason to be surprised by a customer telling you about an issue in production.

If you're searching for clues in the logs, by definition you're a step behind.

As developers, we've been writing to error logs for decades, and clever tools like Seq can make them extremely easy to analyse. However, there are countless tools that let you get ahead of the problem.

If you're a .NET developer writing web apps, Application Insights literally takes minutes to add to your application, and for relatively small amounts of data, it's free. With very little setup, it'll alert you to performance problems, exceptions, and all sorts of other things you want to know about before your customers do.

Summary

Setting up these three things - a build server for CI, an automated deployment pipeline for CD, and monitoring for Production - will cover most of what it means to "do DevOps", at least from an automation perspective.

These items are a great start, but constant introspection and improvement are just as important. It's not enough just to have these in place, just like it's not enough to have a standup and sprints if you're "doing Agile". You need to continue to refine and adjust so the process works as well as it can.

If you get your process tooling right, your team can focus on what they're best at - writing code.

It's been said that DevOps isn't a set of rules or processes or a tool you can buy. DevOps is a culture.

I agree.

But while I think that's true, it's not very useful as a way of guiding an organisation.

Changing Culture

If you've worked somewhere where a culture change has been pushed from upper management, you'll know how poorly that strategy works. I remember being at a team meeting where an upper manager informed us that "we [had] an agile culture now". Which was great, but to nobody's surprise that direction wasn't quite enough. It turned out they still wanted a full project timeline up-front and an optimistic Gantt chart.

Of course, it's important to get management buy-in for anything that will change the day-to-day operation of a team. But buy-in for a buzzword means very little if it doesn't come with support for actually doing things differently.

"We're a DevOps Company Now"

Last week, I spoke about Brownfields DevOps at NDC London. It was a talk on how to work with existing legacy or "brownfields" software to make it easier and faster to safely make changes, deploy those changes, and measure them in production.

I asked the audience, partly tongue-in-cheek, whether they'd ever been told to change their culture. For example, had anyone been told they were to "do DevOps now"? There was some awkward laughter that spoke volumes.

The main problem comes when the buzzword is identified as something the business must have, while there's little support for what that buzzword actually means.

Bottom-up

I'm a firm believer that a DevOps culture can only come from the teams on the ground. The people working in dev, operations, testing, and support.

Management support for changing work practices is important, but those changes have to take place because the team wants them to take place and makes them happen, sometimes despite what the business asks for. As the saying goes, it's easier to ask forgiveness than permission.

What can you do?

As someone in dev, operations, or any other technical team, you have more control than you think.

Does the architecture of your application support regular change without constant regression testing?

Can you consistently and safely deploy to production or is it a nightmare every time you release?

At deploy time, how dependent are you on the state of your production environment?

Do you have metrics on whether the features you just released are being used?

All of these questions can be answered, and problems solved, by technical teams. And aside from the time needed to make changes (ask forgiveness, not permission), they don't require a management directive.

Back to the Culture

Once you start thinking of ways to improve the agility of your software and process, the culture change will happen.

A culture change happens as a side-effect of becoming more efficient and seeing success. It feeds on itself - every change you make that improves productivity and reduces frustration encourages you to make the next change, then the next. Before you know it, improving your process becomes a primary consideration whenever you do any kind of work - and that's a DevOps culture.

Next post

In my next post I'll expand on some specific things you can do as developers to move in the right direction. Whether you're doing a File | New Project, or working on "brownfields" software, there are practical steps you can take to start "doing DevOps".

While at NDC I had the chance to record a Pluralsight Play by Play with Troy Hunt. In a bit of a twist for regular viewers of his courses, I'm doing most of the teaching!

In the course, we talk about traditional manual deployment strategies and the state of deployments in many companies. We discuss the common problem areas when it comes to deployments, then I walk through Octopus Deploy and show how it helps solve these issues. Of course because it's Troy, we talk about security and Azure deployments as well. We cover a lot of ground in an hour, so whether you've used Octopus before or are new to it (like Troy), it's worth a watch.

Play by Plays differ from other Pluralsight courses in that they're unplanned and unscripted. The format leads to a much more natural discussion between two experts. Troy and I get along well, so what you see is almost everything we recorded!

More to come

This is my first foray into the Pluralsight world (more coming), and the format of the Play by Play made it a lot of fun. Of course having a Pluralsight veteran like Troy Hunt made it much easier - I had a guide on site telling me how the session should run, and what things would need to change.

My plan is to publish another full-length Octopus course soon - focusing on integration with your existing toolset. Of course there's the fantastic course by Kenneth Truyers on Deploying .NET Applications with Octopus Deploy, but there's definitely room for more Octopus courses, and plenty of room for courses around the broader DevOps area.

Ok, but why?

Several years ago I worked at an organisation where the software developers didn't get along with the rest of the company. Shortly after I joined, it got to the point where the software department was about to be shut down and outsourced.

The problems were typical of organisations where a dev department is just one moving part: The "business" complained that deadlines were consistently missed, requirements were misunderstood, and developers were abrasive and resistant to accepting new work or answering questions.

From the developer side, the "business" didn't understand software complexities, salespeople would commit to deadlines before speaking to the devs, and new work and meetings were piled on without a lot of consideration.

It was a frustrating place to work for everyone.

Turning it around

When I left two years later, it was a very different story.

The developers were hitting deadlines, they enjoyed their work, and were no longer scared of unexpected visits from managers or salespeople. The "business" had their questions answered when they needed, could trust estimates, and regarded the dev team as an important and efficient part of the business. There was no longer any talk of outsourcing.

Now, I only had a minor role in this turnaround. Minor. But I learned a lot.

The reason for the turnaround didn't have anything to do with the skill level of the developers or the "business". It had to do with resetting expectations and changing the way we worked.

I'll write more about some of the techniques we used in future blogs and presentations.

And?

At NDC Oslo, I attended an awesome speaker workshop with Jesse Shternshus and Denise Jacobs. Amongst some really fun and useful exercises, I landed on the beginnings of a new talk. One that drew from my experiences with the aforementioned organisation, as well as other companies I'd worked with over the past few years.

It occurred to me that these big turnarounds don't happen every day, but I was in a lucky position where I'd experienced one from beginning to end. So I thought about it some more, and homed in on what was bad and how it became good.

The main culprits? Meetings, interruptions, and changing priorities.

Ok, lovely story, but why the survey?

I want to know if my experiences are common to everyone.

More than that, I want to know how they differ between different jobs, and whether there are correlations between types of work, workplace happiness, meetings, interruptions, and everything in between.

The results will make their way into blog posts, presentations, and hopefully workplaces.

I'm sure this data will be useful to more people than just me, so when the survey is finished, I'll make all the data freely available under an attribution license.

In software development we often talk about the Bus factor - the number of developers who would have to be hit by a bus before a project is in serious trouble. The lower the bus factor, the more damage the loss of a single developer would cause. Not damage to the person - presumably there'd be a lot - but damage to the project they're working on.

The problem is the knowledge that's concentrated in one person's brain that would be irretrievable if they were to suddenly disappear. Unless that knowledge is no longer required, there's a big risk here. When do we usually consider it "no longer required"? When it's been turned into working code.
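To make the idea concrete, here's a toy sketch of how you might spot bus-factor-of-one hotspots. The commit data here is entirely made up for illustration (in practice you'd extract author/file pairs from something like `git log --name-only`):

```python
from collections import defaultdict

# Hypothetical commit history: (author, file) pairs.
commits = [
    ("alice", "billing/invoice.cs"),
    ("alice", "billing/tax.cs"),
    ("bob",   "billing/invoice.cs"),
    ("carol", "reporting/export.cs"),
]

def single_author_files(commits):
    """Return the files every change to which came from one person -
    the places where the bus factor is exactly one."""
    authors = defaultdict(set)
    for author, path in commits:
        authors[path].add(author)
    return sorted(path for path, who in authors.items() if len(who) == 1)

print(single_author_files(commits))
# ['billing/tax.cs', 'reporting/export.cs']
```

Files only one person has ever touched are exactly where concentrated knowledge lives, so a list like this is a cheap starting point for deciding where to pair or review.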

Consider a developer who is halfway through a big feature. Because they've been deep in that feature for a while, they're likely to be across all the complexities, moving parts, and the considerations that need to be taken into account. But once the feature has been written, tested, deployed and proven, the knowledge loses value rapidly.

As an industry, we try to mitigate the risk of concentrated knowledge by sharing it between team members in various ways. Classically this involves documentation. There's usually some formal process to force developers to update documentation when they make a change (because documentation is super boring). However, studies have shown that software documentation is rarely kept up to date anyway, so the problem persists.

More "agile" teams try to share knowledge by making sure more than one developer is involved at all times. Pair programming and code reviews are examples of techniques used here. In addition to sharing knowledge, studies have shown that code quality can be better in some circumstances - particularly with complex tasks and junior or intermediate developers.

Documentation tries to capture this knowledge so it's accessible long down the track, but this has questionable value: a) it's probably out of date, and b) when was the last time you referred to old documentation for a project?

Still, it's a start. We're trying to mitigate the risk of concentrated knowledge.

What about the code?

So great, we have techniques for raising the bus factor by sharing knowledge. Once our feature is complete and all that knowledge has been embedded into code, we're out of the woods, right?

Well, no.

How much use is our code if it isn't being used?

I'd argue (at least from the customer or end-user's point of view) that code which hasn't made it to production is effectively 0% done. In fact, I have argued that. Several times.

And of course, code we haven't deployed carries its own risk. In effect, un-deployed code has a bus factor of its own - and it's usually a bus factor of one. This is true for a couple of reasons:

First (and least likely), something could happen to all that code before you manage to deploy it. Servers go down, and hard drives fail. As developers, we try to push code to the server(s) frequently and avoid leaving changes on our machines.

Second, remember I said knowledge rapidly loses value once the code is complete? Well, it doesn't lose value immediately. You may think a feature is done, but until it's being used in anger in the live environment, you can't be sure!

I've had many experiences where a bug has arisen months after I finished the work. Even though I was the original author, the knowledge of how everything fitted together had faded. Rapidly. I had to relearn how it all worked - which is the same problem the team would have had if I'd been hit by a bus.

If code makes it into production very quickly after it's been developed, it's still in the developer's mind. Any bugs that arise are likely to be fixed much more quickly.

So what's the solution?

It's pretty clear to me:

Get features finished, and put them in production.

Anything else just prolongs work in progress and keeps the bus factor low.

This year, I've attended a lot of conferences. Most of the time I've been at a booth for Octopus Deploy, but often I've been there to speak. Before this year, I'd spoken at events all over Australia and in New Zealand, but 2016 was really my first foray into the international speaking circuit.

The highlights of my year so far have undoubtedly been the NDC conferences - huge, incredibly well-organised events - and I'd often looked admiringly at the speakers who get a guernsey. It's quite humbling to be one of them, even though I've got a long way to go before I'd consider myself an equal!

What works for me

I'm far from considering myself a professional speaker, but with the ramp-up of speaking gigs this year I've learned a few things about the way I work when preparing and delivering technical talks. Everyone's different, so these lessons may not apply to you, but they're things I've noticed about myself and the way I work.

Let ideas gather

I used to find it difficult to come up with ideas for a talk. More and more, I'm finding that if I pay attention to things I read and conversations I have, the talk comes to me instead.

#1: Write everything down in OneNote and let related ideas coalesce.

Coalesce seems to be the most accurate word for this (I spent some time on thesaurus.com). In short, every time I read something interesting, have a technical conversation, or solve a difficult problem, I think about whether it could one day turn into part of a talk, and if so I write it down. I'll revisit these notes frequently and think about them and expand on them. Before long, related topics start to come together and merge, and before you know it, there's an hour of material. A talk emerges.

Practice, practice, practice

You've no doubt heard this from every speaker, speaker coach, or person-on-the-street-with-an-opinion, but it's true. The only way to get good at something is to do it again and again.

I tend to do my practicing privately. Partly because I don't like delivering an unfinished talk to other people, and partly because it takes me some time to land on the best way to say something. Nobody wants to hear me ramble awkwardly. Importantly though, I practice out loud as if I'm talking to a room of people.

#2: Practice many times - out loud, as if you're doing it for real

While practicing a talk, I'll go through three main phases:

Phase 1: Work from the outline and just start talking. See what works and what doesn't, and remember the phrases that convey your point succinctly.

Phase 2: Run through the talk start to finish, but stop and backtrack if something isn't right. Fix the slides immediately when you notice there's something wrong, write down how long each section should take, and get your timing right.

Phase 3: Run through the talk start to finish, and don't stop. Refer to your timer and make sure you're on track.

I'll run through each of these several times before moving onto the next, and each time I deliver the talk, I'll go back to phase 2.

Getting comfortable

Despite all this preparation, it often takes me a while to get totally comfortable with a talk. That used to bother me, but I now see it as part of the process. It's only when you're on stage that you learn that what worked well in the hotel room doesn't necessarily work in a room full of people. This applies especially to jokes.

#3: I'm not completely comfortable with a talk until I've delivered it at least 3 times.

It's often good to deliver the talk for the first time at a smaller event. Your local User Group is a really good place to start, as is a brown-bag lunch presentation at your workplace.

Get it to the front of your mind

At NDC Oslo this year, I delivered two talks, then I did the same talks two weeks later at Kansas City Dev Conference. The first of my talks at KCDC was one I'd presented half a dozen times, so I didn't bother running through it again just before the talk - I figured by now I knew it back to front! But when I got up there, it sucked (in my opinion).

It wasn't that I didn't know the content - I did. It just wasn't at the forefront of my mind. It had only been two weeks since the last time I'd done it, but I'd forgotten the natural segues, the flow, and the nice phrases to use. The whole thing felt messy and clunky. I resolved not to let that happen again.

#4: Always do a start to finish run-through just before you go on stage

When I say "just before", I mean that day. It takes no time at all to forget the small things, and it's the small things that matter.

Be ready and focus on the start

I do a lot of prep beforehand, but even so, the first few seconds of speaking have the potential to throw me.

#5: Have everything you need ready to go, and focus on your first few seconds

The photo above was taken just before I delivered my second talk at NDC Oslo. The room is actually quite wide, so you're probably seeing about 1/4 of the audience here. What you'll also see is the following:

A bottle of water

A timer (on an iPad with auto-lock turned off)

A notepad with timings

What you don't see is me practicing the first few seconds of my introduction over and over in my head. When I start speaking, I need to know exactly what to say, and how to move into the first slide of real content. I know from personal experience that if I get stuck in those first few seconds, it'll throw me for the rest of the talk. Of course in this case, the first word out of my mouth was "Um", so it doesn't always go perfectly. :)

Make it easy to interact

The other thing you don't see in the previous photo is scheduled tweets. This was an idea I'd heard about from a couple of people, notably Troy Hunt and Denise Jacobs.

#6: Schedule tweets that are timed to your slides and encourage people to retweet

Because I've practiced and I know my timings, I know when I'm going to hit something that's (hopefully) tweetable. Rather than let others do it for me, I schedule tweets from my account with Tweetdeck so people can retweet them.

Planned but not scripted

This one is a personal preference that suits my style. I always know what I'm going to say with a fair amount of accuracy, but I'm never running from a script.

#7: Have a plan and even stock phrases, but never a script

This is another great advantage of practicing over and over again. During those practices, I'll have landed on certain phrases and sentences that sound good, and I'll make sure I repeat those each time. The same goes for segues between tricky slides - I'll know how to do those transitions. However for the most part, the actual words will be different every time. I have a plan, but not a script.

So what's next?

I'm really excited to be doing my third NDC event for the year in August - the first ever NDC Sydney! If you haven't been to one, do yourself a favour. This is one conference where they do everything right whether you're a speaker, an attendee, or a sponsor. There are still tickets available, so don't miss out.

On the Friday night of NDC, I'll be doing PubConf which should be a lot of fun, and a week later I'll be back at the always excellent DDD Melbourne.

Hopefully I'll see you at one of my talks soon!

Welcome to my new blog engine!

I've recently been at a few conferences around the US and Europe (notably NDC London and Oslo) and I spent some time speaking to a lot of the other speakers about presenting, blogging, drinking, and other things. One thing a few of them had in common was they recently simplified their blog. I'm taking a leaf out of their book and doing the same.

My previous blog engine was WordPress running on a shared Linux VM. It was fine, but it required constant upgrades of both the engine and the various plugins, and it was starting to feel too bloated. The UI looked a bit dated, and as is evidenced by the lack of posts, I found myself largely ignoring it. I just wanted to write content and not worry so much about the rest.

The new engine

I was put onto Ghost by Troy and Kylie Hunt, though I'd also heard of it once or twice from other speakers. It's simple. Very simple. And I think that will work for me.

The best thing about Ghost is that I can write my posts in Markdown. Over the past couple of years, I've found myself using Markdown more and more. In GitHub, VSTS extensions, helpdesk responses, and more. It's fairly natural now and it feels great to my programmer mind.

Tweaking the UI is also very simple. I can work directly with real HTML (well, Handlebars), CSS, and JavaScript files. If I want to make changes, I'm not trying to work around a framework; I can just make the changes.

Migrating my posts from WordPress was a little harder than I thought it would be. I found documentation on the Ghost site that led me to believe it would be trivial - install a WordPress plugin, click a button, and that's it. Unfortunately, it didn't work. I was given an empty file every time.

After some searching, I found a wp2ghost GitHub repo from Jon Gjengset that provided the answer (isn't open source great!?). Export the WordPress content as XML, run the tool, and you end up with Ghost-compatible JSON. After a bit of find-replace on image references in the resulting JSON file, it worked beautifully.
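That image fix-up was really just a string substitution over the export. Here's a minimal sketch of the idea - the JSON shape is heavily simplified and the domains and paths are invented for illustration, not the actual wp2ghost output format:

```python
import json

# A tiny stand-in for the exported JSON; the real file is much larger.
export = json.loads("""
{"posts": [{"title": "Hello",
            "markdown": "![pic](http://oldblog.example.com/wp-content/uploads/2015/01/pic.png)"}]}
""")

# Point old WordPress upload URLs at the new blog's image folder.
OLD = "http://oldblog.example.com/wp-content/uploads"
NEW = "/content/images"

for post in export["posts"]:
    post["markdown"] = post["markdown"].replace(OLD, NEW)

print(export["posts"][0]["markdown"])
# ![pic](/content/images/2015/01/pic.png)
```

A blunt find-replace like this works because the old upload URLs all share one predictable prefix; anything fancier (renamed files, mixed domains) would need a real parse of the Markdown.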

The final step was to move the blog images. I originally thought this would be hard. How do I upload images in bulk to an Azure Web App? Then I found Kudu. Wow is that tool awesome! I pulled the images from my WordPress site over FTP, then dragged and dropped them into the browser! Yeah! I know!

There's one last piece that's missing - comments. I actively made the decision not to move those. I felt that for the most part, the comments on my blog didn't add much value. For one thing they were old, and if someone raised a point worth mentioning, I'd usually update the post. That said, I still have them all, so if you found them useful, let me know and I might reconsider.

So what's next?

This blog has had a UI and engine refresh, and it's about time for a content refresh as well.

The change in subheading probably gives you a hint. I'm going to focus on devops and developer process much more. I've been focusing my presentations on these topics lately, and I think it's time the blog followed suit.

In the future, you'll see some targeted and hopefully short posts on various aspects of devops, deployment automation, and developer process. They'll tie closely into the types of things I speak about. I hope you enjoy the change!

Unlike the previous upgrades, Octopus 3.0 is a Tentacle First upgrade. That means you'll need to upgrade all your Tentacles before you upgrade your Server.

But why?

As you may have read in some previous blog posts, we've changed the way the Server and the Tentacles communicate. The end result is faster deployments across a broader range of Deployment Targets.

But it also means that an Octopus 3.0 Server can't communicate with 2.6 Tentacles and vice versa.

Unfortunately, that means we can't do an automatic update like we have in the past. If we did, we couldn't verify the result, because a successful update means the Tentacle is now speaking another language.

So... I have to run an MSI on all my Tentacle machines before upgrading?

Yes!

We recommend doing it before you upgrade so if there are any problems, you can reinstall the 2.6 Tentacle and you still have a Server you can use.

But I have hundreds of Tentacles!

Never fear! We built a tool to help. It's called Hydra and we built it just in case the thought of RDP-ing into all of your servers to click Next, Next, Next through an MSI makes you nauseous.

We strongly recommend you use new hardware (or virtual hardware) and install a fresh 3.0 instance. That way you'll have a 2.6 Server alongside a 3.0 Server so at least one of them should be able to communicate with your Tentacles. You should see the Health Checks start to fail from your 2.6 Server, and see them light up on your 3.0 Server.

As always, support is here to help, but make sure you read through the documentation before starting to avoid any issues!

After 4 great years with SSW, I recently moved on and joined Octopus Deploy.

I've loved the product since I started using it about a year ago, and I quickly became its primary champion at SSW. I helped put it into a few clients' pipelines, and I even have a couple of videos about it on SSW TV.

If you haven't had the pleasure of working with Octopus Deploy, you're missing out - I joined not because of a bigger paycheck (spoiler: it's not bigger), but because I think it's a tool that delivers real value to people delivering software. I wanted to be a part of it and contribute to its success.

Octopus Deploy is a deployment automation tool for .NET developers. It helps you deploy everything your application needs to machines or cloud services in multiple environments in a safe, repeatable way. For the less technically inclined, think of it as a tool that helps programmers get new versions of their software to you.

One thing I've learned over the past few months is that Octopus Deploy can be used poorly, and it can be used really well. When a team uses it poorly, it's not always their fault (well, sometimes it is). The documentation around the product is pretty solid, but as is often the case with documentation, it tends to focus on how a feature works rather than when, why, and how you should use it.

A large part of my role at Octopus Deploy will be helping teams use the product really well. Not just in terms of picking the right features and knowing what's available, but also improving the deployability of their software. It's something we focused on a fair bit at SSW. As a developer, everything that happens after you've used your design, architecture, and coding skills to create a product can be automated. If you're on the infrastructure side, your skills are in the physical and network architecture and knowing what resources are needed and when. For both, the busywork of copying assets and changing configuration files every time there's an update is really a waste of your precious time!

I encourage you to give Octopus Deploy a try. At the very least, it can help you pinpoint areas of your software that are difficult to deliver in a repeatable way. Feel free to contact me if you have questions or want to chat about Octopus or devops in general!