Methodology – 345 Systems
https://www.345.systems
we help you build amazing software

Leaky Pipe Syndrome
https://www.345.systems/methodology/leaky-pipe-syndrome/
Wed, 03 May 2017 08:47:52 +0000

I recently had a problem with a leaky pipe. No, I didn’t need to go and see the doctor: this was the waste water pipe in my kitchen.

I’m not sure how plumbing works in all other countries, but here in the UK you basically get 3 options, in ascending order of cost:

Plastic pipes that are welded together using a solvent. This is the cheapest option to put in, but cannot be separated once done and is difficult to repair.

Plastic pipes with push-fit connectors. These connectors have an O-ring to seal them, they are fairly cheap and maintainable, but all of the pipes need to be exactly the correct size. OK if you’ve used them from the start, but not interchangeable with the welded type.

Plastic pipes and compression-fit connectors. These connectors have a conical washer that compresses as the joint is tightened, so they can accommodate both widths of pipe above. Most expensive to manufacture.

Guess which of these was fitted in my kitchen? Of course, the first one. Now that’s not a problem if you never have to take your system apart, or if no defect is found in the original fitting. Unfortunately, the weld in the pipe had come apart somehow after 9 years of service. Maybe it was never done right in the first place. Either way, I could not repair the joint without replacing the entire waste pipe from my sink through to the outside.

If these had been push fittings and a seal had gone, I could have replaced the faulty fitting in 2 minutes and all would have been well.

However, I ended up replacing half of the waste with a combination of (2) and (3) above: push-fit pipes from the sink to ground level and a compression fitting to graft this onto the slightly larger bore pipe to the outside.

What’s this doing on a software blog, you may be asking.

Pretty obvious really. We have all been accused at some time or other of “over-engineering” solutions precisely because we have designed in the fittings (interfaces) at the points where a system should come apart, so that maintenance can be carried out at a later date.

Software maintenance is a poorly understood concept, and it’s really hard to adequately capture maintainability requirements in any meaningful way. Often you have to rely on experience: a combination of business domain experience to anticipate requirements that may change or features that may be added later, and the technical experience to know which patterns and interfaces will be required. As with all things maintenance-related, making savings on a small upfront cost can result in many times that cost later.

Unfortunately, humans are very short-term in their thinking. A project manager tasked with delivering a system but not maintaining it may very well favour the small savings upfront, as they won’t be left to deal with the maintenance later. And anyway, they’ve delivered a system that meets the requirements, right?

It’s different when you’re the one who’s got to go back in and fix the thing later, when it’s leaking gunge all over the place and the management are panicking, because now you’ve got an operational incident. In fact, I’d go so far as to say that you can help to train better developers by making them do the following:

Sit with users that are using their software, and gain direct feedback

Fix bugs, both their own and from others

Be on hand when their features go live to deal with any issues

Work on production incidents when they happen

Upgrade or add features to other people’s software

Working through this list should give developers a sense of humility, and maybe just enough experience to understand that next time they have a blank sheet of paper they should anticipate how their software can be taken apart for servicing.

The Agile Manifesto and the Myth of Methodology
https://www.345.systems/methodology/agile-manifesto-myth-methodology/
Mon, 10 Apr 2017 08:40:24 +0000

The Manifesto for Agile Software Development was written back in 2001. What a seminal work of software development! In all the time since I haven’t actually found anything wrong with it. I’m a big fan, and so are my colleagues at 345. Throughout the ’00s it looked like agile development was becoming a thing, but then the wheels started falling off. “Agile” got itself a bad name for a variety of reasons and the vultures were already waiting to feast on its fall.

So what went wrong?

Individuals and interactions over processes and tools

Well, Scrum for a start. The first of the values is “Individuals and interactions over processes and tools”. So the first thing was to… build a process and a set of tools so that we can control the individuals and interactions. Um, yeah.

I’m not saying that you go off half-cocked and just wang in whatever code you fancy on any given day. Nooooooo. But management (almost) always want to formalise process, and in doing so they (almost) always place the needs of the process above the needs of the individuals. If you have a team of low-productivity automatons who are happy to sit at desks like corralled cattle all day you might not notice much difference, but good people don’t work like this. I mean, really good people. People who work off inspiration, who can come up with solutions in a couple of days that are worth more than their entire annual earnings.

Working software over comprehensive documentation

The second value, “Working software over comprehensive documentation” is a corker as well. It really separates the people who deliver from the people who hide behind process as an excuse for non-delivery. If you value working software primarily, delivery of quality product is your focus.

Let’s put this another way. If you’ve worked on a large and heavily-documented system, how often were your documents read? I mean REALLY? Did you deliver value, or did you make sure you followed the process? Did you deliver quality[1], or did you ensure the checklist for the project’s “quality gate” was met?

What does get read, and read often by nearly all technical staff, are conceptual diagrams. Boxes and arrows. Yes, do these, and lots of them. In fact, why don’t you whiteboard this stuff and then take photos of it? Do this design activity often, refining all the time, but don’t spend your time making it into documents that nobody reads.

The best documentation I have left behind on a project was a 2-hour video of me and a colleague at a whiteboard briefing some new team members on our application. That video was watched by every new starter after that.

Put it another way: You have stuff that originates in the heads of your tech team, and needs to get into the heads of other team members (especially new starters) so that they can contribute to delivery. What is the most effective method of extracting and transmitting this information? Use that method. Don’t write documents because someone selling a methodology in the ’80s decided you should.

Focus on value. Value is working, high-quality software.

Customer collaboration over contract negotiation

Those of us who build software for clients need contracts. Contracts specify how we get paid. At a fundamental level they need to happen.

Often a strange metamorphosis happens with contracts. Ideally, a contract would say something like “hey, build us a great system, work as good as you can, and we’ll pay you based on the effort you put in”. You then set off on the best route you can see, but you learn along the way, you get new ideas, you get feedback from users when they see prototypes. You incorporate the feedback, the product improving with every iteration. Everyone’s happy, the developers are motivated, the product is great, minimal time is wasted. Great job everyone.

Or, back in the real world, management (especially financial management) step in. They want a contract that says “hey, build us a system that does X and Y, and deliver it on Z date and we’ll pay $$”. Off they walk, smug in their ignorance of how things work in the real world. This leaves much unanswered, such as what does X mean? If you don’t give us the information on Y then we can’t make date Z. What happens if this is more complex than we thought?

Next, you’re staring down the barrel of a “process” that’s disguised as project management but underneath is subterfuge designed to provide commercial cover, creating a contractual imperative to adopt bad behaviour.

Responding to change over following a plan

Before long you’re in a much darker place than you were before. You worry about delivering the letter of the contract rather than delivering a great system. Collaboration gets frowned on, because improvements you find along the way would require contract change, and the contract is already agreed. Do you need to get the legal bods in again? Do you want to ask the CFO for more money? No. Then pipe down.

Inability to respond to change is almost always closely tied to the commercial conditions you work under.

It doesn’t have to be this way. The difficult thing is that approaching in the right – agile – way takes trust.

To be Agile is to trust

Accept that when you start your knowledge is incomplete. Accept that the product you need is not the one you’re able to describe when you start. Accept that your priorities will change. Get good people together who are motivated to get the job done.

Then trust them to do it.

[1] For those who don’t understand the philosophical implications of Quality, I recommend reading Lila by Robert M. Pirsig.

Projects spawned in the Mouth of Hell
https://www.345.systems/methodology/hellmouth-projects/
Fri, 31 Mar 2017 15:14:34 +0000

When project delivery goes bad

Who is there, my friend, can climb to the sky?
Only the gods dwell forever in sunlight.
As for man, his days are numbered,
whatever he may do, it is but wind.
-Gilgamesh, trans. Andrew George

The sprint retrospective was going about as well as could be expected

Don’t trust those smiley affirmative the-universe-will-provide types who go -I have no regrets.

Only sociopaths have no regrets. Presumably, sometime after man’s first disobedience brought woe unto the world, mortal taste and all that, even god, 1000 yarding it through the bottom of a whiskey glass late at night, be like –oh, maaaan!

I got plenty regrets. And in computers, those regrets generally come somewhere in the middle of the how many-eth night you’re trudging on a death march against a towering deadline you don’t have a hope in hell of making, stacked empty pizza boxes a greased monument to your failure to build anything more meaningful than trash that’ll go into the recycling next time you get round to cleaning. I don’t even really like pizza. But it’s passed into lore that techies perform to pizza like dolphins to sardines and so managerial types keep on running the math that cardboard pizza adequately compensates for all the overtime.

The customer loathes you, you despise the customer, the project manager thinks the technicians are lazy bums, the technicians wonder out loud if the managers, given an accelerometer and an orienting frame of reference, would be able to find their own backsides – an inability all the more surprising when that’s where most of the deeply unmotivating motivational speeches seem to be coming from. But everyone involved, everyone really hates the salesman.

The salesman doesn’t hate anyone, because he’s off somewhere sunny enjoying his bonus. Because he already got his bonus on signing the deal, not on actual delivery, nor even for making promises that’re even vaguely realistic in a continuum where parallel lines never meet.

You regret the corners you have to cut to make it over the line, you regret your bleeding fingers, you regret your irradiated raw eyeballs, you regret how good it could have been if you only had a bit more time, you regret the life you’re missing out on, the birthday parties you didn’t attend, the concerts you didn’t go to, the restaurant bookings you had to cancel, you regret the love interests to whom you can’t even really explain the why without getting into heavy tech specifics that they have no interest in anyway, you regret the whole sorry state of an engineering industry pretending to be a customer servicing business, you regret the sense of pride and duty that won’t let you do a half-ass job even under these goddamn circumstances. And no one will be positing rhetorically that your glory can’t fade either, because whatever you have just busted a hump getting done will be superannuated in less than 5 years. And no one will care. You won’t even care. And you will regret that too.

You wonder idly, if you left for the airport now, you could be somewhere sunny before tomorrow, track down that bastard salesman, throttle him in his sleep, make it back before anyone even knows you were gone. Even if they caught you, no court in the land would convict you, right? You got cause.

Redefining the measure of success

It’s a quintessential human trait: we stumble ball-deep through sewers of our own making, mired in foetid excreta, and we still take the time to stop and to ask -just how did we get this far into the shit? Organizations take the time to call meetings on this right there in the stygian gloom, wallowing in their effluvia as they choke in the miasma. Never again, they say, lessons have been learnt, celebratory flagellants. At this point you don’t need a paddle, you need a steamer. Try slipping that by procurement.

The worst start redefining what success means until they can hail blatant failures as heroic successes. Governments are pretty good at this sort of thing.

And here you are, in the dead of night, you ask -Where did it all go so horribly wrong? And it’s one of those worldly ironies that it’s not normally all that difficult to answer that question either, all other things being equal. Because you know. . . like the man said in impeccable dactylic dimeter: someone had blundered.

The difficulty is more normally to get the right people to admit what caused the problem, given that the right people are usually the ones who catalyzed the cascading failure in the first instance. The right people being the right people in that they’re the ones who could have done something about it, but in all other respects they are the wrong people. Dunning Kruger writ large.

Hell, I’m not even naive and idealistic enough anymore to hope that the jabronis who cause the problems will have a moment of clarity, if not basic decency, and actually admit that, yes, in hindsight, yes there is a problem here and yes in hindsight that problem is one that we created and having said all that, here’s a vaguely realistic plan for how we’ll fix it. I’ll settle for them just not doing it again. A teachable moment, of sorts.

Fat chance, sonny jim. Whisper it: I’m also not always convinced that I didn’t have a bigger hand in the failure than I like to think. Should have fought harder, been smarter, quicker off the mark.

As an employee, there’s not a helluva lot you can do about this dynamic. Yours is not, usually, to reason why. As a freelancer, or a business owner, assuming you’re sensible, you get warier about barging in with timbs where angels more normally fear to tread. Even so, keeping with Pope, a little learning is a dangerous thing.

Hellmouth projects take on a life of their own

it’s alive! it’s alive!

Hellmouth projects achieve biogenesis by evolving a malevolent homeostasis of sorts. The hellmouth maintains its perverse momentum by endergonic metabolic reactions that suck any chance of achieving positive results out of its hosts as a precursor to full-blown metamorphosis into a facultative ectoparasite.

It lives, in short. Like Swamp Thing. It’s hungry too. The hellmouth consumes energy for its own parasitic political purposes, energy that should have been spent delivering the end-product instead. Sort of like the Straumli Blight, once woken unstoppable by nothing short of a destructive spasm in space-time that destroys both contagion and host.

And then while you’re there, wondering how it went so wrong, you marshal the sorry remainders of your synapses to extricate some sort of redemption from the doom: satisfy the stakeholders, satisfy the customer, survive this thing somehow. You wonder, with former District Attorney Dent, whether you can be a decent man in an indecent time.

The time where you get to fix a project where everything goes wrong is before it starts. Anything later than that is too late. Measure twice, cut once. There’s no Buffy to save you once the hellmouth opens.

I’m not even going to bother with the numbers. If you’ve worked anywhere within the ambit of IT you’ve lived this, nevermind seen it. If you thought it was just you, take heart: see the various stats from the likes of Gartner and Standish’s CHAOS over the years – something in the area of 70% of IT projects fail. The bigger the project, the bigger the fail. You could probably nitpick over the data gathering methodologies of these studies, so let’s just say that the number is way too high, whatever it is.

Given those mistakes that you laughably name your experience, you get better at spotting the signs of trouble in advance. Assuming you don’t spend all your time justifying previous failures instead of learning from them, you develop a combat mindset, tingling spidey-like in the presence of implausible project objectives, duplicitous salesmen and enervated project managers. Like a Quaker, you don’t swear oaths. Oaths, not oats.

Don’t make promises you can’t keep. If you don’t understand empirically what promises you can keep, you shouldn’t be making them. Especially if those promises are what a civil court would refer to as a “contract”. That’s not quite the same as promising to eat more vegetables and do more exercise this year. Also, if you’re unclear what empirical means, stick to the golf.

The Fourth Horseman: “. . .and Hell followed with him.”

Sometimes this means saying No to customers. This is hard. Customers come to you in need, and in their desperation they can push hard to get the answers they want to hear, rather than the answers they need to hear. They’ll put you on the spot to commit to a guaranteed fixed price and a deadline when you don’t yet understand the full scope of the delivery.

Unknown Scope of Delivery, Fixed Price and Immovable Deadline: three horsemen of the project delivery apocalypse. The Fourth Horseman is the idiot who is willing to engage on those terms. At best the idiot is jejune, committing the crime of credulity and unwarranted optimism; at worst they are deliberately and disingenuously profiteering in the short term, even though the long-term cost and reputational damage far outweigh any overall benefit.

postscript:

Wow, that took a dark turn pretty quickly, huh? I wrote the above while I was thinking about how a lot of companies generally behave when projects go wrong, and the flashbacks that overpowered me somewhere around para. 4 weren’t your pleasant Woodstock type.

None of the above refers to any project past or present of 345. I was thinking of the time I spent in the trenches way back in the beginning of my career – sadly probably an all too common malaise in this industry – when you’re still junior enough to have to put up and shut up.

In fact, not following the script for these nightmare scenarios is sort of the point of 345. I was fortunate enough to meet great business partners some years ago and we all share a vision of doing things differently: a large part of the reason that we started 345 was that we want to deliver great software that makes all the difference for happy customers, rather than being yet another IT consultancy that specializes in politicizing excuses for non-delivery.

If you too are sick and tired of IT projects that fail before they even begin, and want to talk about how we can work together to buck the trend and have great IT project delivery instead, get in touch here.

Software Development “Process Smells”
https://www.345.systems/methodology/software-development-process-smells/
Mon, 06 Feb 2017 00:00:01 +0000

We’ve all heard of code smells right? These are the tell-tale signs that things aren’t right. I haven’t heard anyone talking about software development process smells, the tell-tale signs that your process is awry, so I thought I’d put a stake in the ground. I’d love any of our readers to chip in with more suggestions and I’ll keep this page updated in the future.

So, here’s my initial list of the smelly stuff:

Manual Testing

This is a massive bugbear of mine, and one of the key process smells. If you do anything manually you aren’t guaranteeing repeatability. If you haven’t got repeatability you can’t guarantee quality. Since testing is supposed to be about assuring quality, manual testing is a contradiction. Stop it. Right now.

Well, maybe not right now, but as soon as you get automated testing in place.

Remember as well that manual testing is like a parasitic load on your development cost. OK, writing an automated test may be 3x more expensive than running the same test manually once, but an automated test can then run in a regression sweep every time you check in code. This catches issues early and allows them to be fixed when it is cheap and effective to do so.

When you have the dead weight of manual testing on your project you tend to make testing an event rather than an integrated part of the process. You store up more and more code into a release with the intention of testing it once. You then find bugs and end up with stabilization phases. Don’t do this. Test early, test often, test automatically.
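As a sketch of what “test automatically” can look like at its simplest, here is a shell gate that runs the suite on every check-in and blocks the pipeline on any failure. The `run_tests` function is a stand-in for your real runner (dotnet test, npm test, pytest, whatever your stack uses), not a real command:

```shell
#!/usr/bin/env sh
# CI gate sketch: run the whole suite on every check-in and refuse to
# continue the pipeline if anything fails.
set -e

run_tests() {
  # Placeholder: replace this stub with your real test command.
  # Its exit code is the verdict.
  echo "running regression sweep..."
  return 0
}

if run_tests; then
  echo "all tests passed: safe to merge"
else
  echo "tests failed: blocking this change" >&2
  exit 1
fi
```

Wire this into your build server so that a red suite physically cannot reach production, rather than relying on anyone remembering to run it.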

Manual Deployment

Ditto the above for the same reasons of quality, repeatability and parasitic effort / cost.

Manual deployment often comes around because of a lack of capability as much as anything. You need to change your ops procedures from “click through the wizard” to “write a shell script” for everything they do. Once you have the shell scripts to do everything, from provisioning and configuring infrastructure to application deployment, your task to automate end-to-end becomes a heck of a lot easier. There are plenty of tools out there that will process a YAML file full of bash commands.

Sometimes I find the inability to deploy automatically is a key factor in release constipation. The inability to consistently deploy flags releases as risky, and hence something to be avoided. Releases should be seen as low-risk and something to be embraced. You get this from automation, and automated quality control.
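A minimal sketch of what “write a shell script” might look like for a deployment, with every path and name an illustrative stand-in rather than a real production layout: each release lands in its own timestamped directory and a symlink swap makes it live, so the same script gives the same result every time.

```shell
#!/usr/bin/env sh
# Repeatable deployment sketch: timestamped release directories plus a
# symlink cutover. Paths are stand-ins for demonstration purposes.
set -eu

TARGET="${TARGET:-/tmp/demo-deploy}"             # stand-in for the real server path
RELEASE="$TARGET/releases/$(date +%Y%m%d%H%M%S)"

mkdir -p "$RELEASE"
# In real life you would unpack the build artefact here, e.g.:
#   tar -xzf myapp-1.2.3.tar.gz -C "$RELEASE"
echo "placeholder app" > "$RELEASE/app.txt"

ln -sfn "$RELEASE" "$TARGET/current"             # near-atomic cutover via symlink swap
echo "deployed $RELEASE as current"
```

Because old releases stay on disk, rolling back is just pointing `current` at the previous directory, which is exactly the kind of low-risk property that makes releases something to embrace rather than fear.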

Planned Production Downtime

This follows hot on the heels of the above point.

For many in the .com economy, whether you’re running an ecommerce business or a SaaS service, the idea of taking down production for patching or releases is simply untenable. If you switch off your business you’re also switching off your clients. And they don’t like this. They might not come back.

Believe it or not, in the dark and murky world of the enterprise, the practice of planned outages is still alive and well. Businesses that operate Monday to Friday might get away with releasing over a weekend. Just because you can get away with this, it doesn’t mean it’s right. It’s a sign your process smells.

When you take down your production environment for releases it usually means:

You have no disaster recovery / failover capability, as you can’t swap to your backup instances.

You have manual deployment processes.

You are tied to physical machines, rather than provisioning a fleet of newly minted virtual instances ready to swap in for your old ones.

This is also symptomatic of the “deployment is an event not a process” outlined in the next section.

Release Constipation (or Deployment as an Event not a Process)

Otherwise known as releasing annually / biannually / quarterly.

See “crazy branching” below. The longer the gaps between deployments the more unreleased code you have. The more unreleased code you have the riskier each deployment becomes, and the greater the merging backwash you leave behind. This is one of the uber process smells because it’s at the apex of a smell pyramid.

A smell for release constipation is the scheduled release cycle.

Why not release when a feature is ready?

OK, to be fair, I have worked on banking systems whose releases need to align with regulatory change. This often entailed a huge last-minute changeover of systems to meet the regulatory change. Having said this, you’re rarely penalised for being compliant with new regulation too early, only if you’re late. Most of the time you can get almost all of the features out there early, and sometimes you just need to change some configuration at the time the new regs come in.

Almost every excuse I hear for why you schedule your releases instead of releasing on demand actually points to something smelly in your processes. Inability to integration test. Lack of test environments. Lengthy manual test cycles. “The business not being ready”. Manual deployments. Downtime.

Fix the things that prevent you from releasing, then experience the joy of frequent low-risk releases and put the pain of the sh*t-or-bust release behind you.

Crazy Branching

Ever been on one of those projects where someone tries to explain the branching strategy and you get lost after 2 minutes because there’s so much information to take in? Crazy branching is often a sign of release constipation, which is another of the process smells. When you can’t get a release out of the door and into production (maybe because you’ve frozen the features in order to undertake a lengthy manual testing phase), but your team is still working, you end up making extra branches for them to work on.

Of course, your release branch never stays still. You fix bugs. You get emergency feature requests and have to accommodate interface changes when integrating with other systems. These changes in your release branches need to be merged back into your feature branches, and you then end up spending half your time just making sure the merges are done and nothing is regressed.

The fix for this is to ship early, ship often, minimise the amount of unreleased code, keep the lifetime of feature branches short (a couple of days max, ideally) and avoid branching off branches (which branch off branches off master).
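The whole lifecycle of a short-lived feature branch can be sketched in a handful of git commands. Run in a throwaway directory; the branch and author names are illustrative:

```shell
# Short-lived feature branch, in miniature: branch from main, commit,
# merge back, delete. No branches off branches, nothing left unmerged.
set -e
cd "$(mktemp -d)"
git init -q -b main
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
g commit -q --allow-empty -m "initial"

g switch -q -c feature/add-widget     # always branch from main, never from another branch
g commit -q --allow-empty -m "add widget"

g switch -q main
g merge -q --no-ff -m "merge feature/add-widget" feature/add-widget
g branch -q -d feature/add-widget     # delete once merged: lifetime measured in days
git log --oneline                     # three commits, one clean merge, no stragglers
```

The point is not the specific commands but the shape: the branch exists for days, not months, so there is never a mountain of unreleased code to merge back.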

Zip Files of Source Code

If you find that someone has ever felt the need to keep zip files with a cut of your code in them, “in case I need to get back to this build”, you should break out in a cold sweat. There is no better illustration of the fact that you’re not managing your source code repository correctly (or you have got devs who don’t know how to manage source code).

Stop it. You should be able to go back to any commit and build from it. You can branch from any commit in the past. You can do all of this if you’re managing your source code right.

No zip files please. There’s no need and you’re only embarrassing yourself.
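To see why the zip file is redundant, here is a small self-contained demonstration (throwaway repo, illustrative file and branch names): recovering the exact content of an old release, and branching from it, takes two commands.

```shell
# "Go back to any commit and build from it", demonstrated without zip files.
set -e
cd "$(mktemp -d)"
git init -q -b main
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }

echo "v1" > app.txt && g add app.txt && g commit -qm "release 1"
echo "v2" > app.txt && g commit -qam "release 2"

OLD=$(git rev-parse HEAD~1)            # the commit that shipped as release 1
git branch hotfix/release-1 "$OLD"     # branch from any point in history
git show "$OLD:app.txt"                # prints the exact content that shipped: v1
```

Every “cut of the code” anyone has ever zipped up is already in the repository, addressable by commit, which is the whole point of version control.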

Sharing Dev/Test Servers

I remember working with a client once that had a thick-client trading app running against a SQL Server database. To “save on the licence costs” all of the dev team connected to the same database server to debug against.

What could possibly go right with this? Every build you’re running needs to be on a clean stack so you know you haven’t got cross-contamination from anyone else’s changes.

If different testers are running different sets of tests concurrently on the same environment how can you be sure that they’re not affecting each other? (A fix for that is to automate your test suite and run it as part of your release pipeline).

Quality depends on repeatability. You need isolation of your environments to achieve this. If you have no isolation your process smells.

Test Outcome Reports

It’s not unusual for system integrators to agree to write a report describing the quality of the code when they “deliver” it. What’s the point? If you’ve set up your CI properly you should always have 100% of tests passing, or else you’re not protecting your branches properly. You should also be setting minimum coverage requirements on the branches as well. I can give you the stats on quality after every build I do. Why write this down in a report?

The answer, sadly, is that test reports are a process smell of testing-as-an-event. If you have decent CI and automated results available from every build you don’t need to report on quality, you enforce it.
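As a sketch, enforcing a coverage floor rather than reporting on it can be as simple as a build step like this. The coverage figure is hard-coded here purely for illustration; in a real pipeline you would parse it from your coverage tool’s output.

```shell
#!/usr/bin/env sh
# "Enforce quality, don't report it": fail the build below a coverage floor.
set -e

THRESHOLD=80
COVERAGE=85    # placeholder: a real pipeline parses this from the coverage report

if [ "$COVERAGE" -lt "$THRESHOLD" ]; then
  echo "coverage ${COVERAGE}% is below the ${THRESHOLD}% floor: failing the build" >&2
  exit 1
fi
echo "coverage ${COVERAGE}% meets the ${THRESHOLD}% floor"
```

Once a step like this gates every merge, the “test outcome report” becomes a build log that nobody has to write and nobody can fudge.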

Summary

The above points are by no means exhaustive and are intended as a guide to get you thinking about the issue of process smells. What is there in your organization that is a symptom of underlying process stinkers? Contact us via the website or add to the comments below if you’ve got additions to the wall of shame.

Building DevOps on a solid foundation
https://www.345.systems/methodology/building-devops-solid-foundation/
Mon, 09 Jan 2017 00:00:33 +0000

Just about everyone has heard of DevOps by now, right? We have clients that talk to us because they want to “do DevOps”, but if you’re in this situation, how do you even begin to plot the correct journey for them? In this article I’ll go through some of the thought processes I use to...

Buying into the need for DevOps

I start by saying that DevOps isn’t something that you do once, tick a box, and then move onto something else. It’s a way of working. It’s a mindset and an approach.

I’m not a gardener, but I liken it to having a garden. In the early days you design your garden, put your turf down and your plants in. Then you tend to it regularly. You prune and weed, add a little, remove a little. Optimise. Nurture.

I then get clients to understand that manual processes undermine their ability to deliver. This is a biggy. If you do anything manually, no matter how well you document the steps, you always get different results. Eventually, always. Software quality depends on repeatability. Manual processes are not guaranteed repeatable. Manual processes are therefore the enemy of quality.

More than this. Manual processes don’t scale. If it takes me 20 hours this week to deploy something, it will take me 20 hours next week. If I need 10 deployments next week it will take 200 hours (or even more as everyone dies of boredom and demotivation). If I spend 40 hours this week scripting a deployment it may take me 10 minutes next week to kick off 10 deployments. Code scales, people do not.

Chart illustrating weekly effort of manual vs automated deployment

Chart of cumulative effort for manual vs automated deployments
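The arithmetic behind those charts is worth making explicit. Using the figures from the example above (20 hours per manual deployment, versus a one-off 40 hours of scripting plus roughly 10 minutes per automated run):

```python
def manual_hours(deployments: int, hours_each: float = 20.0) -> float:
    """Cumulative effort when every deployment is done by hand: it never flattens."""
    return deployments * hours_each

def automated_hours(deployments: int, setup_hours: float = 40.0,
                    minutes_each: float = 10.0) -> float:
    """Cumulative effort after a one-off investment in deployment scripting."""
    return setup_hours + deployments * minutes_each / 60.0

# The scripted approach pays for itself by the third deployment.
for n in (1, 2, 3, 10):
    print(n, manual_hours(n), round(automated_hours(n), 2))
```

The exact figures will vary by project, but the shape of the curves will not: one line grows without bound, the other is nearly flat after the initial investment.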

Examine the underlying practices

DevOps doesn’t appear in a bubble. It should be a wrapper that encompasses other practices. If you’re doing the other practices well you should be able to build on top of them. If your dev practices are weak you need to address them first.

Look at your source control processes, your branching strategy and how you come to release code. Is this rock solid? You can’t build quality software until you get your source management right. Fix it. Get the right tools and learn how to use them.

Look at your build processes. Make sure you’re automating your builds with every new commit. Are you ensuring that bad builds are rejected? Does your build process test quality? How? Does it run unit tests? Does it set code coverage thresholds? Do you block merges to your master branch if the quality measures aren’t met?

Look at your testing strategy. What’s the reliance on unit testing versus integration testing? (This is a big subject, with no single answer). If you make a change, how confident are you that you haven’t broken anything? Are you able to automate a test run and get a report on your quality? How do you manage integration with systems developed by other teams or vendors?

How do you provision infrastructure? Do you procure physical tin? Are you virtualized? Are you able to provision a new environment, or scale your production environment, by running scripts?

Examine and understand all of the practices that underpin your delivery. Methodology is for selling books and consultants. Practices are the key to good development. Practices and a commitment to excellence.

Examine your processes

The best processes are the simplest. The fewest branches. The smallest number of active deployments. The shortest time from commit to release.

Look at what you’re doing, and examine it critically to see if it’s adding value and contributing to quality.

Look at each process and understand what happens at each stage. In detail.

Plan for end to end

Once I’ve been over the practices and processes I then start working with clients on a plan to get end-to-end DevOps in place.

You need to look at where your pain points are and then plan to eliminate them one by one.

Where are your pain points? Look where you’re burning resource that isn’t adding value. The only true measure of value is working software (Agile Manifesto). Any effort that isn’t contributing to building working software is a symptom of waste and poor quality. Script waste out of your project. Build solidly scripted sub-processes. Tend to them. Improve and optimise them often.

Building your DevOps is like building links in a chain. Model your process, script each step. Encourage continuous improvement via evolution; dissuade stagnation. Aim to go from a commit, through testing, to deployment solely by running scripts. Once you have achieved this, think about how you can join the links to create your release pipeline.
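The chain-of-links idea above can be sketched in a few lines. This is a toy illustration (the step names are hypothetical; real links would shell out to your build, test and deployment scripts): run each scripted step in order, and stop the chain at the first failure.

```python
def run_pipeline(steps: dict) -> tuple:
    """Run each named step in order; stop the chain at the first failing link.

    Returns (success, names_of_completed_steps)."""
    completed = []
    for name, step in steps.items():
        if not step():  # each link is a callable returning True on success
            return False, completed
        completed.append(name)
    return True, completed

# Hypothetical links in the chain -- in reality each lambda wraps a script.
pipeline = {
    "build": lambda: True,
    "unit-tests": lambda: True,
    "provision": lambda: True,
    "deploy": lambda: True,
}
ok, done = run_pipeline(pipeline)
```

Once every link is a script, the orchestration layer (whatever CI/CD tooling you use) is doing little more than this loop, which is exactly why getting the individual links solid comes first.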

In summary

DevOps is like many other things in software: there were people doing DevOps before it was even a thing. That’s because their commitment to excellence led them to devise good practices to support what they were doing.

My advice would be to focus on building the foundations right. Before you know it you’ll be “doing DevOps” because the components will all be there.

]]>https://www.345.systems/methodology/building-devops-solid-foundation/feed/0Day 1 DevOps: A Manifestohttps://www.345.systems/methodology/day-1-devops-manifesto/
https://www.345.systems/methodology/day-1-devops-manifesto/#respondMon, 19 Dec 2016 00:00:24 +0000http://www.345.systems/?p=263I believe everyone starting a software project should start their DevOps on the first day [of the build cycle] of their project. I believe that failure to do this leads to bad places almost every time, and the more complex the solution the worse mess you can get into. This is my manifesto for getting your DevOps...

]]>I believe everyone starting a software project should start their DevOps on the first day [of the build cycle] of their project.

I believe that failure to do this leads to bad places almost every time, and the more complex the solution, the worse the mess you can get into.

This is my manifesto for getting your DevOps lined up from the start of a project.

What is DevOps?

DevOps is the term used to describe a set of practices used to automate the delivery of software and infrastructure. Most software delivery best practices have incorporated automated build / Continuous Integration (CI) for a long time, but as automation extends from the developer’s code commits up to the point of deployment the range of practices involved has expanded to include scripted provisioning of infrastructure, automated deployment and automated testing.

There is no strict definition of DevOps, but I’d put a stake in the ground to say that if you are manually changing settings on any server in your test or production environments then you need to improve your DevOps.

Manual changes are not repeatable. Without repeatability you cannot achieve consistent quality.

Manual processes are not scalable. Without automation you cannot improve productivity.

No excuses

I’ve been on too many projects where deployment is left too late. I’ve heard a lot of excuses. I’ve yet to hear a compelling one. I just hear that some people aren’t interested in quality.

Excuse: We don’t want to spend the money on hardware yet, so we don’t need DevOps.
Retort: What? You’re happy spending money building something, and not knowing if it works, but you can’t even stand up a few VMs?

Excuse: We haven’t designed the infrastructure yet.
Retort: What? You don’t even know how you’re hosting your solution yet you’re willing to take the risk building it?

Excuse: We don’t have the expertise to build the infrastructure [or deploy the solution] yet.
Retort: Concentrate on building your DevOps expertise before you start building software.

Excuse: We outsource that to someone else.
Retort: Exactly how will you be in a better place by getting them to do this later?

Excuse: It will take too long.
Retort: Exactly how will you be better off if you burn that time – and more – later on, when your project is at a more critical stage?

When to start

We all start building a new application with something resembling a “hello world” app. Even if we’ve just initialized a new Git repo, we can create:

An index.html with static text for a website.

An API route that GETS “/status” and returns a 200.

A background service that writes out to a log.
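The second item on that list really is minutes of work. As a stdlib-only sketch (a bare WSGI callable; your framework of choice will look different, and the route is the one named above):

```python
def app(environ, start_response):
    """Tiny 'hello world' service: GET /status returns 200, anything else 404."""
    if environ.get("PATH_INFO") == "/status" and environ.get("REQUEST_METHOD") == "GET":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"OK"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not Found"]

# To serve it locally:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Trivial, yes, but it is running code with an observable response, which is all you need to stand up the pipeline around it.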

Literally, within minutes of starting a new software project, you can have a few lines of code that do something trivial that demonstrate running code. It is at this point that you should deploy your code.

Don’t leave it till later. That is a path to bad things. Deploy now. You now have enough code to:

Set up your source control repo, branching structure, permissions.

Set up your CI and automated tests.

Set up your target infrastructure and DevOps scripts.

Set up your deployment scripts to automatically deploy.

If you wait to add devs onto your project until your quality processes are in place you won’t regret it. Not for a moment. You might also uncover [early!] which developers are used to achieving high quality and which aren’t. And you might teach the ones that aren’t a lesson that will benefit them for the rest of their career.

Impact of cloud

At some stage we’ve all been forced to manually deploy some code onto a server at the last minute because something went wrong. Usually it’s not a pleasant experience.

The world of cloud computing makes the bad habit of manual deployment unsustainable. On the other hand, it makes the exercise of scripted deployment easier as there is no physical infrastructure to provision. Provisioning of infrastructure and deployment of code are all just lines of script. And usually fairly brief.

Cloud infrastructure richly rewards those with good practices and scripted, repeatable processes. You can spin up a test environment in seconds, run a suite of tests, and then tear the environment down when it has only cost you pennies in CPU time. Usually this can be achieved with a handful of lines of script.

Cloud is your friend. Embrace it.

Penalty of leaving it late

As your codebase gets bigger, with more contributors, it gets harder and harder to get your deployments working. All the time you’re trying to deploy, you have colleagues that may break your deployments.

If you can get your deployments and test runs green after your first day of code, make it your developers’ responsibility to keep them green. Human nature being what it is, if you’re trying to deploy to new environments it will be your problem until you get it fixed. If you have a working deployment the responsibility should fall on whoever breaks it. This works in your favour, so get it right on day 1.

Bigger picture

It’s easy to get devs to start cutting some code; that’s what they all love to do. A Solution Architect can identify applications and services, you can get working on features, and everyone’s happy.

It’s a different mindset to consider deployment from day 1. It is often the case that the design a Solution Architect may recommend is driven by features and functional requirements, but it is the physical / virtual infrastructure that will dictate how non-functional requirements are supported.

If you need to put in place realistic infrastructure on day 1 you need to have a grip on how to meet performance, scalability, reliability, security, upgradeability and a whole host of other non-functional aspects of your solution. It helps immensely to get these baked in early so that you are building on solid foundations.

Done means done…when?

One of the main problems with project management on software projects is when people estimate for features based on how long they take to code. If you allow a developer to code a feature as “works on my machine” and declare the task done then you’ve lost.

You can’t even rely on unit tests passing; you have to look at test coverage as well. Unit test coverage should be as close to 100% as possible. I can’t put a number on where you should draw the line, but it needs to be high.

You should also be testing features for completeness, so functional tests encompassing user stories or business scenarios will also need full coverage. This should absolutely be at 100% for a feature to be considered done.

The more features you have, the more you will be in a position to run load on your system and perform security penetration tests. You will be “upgrading” continuously, as your app will already be deployed.

Done really means done when the feature works in production and passes all tests (with full test coverage). Day 1 DevOps gives you the surest path to this.