My first job was in Operations. When I got to be a Developer, I promised myself I’d remember how to be good to Ops. I’ve sometimes succeeded. And when I’ve been effective, it’s been in part due to my firsthand knowledge of both roles.

DevOps is two things (hint: they’re not “Dev” and “Ops”)

Part of what people mean when they say DevOps is automation. Once a system or service is in operation, it becomes more important to engineer its tendencies toward staying in operation. Applying disciplines from software development can help.

These words are brought to you by a Unix server I operate. I rely on it to serve this website, those of a few friends, and a tiny podcast of some repute. Oh yeah, and my email. It has become rather important to me that these services tend to stay operational. One way I improve my chances is to simplify what’s already there.

If it hurts, do it more often…

Updating the server’s software every week makes it:

Less likely, at any given time, that I’m running something dangerously outdated

More likely, when an urgent fix is needed, that I’ll have my wits about me to do it right

Updating software every week also makes two strong assumptions about safety (see Modern Agile’s “Make Safety a Prerequisite”): that I can quickly and easily…

Roll back to the previous versions

Build and install new versions

Since I’ve been leaning hard on these assumptions, I’ve invested in making them more true.

The initial investment was to figure out how to configure pkgsrc to build a complete set of binary packages that could be installed at the same time as another complete set. My hypothesis was that then, with predictable and few side effects, I could select the “active” software set by moving a symbolic link.
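Roughly speaking, the arrangement looks like the following mk.conf sketch. This is an illustration of the idea, not my exact configuration: the prefix names are made up, and services find the active set through a symlink rather than through these literal paths.

```make
# Sketch: give each complete package set its own prefix (illustrative paths).
LOCALBASE=	/usr/pkg-2024Q2          # this set installs beside, not over, the last one
PKG_DBDIR=	${LOCALBASE}/pkgdb       # package metadata travels with the set
# Services resolve their paths via a symlink (say, /usr/pkg) that points
# at whichever prefix is currently "active".
```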

It worked.
On my PowerPC Mac mini, the best-case upgrade scenario went from half an hour’s downtime (bring down services, uninstall old packages, install new packages, bring up services) to less than a minute (install new packages, bring down services, move symlink, bring up services, delete old packages after a while). The worst case went from over an hour to maybe a couple of minutes.
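The symlink move at the heart of that minute-long sequence can be sketched like so. This runs in a scratch directory with made-up set names; the commented-out service steps are placeholders for whatever your rc system uses.

```shell
#!/bin/sh
# Demonstrates the "move the symlink" step in a scratch directory.
# pkg-old and pkg-new stand in for two complete installed package sets.
set -eu

demo=$(mktemp -d)
mkdir "$demo/pkg-old" "$demo/pkg-new"
ln -s "$demo/pkg-old" "$demo/pkg"   # services resolve paths via this link

# (bring down services here)

# -f replaces the existing link; -n operates on the link itself rather
# than descending into the directory it currently points to.
ln -sfn "$demo/pkg-new" "$demo/pkg"

# (bring services back up; pkg-old stays on disk for quick rollback)

readlink "$demo/pkg"                # now reports .../pkg-new
rm -rf "$demo"
```

Because the old set is still fully installed, rolling back is the same one-line link move in the other direction.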

…Until it hurts enough less

I liked the payoff on that investment a lot.
I’ve been adding incremental enhancements ever since.
I used to do builds directly on the server: in place for low-risk leaf packages, as a separate full batch otherwise.
It was straightforward to do, and I was happy to accept an occasional reduction in responsiveness in exchange for the results.

The last time I improved something, it was to fully automate the building and uploading, leaving myself a documented sequence of manual installation steps. Yesterday I extended that shell script to generate another shell script that’s uploaded along with the packages. When the upload’s done, there’s one manual step: run the install script.
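In sketch form, the generate-a-script idea looks something like this. The file names are made up, and the echo stands in for the real install command (pkg_add); the point is that the upload carries its own instructions, so the server-side work collapses to one command.

```shell
#!/bin/sh
# Sketch: the build-side script writes an installer next to the packages.
# Demonstrated in a scratch directory with a fake package file.
set -eu

pkgdir=$(mktemp -d)                 # stand-in for the upload directory
touch "$pkgdir/example-1.0.tgz"     # stand-in for a freshly built binary package

# Generate the installer alongside the packages, so the only manual
# step after uploading is running it.
cat > "$pkgdir/install.sh" <<'EOF'
#!/bin/sh
set -eu
cd "$(dirname "$0")"
for pkg in *.tgz; do
    echo "installing $pkg"          # placeholder for: pkg_add "$pkg"
done
EOF
chmod +x "$pkgdir/install.sh"

"$pkgdir/install.sh"                # prints: installing example-1.0.tgz
rm -rf "$pkgdir"
```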

If you can read these words, it works.

DevOps is still two things

Applying Dev concepts to the Ops domain is one aspect.
When I’m acting alone as both Dev and Ops, as in the above example, I’ve demonstrated only that one aspect.

The other, bigger half is collaboration across disciplines and roles.
I find it takes some not-tremendously-useful effort to distinguish this aspect of DevOps from BDD — or from anything else that looks like healthy cross-functional teamwork.
It’s the healthy cross-functional teamwork I’m after.
There are lots of places to start having more of that.

If your team’s context suggests to you that DevOps would be a fine place to start, go after it!
Find ways for Dev and Ops to be learning together and delivering together.
That’s the whole deal.