Technical Solutions for Technical People

Identifying and Paying Off Technical Debt

While at Interop Las Vegas, I sat on the Ask the Experts panel in the Virtualization and Data Center Architecture track. The audience's questions covered a variety of technologies and methodologies, but one stood out: a gentleman was worried about moving forward with an application that could only run on 32-bit architectures. As you're probably aware, those are dying off rapidly – the latest wave of Windows operating systems doesn't even offer a 32-bit option. Constraints like this are daunting, especially with so much noise being made about other ways to deliver applications, such as virtual machines and containers.

The reality, however, is that problems like this are more common than they seem. Businesses love to delay fixing architecture, technology, and hardware issues until they become so painful that there's no other choice. This is called technical debt. Put simply, it is the tax you pay on poor technology decisions made in the past. The tax comes in many forms – poorly written code, delayed hardware refreshes, and other band-aids applied to make things last just a little bit longer – but it is most often represented as the operational effort required to set things straight. That corrective effort will exceed the time it would have taken to do things right the first time.

I can do a quick fix right now, but someone will have to redo it and fix it properly at some point. It’ll be difficult to debug and in the end it’ll take twice as long. (source)

Technical debt isn't a technology problem; it's a people and process problem. And often it isn't your problem, originally, but something you've inherited. You're paying off someone else's technical debt. Perhaps it was a decision your boss made, or one made by a co-worker who has since left the company. In some cases, someone at your last job is paying off your old technical debt. Right now. Ouch, right?

Confusing Debt with Blame

Technical debt is often confused with technology blame. Take the 32-bit application constraint as an example. You might want to blame Microsoft for no longer providing a 32-bit operating system. It's their fault! And so you lash out at the vendor and label this a technology problem. I see it differently: this is the long tail of technical debt being called to term. It's time to pay up, and pay up you will.

How does one start to pay off this debt? First, it's a good idea to document major pitfalls for future reference. 32-bit operating systems have been fading away for years; their demise is not a surprise. Recognize this and start documenting the debt. Perhaps submit it to your PMO (Project Management Office) or CAB (Change Approval Board). Assign it a risk value with your architecture team or, if you are the architect, on your internal road map.
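To make "assign it a risk value" concrete, here is a minimal sketch of what one entry in a technical-debt register might look like. The field names, the 1–5 scales, and the likelihood × impact scoring are all illustrative assumptions, not a standard – adapt them to whatever template your PMO or CAB already uses.

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    """One entry in a technical-debt register (fields are illustrative)."""
    description: str
    likelihood: int   # 1-5: how likely the constraint bites within the road map
    impact: int       # 1-5: cost to the business when it does
    remediation: str

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring, a common pattern in risk templates
        return self.likelihood * self.impact

# Hypothetical example modeled on the 32-bit constraint discussed above
item = DebtItem(
    description="Line-of-business app requires a 32-bit OS; vendor has no 64-bit build",
    likelihood=5,     # 32-bit Windows editions are already disappearing
    impact=4,
    remediation="Replatform or replace before the next OS/hardware refresh",
)
print(item.risk_score)  # -> 20
```

Even a spreadsheet with these four columns works; the point is that the debt is written down and scored, so it can compete for budget instead of hiding until it becomes an emergency.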

These little blips also make great examples for steering the forward-looking architecture decisions held by the Project Management / Product Management teams. Point out how poor decisions have cost the company time, money, and other intangible resources. Assign weighted risks to corner-cutting decisions, and raise a red flag if a project kicks off without any thought given to strategy, architecture, or road map. After all, 90%+ of the time spent on any successful project is the design work; implementation is trivial in almost all cases, and an awesome implementation can't fix a poor design.

Or if all else fails, just subscribe your technical team to CommitStrip and enjoy the laughs. 🙂

2 Responses

“Technical Debt” is a wonderful term, and there are many times when it is actually more economical to keep paying it than to take on what can be a huge redevelopment cost. Once upon a time 16-bit apps were THE way to go, in part because “640K ought to be enough for anybody,” as said by Bill Gates (in his full marketing mode). And many such applications still exist because they do the job they need to do, and any amount of messing with them (aka “upgrades”) only risks stopping them from doing that job.

As one set of ‘best practices’ gets replaced by the next iteration of ‘best practices’ (sometimes even radically so on the same generation of a product), systems are designed and built in good faith under the ‘best practices’ of the time. This doesn’t make them bad designs, nor does it invalidate the tasks they are accomplishing. The ‘upgrade’ to the current ‘best practice’ usually carries a high cost that in many cases doesn’t improve the system’s ability to do the tasks it was created for, resulting in a huge negative ROI.

For the most part, the blame game is rather silly, though vendors do invite it upon themselves. When the vendor PUSHES the upgrades too hard, it is natural to push back, for the existing systems are working just fine. Take a corporate back-end 32-bit (or even 16-bit) application that is working just fine. In many cases it is easier and more cost-effective to keep it running on the operating environments it was designed for, and simply build the new systems on the new ways of doing things.

Change just because everyone else is doing it? Didn’t your mama ever teach you about friends and jumping off of bridges? In the manufacturing world, change occurs when the risk of not changing exceeds the risk of changing, OR when the new TCO is enough less than the old TCO that the savings pay for the system in under 24 months.
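The payback rule in the comment above is simple arithmetic, and can be sketched as follows. All of the dollar figures here are made-up assumptions for illustration; only the under-24-months threshold comes from the comment itself.

```python
def payback_months(migration_cost, old_monthly_tco, new_monthly_tco):
    """Months until the monthly TCO savings cover the one-time migration cost.

    Returns None when the new system never pays back (no monthly savings).
    """
    monthly_savings = old_monthly_tco - new_monthly_tco
    if monthly_savings <= 0:
        return None
    return migration_cost / monthly_savings

# Hypothetical numbers for a legacy 32-bit back-end application
cost = 120_000      # one-time migration cost (assumed)
old_tco = 18_000    # monthly TCO of the legacy system (assumed)
new_tco = 11_000    # monthly TCO after migration (assumed)

months = payback_months(cost, old_tco, new_tco)
# 120,000 / 7,000 ≈ 17.1 months, which clears the 24-month bar
print(f"Payback in {months:.1f} months -> migrate: {months <= 24}")
```

Under these assumed numbers the migration clears the 24-month bar; double the migration cost and it would not, which is exactly the kind of back-of-the-envelope check the commenter is describing.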