For manufacturing, TCO is typically used to compare domestic production with doing business overseas, so it goes beyond the initial manufacturing cycle time and the cost to make parts. TCO includes a variety of cost-of-doing-business items, for example shipping and re-shipping, and opportunity costs, while also weighing incentives developed for an alternative approach. Incentives and other variables include tax credits, a common language, expedited delivery, and customer-oriented supplier visits.

Many organizations find themselves attracted to the idea of Cloud
and look for solid financial justification to support the business case for
their journey. At first glance the TCO
analysis seems like a perfect fit – run the TCO on your existing data center,
compare it to your projected Cloud deployment, and look at all the money saved. For a mature IT organization with existing data center(s) and workgroup (office closet) solutions, however, the analysis is often not so straightforward.

As in most things, the devil is in the details. The gotcha in the definition above is the
word “indirect.” Typically, it is fairly
easy to collect the direct costs (Physical Servers, Virtual Servers, Storage,
Networking, Labor) for your existing data center(s) but the indirect costs are often elusive. As an example, items like the cost of racking
and stacking servers can easily be overlooked.
These activities may be a direct cost if you have a vendor handle them,
or they may be indirect as part of your general labor. Do you know precisely what it costs you to
rack and stack 1 server, 1 switch, 10 servers, a Firewall?

Other possible indirect costs may be even more challenging because they are open to interpretation. As mentioned in my previous post, Minding Your Technical Debt, how do you account for the cost of fully depreciated hardware? If that hardware is out of support, do you attribute a cost to the risk of not having a support contract? What is the cost of running an unsupported OS or an application that no longer receives security patches? These scenarios certainly have a cost when it comes time for audits, or worse, when you have the misfortune of your systems being penetrated through unpatched vulnerabilities.

Additionally, if you are building a business case for one workload, and not the entire data center, the TCO analysis becomes even more complicated. How do you parse out and allocate the costs attributable only to the workload in question, and, more importantly, how do you calculate the costs of shared services (Active Directory, networking, SAN storage, virtualization hosts, especially if the hosts are intentionally oversubscribed)?

If you don’t accurately capture all costs (direct and indirect), the TCO analysis may not fully support your business case, or it may be easily picked apart. The reality is that it’s nearly impossible to have a perfect TCO calculation. It is easier to calculate TCO for Cloud workloads thanks to billing granularity, but it can still pose challenges. When you are comparing TCO between non-Cloud and Cloud workloads you may find yourself comparing apples and watermelons if you aren’t careful. Cloud costs, while granular, are often fully loaded: they carry the costs of physical security, climate control, redundant power, and many other services already embedded in the hourly rate. If you compare the cost of an EC2 host with the cost of your fully depreciated server (zero cost on the books) in your data center, and you don’t account for physical security, climate control, redundant power, etc., your comparison and any conclusions drawn from it will be deeply flawed.
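To illustrate the apples-to-apples point, here is a minimal sketch of a fair annual comparison. The cloud rate is treated as fully loaded, so the on-prem side must explicitly add the indirect line items a "free" depreciated server hides. Every rate and dollar figure is an illustrative assumption, not a real price.

```python
# Sketch: an apples-to-apples annual cost comparison. The cloud hourly
# rate already embeds facilities costs; the on-prem side must add the
# indirect costs that a fully depreciated server appears to hide.
# All dollar figures are illustrative assumptions, not real rates.

HOURS_PER_YEAR = 8760

# Cloud side: hourly rate assumed to include power, cooling, security.
cloud_hourly_rate = 0.20  # hypothetical instance rate
cloud_annual = cloud_hourly_rate * HOURS_PER_YEAR

# On-prem side: "zero cost" on the books is not zero cost in reality.
on_prem_annual = sum({
    "hardware_depreciation": 0,   # fully depreciated
    "power_and_cooling": 600,
    "physical_security": 250,
    "redundant_power_ups": 200,
    "rack_space": 300,
    "admin_labor_share": 900,
    "out_of_support_risk": 500,   # estimated risk cost, open to debate
}.values())

print(f"Cloud:   ${cloud_annual:,.2f}/yr")
print(f"On-prem: ${on_prem_annual:,.2f}/yr")
```

Leave the "risk" line at zero and the on-prem number shrinks dramatically, which is exactly how flawed comparisons get made.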

So where does this leave us?
How do we create a meaningful business case for a Cloud Journey? Despite all the challenges mentioned above I
believe it is possible to complete a meaningful TCO comparison – you just need
to be careful and methodical. Make sure you are comparing apples to apples. If necessary, enlist the help of professionals with experience in this area, or engage directly with your Cloud vendor for guidance. If you aren’t sure where to start, AWS has a good Excel model for creating a TCO analysis, and for larger engagements they have a whole cloud economics team.

Most importantly, I believe you MUST place a dollar value on
items that your organization may not generally quantify in economic terms. If your Cloud journey is going to result in a
substantial improvement to your security posture you need to calculate a dollar
value for that and include it in the comparison. It could be as simple as estimating the cost of a lost customer, downtime, a security breach, etc. If your Cloud journey will shrink the time it takes you to deploy applications, calculate a dollar value for that savings. When you include these items in your analysis, make them easy to change (a cell in a spreadsheet) in case others challenge your assumptions. The most important part is getting others to acknowledge that these benefits have a dollar value.
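The spreadsheet-cell idea can be sketched in code as well: keep every debatable number as a named input so challengers can change one value and rerun. All figures here are placeholders to be debated, not real estimates.

```python
# Sketch: dollarizing "soft" benefits with assumptions kept as named,
# easy-to-change inputs (the spreadsheet-cell idea). Every number is a
# placeholder assumption, not a real estimate.

assumptions = {
    "breaches_avoided_per_year": 0.1,  # expected breaches avoided
    "cost_per_breach": 250_000,
    "downtime_hours_avoided": 20,
    "cost_per_downtime_hour": 5_000,
    "deploys_per_year": 50,
    "days_saved_per_deploy": 2,
    "cost_per_engineer_day": 800,
}

def soft_benefits(a: dict) -> float:
    """Annual dollar value of benefits a narrow TCO would omit."""
    security = a["breaches_avoided_per_year"] * a["cost_per_breach"]
    uptime = a["downtime_hours_avoided"] * a["cost_per_downtime_hour"]
    agility = (a["deploys_per_year"] * a["days_saved_per_deploy"]
               * a["cost_per_engineer_day"])
    return security + uptime + agility

print(f"Estimated soft benefits: ${soft_benefits(assumptions):,.2f}/yr")
```

If someone disputes an input, change that one value and rerun; the argument shifts from "does this benefit exist" to "what is it worth," which is progress.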

Ultimately there are many good reasons for organizations to
consider a Cloud journey. The purely
financial argument supported by a TCO analysis is only one dimension. Other areas like agility may be as beneficial,
or more beneficial, to your organization than direct cost savings. When creating a business case, I encourage you to tell a multi-dimensional story; be careful not to rely exclusively on the always-imperfect TCO analysis.

“Some debts are fun when you are acquiring them, but none are fun when you set about retiring them.” – Ogden Nash

Technical Debt is one of my favorite terms in IT. Techopedia credits the term to Ward Cunningham; it was originally intended to equate software development choices to financial debt.

“Imagine that you have a project that
has two potential options. One is quick and easy but will require modification
in the future. The other has a better design, but will take more time to
implement. In development, releasing code as a quick and easy approach is like
incurring debt - it comes with the obligation of interest, which, for technical
debt, comes in the form of extra work in the future. Taking the time to
refactor is equivalent to paying down principal. While this takes time in the
short run, it also decreases future interest payments.”

For me, the
term Technical Debt is useful more broadly - across all of information technology. I believe the term succinctly describes the sum
total of an organization’s decisions, good and bad, over a period of time. As the debt from your choices accumulates, the compounding interest can become a drag on the ability of IT to serve the business efficiently. This often
manifests itself in costs well above industry standards and a decreasing ability
to serve the business’s needs in a timely manner.
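The compounding-interest analogy can be put in rough numbers. Suppose each year of deferred work adds a fixed "principal" of extra maintenance, and the friction it causes grows the accumulated burden at some rate. Both figures below are purely illustrative assumptions, chosen only to show the shape of the curve.

```python
# Sketch: the compounding-interest analogy in numbers. Deferred work adds
# a fixed "principal" of extra maintenance each year, and accumulated
# friction grows the burden at an assumed rate. Illustrative only.

def debt_drag(years: int, principal_per_year: float = 50_000,
              interest_rate: float = 0.15) -> float:
    """Accumulated annual maintenance burden after `years` of deferral."""
    burden = 0.0
    for _ in range(years):
        # Last year's burden compounds, then this year's deferral adds on.
        burden = burden * (1 + interest_rate) + principal_per_year
    return burden

for y in (1, 3, 5):
    print(f"Year {y}: ${debt_drag(y):,.0f} of drag")
```

The exact numbers do not matter; the point is that the burden grows faster than linearly, which is why debt deferred for five years feels so much worse than five one-year deferrals considered in isolation.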

I mentioned
good and bad decisions because often well-intentioned, sound business and technical decisions can still result in Technical Debt. We live in very dynamic times, and no technology or business decision is immune to the changing landscape. As the business context changes and technology advances, it’s possible, if not likely, that a decision that was good at the time is no longer the best fit for the organization. Think of this like entropy: a gradual decline into disorder.

Talking
about an organization’s Technical Debt should not be taken as an insult to
those responsible for the previous decisions.
I like to give people the benefit of the doubt and I assume that people
generally want to do a good job. None of
us are immune to bad decisions or good decisions turning bad as situations
change. Often the best we can hope for is to make the right decision given the known options under the constraints we have. Constraints can include lack of time, limited access to viable options, changes in business strategy, an incomplete or inaccurate understanding of the root problem, limited funds, limited resources for implementation or support, lack of expertise; the list goes on and on. We all aim to make the best decision we can with the information we have at the time. No decision is permanently immune to becoming debt, entropy is a natural outcome, and blame is not helpful.

Let’s talk more specifically about what Technical Debt looks like in the modern enterprise IT organization. Servers, storage, networking gear, etc., all have a reasonable service life. Reasonable means different things to different people; from an accounting point of view, it’s common to depreciate assets over a 3-to-5-year period. From a support perspective, manufacturers typically support hardware for around 5 years. As you go beyond 5 years, it can be difficult and increasingly expensive to get replacement parts or buy extended warranties. On the software side, devices running software more than 5 years old are often no longer fully maintained by the manufacturer and stop receiving security patches and software updates. These are generalities, but I believe them to be directionally accurate.
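The accounting side of that 3-to-5-year window is simple straight-line depreciation, sketched below with an example purchase price. Note what it implies: the books show zero value after the schedule ends, even though the risks described above keep growing.

```python
# Sketch: straight-line depreciation against a typical 5-year schedule.
# Purchase price and schedule length are example inputs.

def book_value(purchase_price: float, years_owned: int,
               depreciation_years: int = 5) -> float:
    """Remaining book value under straight-line depreciation."""
    if years_owned >= depreciation_years:
        return 0.0  # fully depreciated; book value is zero, risk is not
    annual = purchase_price / depreciation_years
    return purchase_price - annual * years_owned

# A $10,000 server depreciated over 5 years:
for y in range(7):
    print(f"Year {y}: book value ${book_value(10_000, y):,.0f}")
```

That zero in years 5 and beyond is exactly the number that makes aging hardware look "free" in a naive TCO comparison.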

As an IT department budgets for hardware replacement cycles, it can be tempting to let devices go beyond their projected life and stretch past that 3-to-5-year window. If it ain’t broke, don’t fix it! Viewed through the lens of Technical Debt, the organization must realize that this decision has consequences, good and bad. In the short term this decision has a positive effect on the balance sheet or allows budget to be reallocated to more pressing priorities. In the long term you are taking on Technical Debt and the interest it carries. The enterprise takes on risk in many forms, most of which may not be completely evident to the decision makers. This risk includes security risk (lack of patches, etc.), retention risk (quality resources don’t want to work on old stuff), performance degradation, a higher likelihood of unplanned outages, and diminishing expertise on aging systems; the list goes on and on. As I mentioned above, as your Technical Debt stacks up you will inevitably feel its compounding effects as a drag on your productivity.

From an IT strategy perspective, it is essential to make Technical Debt a familiar part of the Senior Leadership conversation. Technical Debt as a high-level concept is useful in encapsulating the risks described above. While Technical Debt cannot be avoided completely, there are several strategies for minimizing the amount of debt you take on and mitigating the interest. In a perfect world it would be great to place a quantitative dollar value on the organization’s Technical Debt. I look forward to exploring these topics in a future blog post. Please stay tuned and, as always, please provide feedback or ask questions.

As I alluded to in my earlier post, “Teaching an old dog new tricks,” developing a cloud strategy for an existing enterprise is a bit like peeling an onion. There are many layers to work through, each getting progressively more complex (sophisticated) as you work toward the center.

Crawford is a well-oiled machine after 76 years in business. They are a truly global enterprise with operations on nearly every continent (no Antarctica yet). For an organization like this, starting the journey can be a daunting task. Where to begin? I am not sure there is a “right” answer. Let me share a high-level view of how I am approaching it - for the sake of clarity, much of this is running in parallel:

To mix metaphors for a moment: while it’s helpful to have a global vision, you don’t need to boil the ocean and solve for every eventuality all at once. The reason I mentioned peeling an onion is that I believe this process will be iterative, and many steps will repeat as our journey progresses from one layer to the next. A cloud strategy should be a living plan that evolves with the business needs and incorporates new technology and capabilities where appropriate. Don’t expect a great epiphany where it all becomes clear. The most important step on your journey is the first one. Start where you can and go from there.

In the coming weeks I hope to visit each of the bulleted items outlined above in more detail. Please stay tuned and as always – please provide feedback or ask questions.

I recently signed on to work for Crawford & Company
developing their (public) cloud strategy.
As coincidence would have it, on my first day the team celebrated the company’s
76th birthday complete with birthday cake (bonus). AWS (Amazon Web Services) likes to talk about the “stages of
adoption” and the “cloud journey.”
Whichever term you prefer, Crawford is an old (mature, not tired) dog looking to learn new tricks.

Day 1 – where to begin…
I started the journey by doing a brain dump of the data I want to gather: what do the infrastructure, network, governance, and security footprints look like, where is everything physically, how does it fit together logically, what does it cost, and are there any burning issues? I know this journey is going to be like peeling an onion; the best place to start is the outer layer and work my way in. For me the process starts with gathering data because I know unraveling the complete infrastructure inventory is likely to take time, and it provides a great entry point to meet key people throughout the organization. Besides, it’s hard to do analysis without data…

One (of many) things that jumped out at me during the first
week – Listen Carefully. Be
sensitive to any gap between what people say and what they do. The people I am working with are all
top-notch professionals. That being said, mature corporate environments have well-trained antibodies built up to preserve the status quo. Everyone is supportive and excited to embrace a future that includes cloud technologies, but the realities of keeping the lights on often necessitate the purchase of new hardware, software, etc. Every new capital expenditure further complicates the cloud TCO comparison. A topic for a future blog article: how do we break the CAPEX cycle?

I have a number of topics I would like to
document as this journey progresses. Please
check back to see how it’s going and hopefully learn something along the way. If you have any (non-proprietary)
questions, please let me know.