Bound to turn up. The adventures of an early adopter.

Main Menu

Destroy Your Infrastructure

The state of the industry has long been driven by companies selling easy solutions to the incredibly difficult problem of meeting the conflicting goals of performance, usability, and security. What the clued have known and the unclued usually have not is that there are no easy answers, turn-key appliances, or buttons to click to solve complex system technology problems.

So how are these goals met? How do you manage and accomplish the seemingly impossible and nebulous goal of keeping resources and datastores loose enough to use, but tight enough not to be freely available?

It usually takes systems thinking, a data-driven risk analysis process, and a fundamental, hard-won understanding of how things work at the many levels of interaction required to form an effective defense and environmental awareness.

I make this statement because, in my experience in these matters, most technology management is less about a perfectly arranged stack of software and technology. It is more about the problems created by various distinct widgets that were deployed once, that no one sufficiently understands, and that are full of artifacts and bitrot. These solutions have to be maintained because no one knows which unmanaged artifacts are important, which are obsolete, which are vulnerable, and which are superfluous.

Some recent examples of systematic failure:

Sony’s tragically negligent tale of being owned repeatedly. If billion-dollar companies aren’t getting this job right, and we think of valuation, revenue, and profit as markers of corporate success, who can we expect to have done it correctly?

Lately, everyone appears prone to speaking about the usual lolhats and how everyone’s corporate security “sucks” while adding nothing constructive to the conversation. It doesn’t help when security experts get caught using bad practices and weak credentials in disclosures. People who have been paying attention over the last few years know that security hasn’t been improving quickly, even as the primary targets have shifted almost completely into the app tier.

People have been perennially speaking about this problem in the same circular discussions that don’t yield improvements or action. We haven’t really gotten anywhere significant because these discussions are largely irrelevant to everyone.

Everyone? Really? Aren’t you exaggerating, Ian?

Sadly, no. I don’t think so.

Developers are interested in making things that work and nearly universally have zero interest in being security experts. That’s why they’re software developers. If failing closed isn’t part of the design requirements, it will receive little attention. There has been a lot of talk about implementing fail-closed behavior in development frameworks, but I haven’t heard of any actual progress that is changing the landscape.

Non-technology people? They don’t care at all because they have no concept of how these things work, and therefore no way to gauge priorities.

So what do we do?

To address some of these issues, I’m going to voice a simple operational, development, and infrastructure approach instead of another giant heavyweight framework. There are way more than enough frameworks, methodologies, and religious beliefs out there already.

Developers have always been good at implementing new stuff, but they rarely clean out the old code, fragile dependencies, or dirty kludges that should never have been in the codebase/footprint in the first place unless a powerful alignment-focusing incident brings attention and resources to address them.

Lesson: Developers need to do more garbage collection and environment remodeling, and they need to do it more frequently, without first having a motivating catastrophe.
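That kind of garbage collection can start with something as small as a routine report of modules nothing imports. Here is a minimal sketch (the flat project layout and the excluded entry-point names are assumptions; real dead-code detection also needs coverage data and awareness of dynamic imports):

```python
# Sketch: flag Python modules in a project directory that no sibling imports.
# A hypothetical starting point for routine cleanup passes, not a complete
# dead-code detector.
import ast
import pathlib

def unimported_modules(project_dir):
    """Return module names defined in project_dir that no sibling imports."""
    root = pathlib.Path(project_dir)
    sources = {p.stem: p for p in root.glob("*.py")}
    imported = set()
    for path in sources.values():
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imported.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported.add(node.module.split(".")[0])
    # Modules present on disk but never imported are candidates for removal.
    # Entry points are excluded by name here, which is purely an assumption.
    return sorted(set(sources) - imported - {"__init__", "main"})
```

Run on a schedule, even a crude report like this turns “clean out the old code” from a catastrophe-driven event into a habit.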

Operations and business unit people who aren’t among the technocrat initiates tend to fear change because of past bad experiences. Someone needs to go to one of those uber-executive retreats (or RSA keynotes) and give a sexy business-fashion pitch that frames change, and more importantly the acceptance of change risk, as worth taking to keep the enterprise cutting-edge and relevant.

Lesson: Worry less about getting sued and more about doing the right thing. Share data, tell people about problems and how you’re working to fix them, disclose and be transparent to your customers, and foster trust by allowing people to make informed risk management decisions. Many vendors and services organizations hide information in order to maintain a sense of brand, delay or disavow breach disclosure information, or just plain lie about capabilities in order to make sales.

Hiding risk and technical debt

If there is to be improvement, this kind of behavior can no longer be tolerated from firms large and small.

There are too many examples (in tales both trade-secret and disclosed) of technology companies hiding the risks of their products running in customer environments, failing to close critical reported vulnerabilities in a timely manner, or dismissing legitimate risks as “purely theoretical” to go into here. There are a lot of reasons why this hasn’t been effectively addressed, and the cost sink of marginal value that is compliance efforts has taken the place of what was a good opportunity to introduce policy and visionary changes.

SDL/SDLC programs have the potential to improve environments and, when real changes can’t be made to correct development culture, they’re the next best thing. DevOps has the potential to assist in synergizing (did I really say that?) and [re]coupling partnering org units. Metrics programs can often be relevant and valuable. However:

SDL often just gets reduced to a checklist or a process-nightmare timewaster when people don’t understand that it’s both a developer maturity training program and a code quality assurance program. Yes. That’s what it really is.

DevOps (and rugged, agile, scrummish, ruggeddevops, and other such buzzwords) often doesn’t see its ideals realized because it isn’t especially prescriptive. Additionally, everyone seems to have a differing view of how it should work. I wonder if it is philosophically incompatible with most technology workers.

Metrics programs are hard to make useful. The biggest reason is that people only share (public) data when they are owned or in the courts. As per usual, this is because of the mythos of the Average Company. The Average Company doesn’t exist. What is good for this non-existent entity may not have any meaning for yours at all. Data programs need to be relevant and meaningful. If they’re not, they become a hamster wheel of pain for bureaucracy and the cult of toxic middle management.

Because these approaches are complicated to grok, tricky to implement, and nearly impossible to align well in corporate kool-aid cultures, I have a different change prescription.

Blow it up.

We as practitioners have known for years what is required to secure things, do it cheaply, and be agile in our practices, and we often advise others on how to improve their efforts by identifying the root causes of systematic problems. Let’s examine the persistent and entrenched problems that everyone keeps talking around without zeroing out.

Code artifacts and persistent flaws:

Persistent known issues, bitrot, and design shortcomings can’t continue to be tolerated as they are presently. No vague philosophy is prescriptive enough to fix this problem. The fix needs to be simple and readily implementable. Core problem: a huge, unnecessary threat surface.

Infrastructure needs to be solid. It needs to be the bedrock on which to build your castles. If you can’t hire the best people to staff your departments, pay for the best consultants you can afford to give you a strategy and implementation plan. A lot of my career has been spent addressing bad implementations that can’t be redone properly. People have lots of names for this, and ways of saying it so that it doesn’t sound like it was someone’s mistake. Technology infrastructure is now the bedrock on which all first-world business occurs. Treat it accordingly.

This is all old news. Practitioners already know it. IT workers should know it all first hand. We have failed as an industry to communicate these principles to our cousins in services and development and to our customers in the greater business community. Because of this long-term failure, the government has decided to do it for us by establishing compliance goals and having some of us work to define them. Compliance goals for the Average Company. Compliance where the opinion of an assessor usually focuses on what is a “good enough” control.

For the uninitiated, this is only the tip of the iceberg. The compliance rabbit hole is very, very deep and employs legions of people, most of whom you wouldn’t want around your infrastructure or guiding policy and practices.

So how do we fix this? Easy.

Blow it up and re-deploy with a minimal footprint.

Madness? Let me explain.

All modern environments have some or all of the following:

Disaster recovery plans

Virtualization and platform-centric management

Detached/decoupled (and usually redundant) data stores

A comprehensive and fine-grained approach to information classification

“Cloud,” distributed, or community resources and hardware

These things all have one common demand: they require a quick and painless path to action in order to function without significant error. Yet deployment is often a kludge-packed afterthought, much like the insufficient design processes that are the root cause of most security problems. Deployment should instead be fast, exactly repeatable, push-button easy, and most importantly designed for success.
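As a sketch of what “exactly repeatable” can mean in practice, here is a hypothetical deploy step that rebuilds its target from a declarative manifest rather than patching whatever is already there (the manifest format and paths are illustrative assumptions, not any particular tool’s API):

```python
# Sketch of "blow it up and re-deploy": destroy the target and recreate it
# exactly from a declared manifest, so drift and forgotten artifacts cannot
# accumulate between deploys.
import pathlib
import shutil

def deploy(manifest, target_dir):
    """Rebuild target_dir from scratch out of {relative_path: contents}.

    Running this twice with the same manifest yields an identical tree;
    anything not declared in the manifest does not survive a deploy.
    """
    target = pathlib.Path(target_dir)
    if target.exists():
        shutil.rmtree(target)  # no in-place mutation: always start clean
    target.mkdir(parents=True)
    for relpath, contents in sorted(manifest.items()):
        dest = target / relpath
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(contents)
```

The point of the design is that the manifest, not the running host, is the source of truth: stray files, manual hotfixes, and leftover artifacts from previous deploys are erased on every run instead of silently persisting.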

The solution to your persistent compromise problem

If the problem of easy, automated, and rapid deployment of least-privileged hardened technology can be solved well in your environment, the cost savings will be substantial.

There are a ton of products and processes that monitor and track changes. They’re usually complicated, fault-prone, and tend to give false confidence because of:

Rootkits

Subverted router/firewall configurations

Quiet and difficult to detect communication channels, leaky software, and sneaky out of band data paths

If it can’t be fixed, it should be retired. If a process can’t be improved, a replacement should be created.

More importantly, people who love rigidity and inflexibility in a practice that requires dynamism need to find a new job away from technology. It isn’t a place for people who want to put in minimal effort and collect a check. If people, processes, or technology can’t be made to accept change and adapt to a lean, efficient process that yields quality, replace them with ones that will. The cost of keeping them around, if data were available to calculate the full cost and implied risk adoption, would be shocking.

Coming to a network/appstack near you

This renewal process should be built not just into operational and code development practice, as all the currently fashionable management and intermanagement process philosophies advocate, but applied comprehensively to clear away the relics, dead weight, and ineffective process causing tech debt.

Adoption of turnover, especially in edge and virtualized environments, should be easy. The costs of not putting it into practice are too great to ignore any longer.