How Much IT Policy Is Too Much?

As almost everyone has noticed by now, radical changes are underway in the way organizations purchase, manage and use technology. Since IT is (by its very nature) adaptive, this is not entirely unexpected. But even though we expect technology to change, there are periods when it changes faster than others. And right now, changes are coming quickly: between virtualization and cloud, mobile and BYOD, VDI, large-volume storage and the slow uphill push to exascale computing, IT is in a period of transition.

Now, when technology is new, and particularly when that technology has a potential security impact, security professionals start feeling pressure to take action. And in situations where technical solutions to the security challenges are immature or still emerging, that pressure takes the form of policy authorship. It's human nature -- we need to do something, so we do what we can. If the only thing we can do is write a policy, the pressure to do exactly that will be high. That pressure can come from a variety of sources: from the industry press, from executives, from customers or from auditors. To illustrate what I mean, a cursory glance through the industry press finds guidance suggesting organizations write "cloud policy," "mobile policy," "BYOD policy" and "social media policy."

But do you really need all of that? Creating policy in response to new technology is only sometimes a good idea. Significant work goes into policy documentation -- and because that policy affects the organization as a whole, it's usually more important to "get it right" than to "get it done." Why? Because getting it right takes time, and a policy that's wrong is usually worse than no policy at all.

Policy Proliferation Is Not Your Friend

Now, don't get me wrong -- I'm not saying that you don't need security policy. Nor am I saying that there aren't circumstances where it's advantageous to create one-off policies for specific technology challenges. What I'm saying is to be skeptical of the knee-jerk reaction of authoring a policy every time a new technology challenge comes along.

Policy is (when done well) a non-trivial investment. Not only do you need to write it -- which takes time and should include getting buy-in from all stakeholders -- but you need to maintain it as well. That includes getting the policy approved and published, periodic management reviews, indexing, documentation of changes, alignment to compliance frameworks, etc. Even if you try to minimize the workload by using "canned" policy instead of writing your own, the effort is still non-zero: you'll need to "normalize" the terminology and format for your organization, and all of the maintenance overhead still applies.

But there's more at stake than just extra work. Keep in mind that you're not starting from a green field: you already have security policy in place, and new policy needs to exist in harmony with it. What does "exist in harmony" mean? Specifically, not conflicting with other policies and not introducing audit issues. Because policy is auditable, you can ultimately be called to task if you say you'll do XYZ and wind up not doing it. Not to mention that any policy referencing specific technology (something best avoided) needs to keep pace with changes to that technology. Still have an IM policy that specifically references using AOL Instant Messenger on Windows 95? It's probably time for an update.

When You Need a Policy vs. When You Might Not

Anyway, the point is that creating a policy isn't (as some perceive it to be) low effort. But there are situations where the effort is worth it -- specifically, when the policy directly forwards a security or compliance goal. A goal of that stripe might be a regulatory requirement with a line item mandating specific policy documentation (e.g., PCI DSS Requirement 12.3). There can even be business goals that policy supports: if, for example, you are routinely audited by customers as part of contract negotiations, and a large number of them ask (in checkbox-style format) whether you have a particular policy, you may want to be able to say "yes" to that question and support your response with a targeted document.

But the converse is also true: if writing a new policy doesn't directly support a business, security or compliance goal, you may want to seriously consider refraining from taking the burden on. That doesn't mean you don't address the technology challenge at a policy level, by the way -- it just means you interpret policy you already have in light of the new technology instead of adding a new policy document to the mix.

For example, if you are concerned about social media, you may wish to review existing policies -- say (assuming you have them), "acceptable use of technology," "ethics guidelines" and "representing the organization in public forums" -- to see whether one of them covers the things you're worried about. You also have the option of authoring topic-specific supplemental documentation (e.g., technical standards or guidance) to address the topic without crafting new policy; in fact, that's generally what supplemental documentation is for.

In fact, a cursory read-through of your existing policy set is always a good idea when evaluating whether you need a new policy document: ask yourself what additional value new, subject-specific language would actually provide. If you do decide to write one, plan on at least one pass through your existing policies before you start (you may need to update language to avoid contradiction and overlapping scope), and then another pass after you write it -- but before publication -- to shake out any "unforeseen consequences" in other policy based on what you're looking to put into effect.

The point is, writing new policy is often one of the first things you'll hear suggested when technology changes occur -- but it's not always the optimal strategy for addressing those changes productively.

Ed Moyle is a senior security strategist with Savvis, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.