Is information security like pollution?

I’ve been catching up on my post-RSA reading, and ran across Elinor Mills’ article on the RSA conference, “Why the security industry never actually makes us secure.” One comment she makes was interesting to me because, as much as I’d like to disagree with it, I just can’t: “Like pollution, security incidents are something everyone potentially contributes to and suffers as a result of.”

Overall, I think Mills’ article is an excellent collection of commentary on where we are today as a security community, and is well worth a read. It also got me thinking about the ideas she presents (with the help of other infosec pundits). Here are a few of Mills’ points that I agree with, and some of the reasons why:

There are no magic bullets

Products can help, but products alone will not “solve” the enterprise security problem.

Automation is still at the mercy of the skills of the people doing the automating. In other words, doing it wrong just helps you achieve insecurity more efficiently.

The bad guys are outpacing the good guys

It is usually faster to attack and defeat a countermeasure than it is to build a new, better countermeasure.

The bad guys’ compensation is tied *directly* to their success, which is a strong motivator. The same can’t always be said of the good guys.

Even if better “stuff” exists to fight the attackers, enterprises are usually behind the curve: they are resource-constrained and slow to roll out new technology. Security folks spend a lot of time trying to catch the bad guys in the act, which means focusing on lagging indicators.

To me, it sometimes feels like we are focusing on building better airbags rather than spending more time teaching good driving skills.

“Most of RSA, especially on the trade show floor, is reactive security and the idea behind that is protect broken stuff from the bad people,” said Gary McGraw, chief technology officer at Cigital.

I agree with Gary. His particular focus is on secure code, and Cigital seeks to help people eliminate insecure code in the applications we deploy. That would be huge if we could make it happen consistently – but it’s subject to the same issues as automation: people are involved, and those people don’t always code securely (and may not know how). Also, our willingness to pull in unproven third-party libraries for the sake of speed creates its own problems: we inherit flaws in code we never wrote and rarely review.

Configuration hardening is another area of trouble (or opportunity, depending on your perspective). It is a fundamental means of reducing your attack surface. Fast is the enemy of secure in most organizations; in other cases the enemy is apathy. This is particularly frustrating, as we know how to do it – but most organizations choose not to.
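To make the idea concrete, here is a minimal sketch (my own illustration, not anything from Mills’ article) of what automated hardening checks look like in principle: compare a system’s current settings against a baseline and flag deviations. The setting names and baseline values are hypothetical, not drawn from any real benchmark.

```python
# Hypothetical hardening audit: compare current settings to a baseline.
# Setting names and expected values below are illustrative only.

BASELINE = {
    "telnet_enabled": False,     # legacy remote shells widen attack surface
    "password_min_length": 12,   # enforce a reasonable password policy
    "root_ssh_login": False,     # require login as an unprivileged user first
}

def audit(current: dict) -> list[str]:
    """Return a human-readable finding for each setting that deviates
    from the baseline (missing settings count as deviations)."""
    findings = []
    for key, expected in BASELINE.items():
        actual = current.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    current = {
        "telnet_enabled": True,
        "password_min_length": 8,
        "root_ssh_login": False,
    }
    for finding in audit(current):
        print(finding)
```

Real-world tools (CIS benchmark scanners, configuration-management compliance checks) are far more elaborate, but the core loop is the same – which underscores the point: the mechanics are well understood; the failure is in choosing to apply them.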

Those are a few of my take-aways from the article – or at least the ones I agree with.

A point of disagreement

There was one comment from the article that I don’t agree with, however.

“This might be a fundamental mismatch that the market cannot resolve,” without government intervention, [Bruce] Schneier said.

I am unwilling to jump to the conclusion that government intervention and legislation are the solution to better information security. In fact, I think they may be the fastest way to drive us to a broadly insecure state, as they could impose mediocre, “one-size-fits-all” security practices across businesses. Such an approach may well create highly homogeneous security models, along with “least common denominator” implementations. And what’s easier to attack than a bunch of things that have identical vulnerabilities and mediocre security?

I’d love to say, “I’d like to be proven wrong on this one,” but I just can’t say that.