Jurassic Plaque: How Fear Impedes Security

In my last column I explained how organizations that enforce the strictest corporate security are often the ones that end up being the least secure.

Contemporary technology and the way contemporary technologists implement it are partially to blame, but most of that paradox is something most companies can repair with enough money, time, interest and experience. What you can't change is how end users respond to overly secured systems.

It's not just people writing their passwords on Post-It notes; it's the productivity lost to the daily wrestling they have to do with the appliances they must use to work, and the resulting loss of faith in the mission the appliances are supposed to support.

Life Finds a Way

Unless you're running a Red Chinese bonded sweatshop or a sheltered workshop in the States, the people who use your systems are motivated in such a way that they're much more productive when management shows respect for their time than when it doesn't.

Technologists and naïve executives often approach security as though securing the appliances is the mission. The key instigator of that trend is the lust for a level of security that goes beyond what people, technology and time can achieve gracefully.

In pushing the systems beyond what they were designed to achieve, management is warping the implementation and making it something that gets a little more effective at protecting itself from external predators while becoming a lot less productive for the people it's allegedly serving.

Self-selection for Weakness

When the system becomes the mission, the people who use the systems to make work happen recognize that, consciously or unconsciously, and start taking the actual mission more lightly. Once staffers get the sense you aren't paying attention to the real work of the organization, their incentives morph and they are less likely to keep churning out their work.

There are three types of staffers who respond poorly to excessively secured systems that slow them down.

The weaker-minded ones will give up entirely on caring about the real work. Unless you rigorously design and apply change management techniques to convince weak-minded staff that the extra burdens are worth the lost time and convenience, they will come to believe management either doesn't know or doesn't care. The average management response, merely ordering them to suck it up, is a guaranteed loser.

The stronger-minded ones will see the storm on the horizon and be more likely to bail on you entirely.

The third group, in some ways, is the most dangerous because it's hard to scope out before a project: the subversives. Subversive users become resentful about changes that make life more difficult (sharing the pain) while not giving them something in return.

Some subversives do innocuous things to mess with the system. On a contract I performed recently, the system was secured with all kinds of flaming hoops users had to jump through to get their work done.

One subversive managed to capture other users' passwords, logged into the system as various people and added R-rated words to their slide decks. She wasn't caught, though IT expended a lot of resources looking for her. No real harm was done in hard financial terms, but the culprit did a good and very public job of mocking both the security of the system and the managers who put it in place.

More practical subversives will mess with the system in ways that increase their chances of getting their work done. Put up a utility that automatically flags every floppy disk as a major risk and there will be a guy who installs a utility that turns it off completely (also killing the essential function of checking disks for malicious code).

And there's the Jurassic Park effect. As anyone who's been in the military knows, there's an inescapable cadre of people who will spend more energy on surreptitious attempts to subvert procedure than managers expend trying to get them to conform. These Eddie Haskell types usually succeed in remaining undiscovered.

Other subversives try to cause serious damage for whatever sick reasons drive them. More often than not, you can catch them, but usually not until after they've created a costly problem.

The best approach to avoid the consequences of excessively secured systems is to not allow yourself to get talked into one in the first place.

That can be difficult: most people view things in a binary way, so they believe that if no security is bad, more security is always better. That simplistic all-or-nothing view doesn't reflect reality.

One technique I've seen work is to insist on the same kind of benefit/cost analysis that most shops insist on for other kinds of projects. What's the actual risk? How likely is the next gadget to decrease attacks, and by how much? The act of discussing a security initiative this way can be enlightening to all involved and help IT make more rational decisions.
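The benefit/cost questions above can be made concrete with the standard annualized loss expectancy (ALE) model from security risk analysis. The sketch below is a minimal illustration of that test; all dollar figures and rates are hypothetical placeholders, not data from this column.

```python
# Minimal sketch of a security benefit/cost test using annualized loss
# expectancy (ALE). All numbers below are hypothetical examples.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss from one threat: cost per incident times
    how many times per year the incident is expected to occur."""
    return single_loss_expectancy * annual_rate_of_occurrence

def control_is_worth_it(ale_before: float, ale_after: float,
                        annual_control_cost: float) -> bool:
    """A control pays off only if the risk it removes exceeds what it
    costs per year -- including the productivity overhead it imposes,
    folded into annual_control_cost."""
    return (ale_before - ale_after) > annual_control_cost

# Hypothetical case: a breach costing $50,000, expected once every 2 years.
before = ale(50_000, 0.5)   # $25,000/year of expected loss
after = ale(50_000, 0.1)    # proposed control cuts it to once a decade

print(control_is_worth_it(before, after, annual_control_cost=12_000))  # True
print(control_is_worth_it(before, after, annual_control_cost=30_000))  # False
```

Folding the users' lost time into the control's annual cost is what keeps the analysis honest: a gadget that blocks $20,000 a year of expected loss but burns $30,000 a year of staff productivity fails its own test.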

If you're convinced you truly need the additional security, a conversation with the advocate of tighter security can still add value.

It can frequently expose a warped sense of perspective, one that motivates the advocate to make protecting the system the primary mission rather than a supporting feature. Once a person has outed herself as someone whose worldview is skewed by personal anxieties, you have insight into the usefulness of her advocacy.

One technique that always works is to run a pilot project with end users. Get them to keep a diary of experiences and how they believe the changes affect their productivity. More likely than not, you'll get feedback that will help you tweak the design to better avoid the three kinds of staff setbacks.

It's not intuitively obvious that at some point adding security processes and technologies actually degrades safety. It just happens to be true.