I sometimes think that the essence of engineering is making
intelligent tradeoffs between conflicting parameters. Improve one value and
another one worsens. The art is in knowing where to make the best trade. As
engineers we are trained to quantify the parameters and outcomes, draw some
kind of cost/benefit curve, and make a rational choice based on our analysis.

A classic case is the receiver operating characteristic (ROC)
for radar. We want to increase the radar’s sensitivity, measured by the
probability of recognizing existing targets (true positives). However, as we
increase the sensitivity, we inevitably also increase the probability of seeing
things that don’t exist (false positives). Being good engineers, we draw
a curve showing the probability of true positives against the probability of
false positives. Most such curves have a well-defined knee, and that is usually
a good place to operate. As engineers we feel that we have analytically identified
the best tradeoff possible.
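The knee-finding exercise can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the detector scores are invented Gaussian data, and the "knee" is approximated here by the threshold maximizing TPR minus FPR (Youden's J statistic), one common stand-in for the visually chosen knee.

```python
# Sketch of choosing an operating point on an ROC curve.
# All data are made up for illustration: target scores are drawn
# with a higher mean than noise scores.
import random

random.seed(0)
targets = [random.gauss(2.0, 1.0) for _ in range(1000)]  # signal present
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]    # signal absent

def roc_point(threshold):
    """Return (true-positive rate, false-positive rate) at a threshold."""
    tpr = sum(s > threshold for s in targets) / len(targets)
    fpr = sum(s > threshold for s in noise) / len(noise)
    return tpr, fpr

# Sweep thresholds; take the knee as the point maximizing TPR - FPR.
thresholds = [t / 10 for t in range(-30, 51)]
best_t = max(thresholds, key=lambda t: roc_point(t)[0] - roc_point(t)[1])
tpr, fpr = roc_point(best_t)
print(f"knee near threshold {best_t:.1f}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

With well-separated score distributions the knee sits between the two means, where sensitivity is high but false alarms are still rare.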

Unfortunately, the world doesn’t always cooperate with
our analytic strategy. I sometimes think that ROC analysis may even be an
anomaly in a world that is usually uncooperative. There seem to be any number
of really important problems where there just isn’t any quantitative,
rational strategy for making tradeoffs. In these problems there are two intrinsic
barriers to analysis: one is benefits that cannot be quantified, and the other
is costs that appear to be infinite.

A problem that exemplifies both of these barriers is the tradeoff
in computer networking between connectivity and security. I hear it discussed
very frequently, and I have yet to see the glimmer of an analytic justification
for decisions that get made.

It’s easy to get perfect security in a network –
simply pull the plug and disconnect. But the value of a network increases with
the number of connected users. The more people and computers that are connected,
the greater will be the information acquired, commerce attained, and so forth.
However, the more users that are connected, the more bad actors that appear,
and the greater are the risks of costly computer attacks.

How do we make such a tradeoff? In my mind I see a cost/benefit
curve. On the abscissa would be the cost associated with the risk of opening
the network, while on the ordinate would be the benefits associated with increased
connectivity. The cost in a business environment might be the probable loss
of business, increased liability, or monetary losses due to expected computer
intrusions. In a military situation the cost could be measured by the probability
of mission failure, but it is also possible that increased connectivity could
lead to more serious compromises of military systems and strategy.

The value of connectivity in a business situation could be increased
efficiency of operations, more knowledgeable and satisfied employees, and increased
revenue. There would be similar values in the military, although instead of
increased revenue there would be measures of mission success.

How do we measure these values? I’m very afraid that the
answer is that we can’t. It isn’t just that it is difficult; I think
that it is really intrinsically impossible. It is a conclusion that I resist
as an engineer, but one that I encounter time after time. Monetary cost is something
that we are familiar with, but benefit is often not quantifiable. So in the
case of network connectivity, the ordinate – benefit of connectivity –
cannot be measured.

Assessing the expected cost of computer intrusions – the
abscissa – seems no easier, but for a different reason. Here we encounter the other
fundamental difficulty: a non-zero probability of infinite cost.
The chief information officer of a company will say that there is some chance
that a computer attack could irreparably damage the company’s reputation,
putting it out of business. A military officer might reason that a computer
attack could disable the entire defense system. Even though there might be small
probabilities associated with these events, their harm seems infinite, and the
cost/benefit analysis would be moot.

Often it seems that tradeoffs in these situations are made defensively.
When a computer attack damages a company, the computer security person gets
his picture on the front page of the paper, loses his job, and has to find
another career. If, on the other hand, the business is handicapped by a dearth
of connectivity, it is likely that no one will notice. It is easy to see how
systems administrators are reluctant to make their networks easily accessible.
In discussions of defense networks I’ve even heard distinguished engineers
mull over the advantages of completely disconnecting the network.

Although I recognize the nearly insurmountable difficulties,
I’m still unhappy that there isn’t a more rational way to make these
impossible tradeoffs. There must be a better way than getting out the old dartboard.