August 11, 2010

One of the presenters at
Metricon 5.0 was comparing IT security to other fields
in various aspects of metrics and monitoring.
I mentioned I thought she was giving far too much green for good to
the field of medicine.
This provoked repeated back-and-forth later.

My point was that 150 years after the invention of epidemiology
and 100 years after the discovery of bacterial transmission of disease,
the application of known preventive measures in medicine is so low that
Atul Gawande of Harvard
has gotten large (on the order of 30%) reductions in deaths from complications
of surgery in many hospitals simply by getting them to use checklists for
things like washing hands before surgery.

I have an elderly relative in a nursing home who can't take pills whole
due to some damage to nerves in her neck.
Again and again visitors sent by the family discover nursing home staff
trying to give her pills whole without grinding them up.
Why?
They don't read instructions about her, and previous shifts don't remind
later shifts.
This kind of communication problem is epidemic not only in nursing homes
but in hospitals.
I found my father in a diabetic coma because nurses hadn't paid any attention
to him being a diabetic and needing to eat frequently.
Fortunately, a bit of honey brought him out of it.
Even nurses readily acknowledge this problem, but it persists.
I can rattle off many other examples.

To which someone responded, yes, but medicine has epidemiology,
and Edward Tufte demonstrated in one of his books that that goes
well beyond checklists into actual analysis, as in a physician's
discovery of a well in London being the source of cholera.
I responded, yes, John Snow, in 1854:
that was the first thing I said when I stood up to address this.
But who now applies what he learned?
One-shot longitudinal studies are not the same as ongoing monitoring
with comparable metrics to show how well one group is doing compared
to both the known science and to other groups.

Many people still didn't get it, and kept referring to checklists
as rudimentary.

So I tried again.
If John Snow were alive today, he wouldn't be prescribing statins for
life to people with high blood pressure.
He would be compiling data on who has high blood pressure and what
they have been doing and eating before they got it.
He would follow this evidence back to discover that one of the main
contributors to high blood pressure, heart disease, and diabetes
in the U.S. is
high fructose corn syrup (HFCS).
Then he would mount a political campaign to ban high fructose corn syrup,
which would be the modern equivalent of his removal of the handle from
the pump of the well that stopped the cholera.

To which someone replied, but there are political forces who would oppose that.
And I said, yes, of course.
Permit me to elaborate.

There were political forces in John Snow's time, too, and he dealt with them:

Dr Snow took a sample of water from the pump, and, on examining it under a microscope, found that it contained "white, flocculent particles." By 7 September, he was convinced that these were the source of infection, and he took his findings to the Board of Guardians of St James's Parish, in whose parish the pump fell.

Though they were reluctant to believe him, they agreed to remove the pump handle as an experiment. When they did so, the spread of cholera dramatically stopped. [actually the outbreak had already lessened for several days]

Snow also investigated several outliers, all of which turned out to involve
people actually travelling to the Soho well to get water.

Still no one believed Snow. A report by the Board of Health a few months later dismissed his "suggestions" that "the real cause of whatever was peculiar in the case lay in the general use of one particular well, situate [sic] at Broad Street in the middle of the district, and having (it was imagined) its waters contaminated by the rice-water evacuations of cholera patients. After careful inquiry," the report concluded, "we see no reason to adopt this belief."

So what had caused the cholera outbreak? The Reverend Henry Whitehead, vicar of St Luke's church, Berwick Street, believed that it had been caused by divine intervention, and he undertook his own report on the epidemic in order to prove his point. However, his findings merely confirmed what Snow had claimed, a fact that he was honest enough to own up to. Furthermore, Whitehead helped Snow to isolate a single probable cause of the whole infection: just before the Soho epidemic had occurred, a child living at number 40 Broad Street had been taken ill with cholera symptoms, and its nappies had been steeped in water which was subsequently tipped into a leaking cesspool situated only three feet from the Broad Street well.

Whitehead's findings were published in The Builder a year later, along with a report on living conditions in Soho, undertaken by the magazine itself. They found that no improvements at all had been made during the intervening year. "Even in Broad-street it would appear that little has since been done... In St Anne's-Place, and St Anne's-Court, the open cesspools are still to be seen; in the court, so far as we could learn, no change has been made; so that here, in spite of the late numerous deaths, we have all the materials for a fresh epidemic... In some [houses] the water-butts were in deep cellars, close to the undrained cesspool... The overcrowding appears to increase..." The Builder went on to recommend "the immediate abandonment and clearing away of all cesspools -- not the disguise of them, but their complete removal."

Nothing much was done about it. Soho was to remain a dangerous place for some time to come.

John Snow didn't shy away from politics.
He was successful in getting the local politicians to agree to his first
experiment, which was successful in helping end that outbreak of cholera.
He even drew his biggest opponent into doing research, which ended up
confirming Snow's epidemiological diagnosis and extending it further
to find the original probable source of infection of the well.
But even that didn't suffice for motivating enough political will to
fix the problem.

From which I draw two conclusions:

Even John Snow is over-rated.
Sure, he found the problem, but he didn't get it fixed long-term.

Why not?
Because that would require ongoing monitoring of likely sources of infection
(which sort of happened) compared to actual incidents of disease (which does
not appear to have happened), together with eliminating the known likely
sources.

Eliminating likely known sources is what Dr. Gawande's checklist is about,
150 years later, which was my original point.
And the ongoing monitoring and comparisons appear not to be happening, even yet.

As someone at Metricon said, who will watch the watchers?
I responded, yes, that's it!

One-shot longitudinal studies can create great information.
That's what John Snow did.
That's what much of scientific experiment is about.
But even when you repeat the experiment to confirm it,
that's not the same as ongoing monitoring.
And it's not the same as checklists to ensure application
of what was learned in the experiment.

What is really needed is longitudinal experiments combined with
checklists, plus ongoing monitoring, plus new analysis derived
from the monitoring data.
That's at least four levels.
All of them are needed.
Modern medicine often only manages the first.
And in the case of high fructose corn syrup (HFCS),
until recently even the first was lacking,
and most of the experiments that have happened
until very recently have not come from the country
with the biggest HFCS health problem, namely the U.S.
A third of the entire U.S. population is obese, and another
third is overweight, with concomitant epidemics of
heart disease, diabetes, and high blood pressure.
And the medical profession prescribes statins for life
instead of getting to the root of the problem and fixing it.

Yes, I think the field of medicine gets rated too much green for good.

And if IT security wants to improve its own act, it also needs
all four levels, not just the first or the second.

December 02, 2008

While sitting in a small room perusing a book from the bottom of the stack, The Dilbert Future, I idly looked again at Scott Adams's prediction #2:

In the future, all barriers to entry will go away and companies will be forced to form what I call "confusopolies".

Confusopoly: A group of companies with similar products who intentionally confuse customers instead of competing on price.

OK, good snark. But look at the list of industries he identified as already being confusopolies:

Telephone service.

Insurance.

Mortgage loans.

Banking.

Financial services.

Telephone companies of course since then have gone to great lengths to try to nuke net neutrality.

And the other four are the source of the current economic meltdown, precisely because they sold products that customers couldn't understand. Worse, the companies themselves didn't even understand them!

It gets better. What industry does he predict will become a confusopoly next? Electricity! And this was in 1998, before Enron confused California into an electricity-price crisis and a budget crisis.

For risk management, perhaps it's worth considering that simply selling something the customer can understand can rank way up there. Certainly for the customer's risk. And given how much the FIRE companies drank their own Kool-Aid, apparently it's good risk management for the company itself. Especially given that the Internet now gives the customer more capability to find out what's going on behind a confusopoly and more ability to vote with their feet.

August 22, 2007

The term "Outrage" suggests that risk cannot or should not be discussed
in a rational manner.

What I think Sandman is getting at is that often risk isn't
discussed in a rational manner, because managers' (and security people's)
egos, fears, ambitions, etc. get in the way.
In a perfect Platonic world perhaps things wouldn't be that way,
but in this one, people don't operate by reason alone, even when
they think they are doing so.

Outrage x Hazard may be a means to express risk within the context of the organization, but I like probability of loss event x probable magnitude of loss better for quantitative analysis.

Indeed, quantitative analysis is good.
However, once you've got that analysis, you still have to sell it to management. And there's the rub: that last part is going to require dealing with emotion.
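The quantitative expression above (probability of loss event x probable magnitude of loss) is easy enough to sketch in code. This is a minimal illustration, not any particular methodology; the scenario names and numbers are hypothetical.

```python
# Sketch of the quantitative risk expression discussed above:
# expected loss = probability of loss event x probable magnitude of loss.
# All scenario names and figures below are hypothetical illustrations.

def expected_annual_loss(p_event_per_year: float, loss_magnitude: float) -> float:
    """Annualized loss expectancy: yearly event probability times loss per event."""
    return p_event_per_year * loss_magnitude

scenarios = {
    "laptop theft": (0.25, 8_000.0),     # hypothetical: 25%/yr, $8k per incident
    "web defacement": (0.10, 20_000.0),  # hypothetical: 10%/yr, $20k per incident
}

for name, (p, magnitude) in scenarios.items():
    print(f"{name}: ${expected_annual_loss(p, magnitude):,.2f}/yr expected loss")
```

Numbers like these are only as good as the estimates behind them, which is exactly where the emotional selling job to management comes back in.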

August 07, 2007

For example, Russell Cameron Thomas of Meritology
mentioned the difference between puzzle thinking
(looking only under the light you know)
and mystery thinking (shining a light into unknown
areas to see what else is out there).
Seems to me most of traditional security is puzzle thinking.
Other speakers and questioners said things in other talks
like "that's a business question that we can't control"
(literally throwing up hands); we can only measure
where "we can intervene"; "we don't have enough information"
to form an opinion, etc.
That's all puzzle thinking.

Which is unfortunate, given that measuring only what you know
makes measurements hard to relate to business needs,
hard to apply to new, previously unknown problems,
and very hard to use to deal with problems you cannot fix.

Let me hasten to add that Thomas's talk, entitled
"Security Meta Metrics—Measuring Agility,
Learning, and Unintended Consequence", went beyond these puzzle difficulties
and into mysteries such as uncertainty and mitigation.

Not only that, but his approach of an inner operational loop (puzzle)
tuned by an outer research loop (mystery) is strongly reminiscent of
John R. Boyd's OODA loop.
Thomas does not appear to have been aware of Boyd,
which maybe is evidence that by reinventing much the same process
description Thomas has validated that Boyd was onto something.

August 06, 2007

There's been some comment discussion about security ROI.
Ken Belva's point is that you can have a security ROI,
to which I have agreed (twice).
Iang says he's already addressed this topic, in a blog entry
in which he points out that

Calculating ROI is wrong, it should be NPV. If you are not using NPV
then you're out of court, because so much of security investment is
future-oriented.

Iang's entry also says that we can't even really do Net Present Value (NPV)
because we have no way to calculate or predict actual costs with any
accuracy.
He also says that security people need to learn about business,
which I've also been
harping on.
I bet if many security people knew what NPV was, they'd be claiming
they had it as much as they're claiming they have ROI.
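For those security people who don't know what NPV is: it discounts a stream of future cash flows back to the present at some rate, which is why Iang says it fits future-oriented security spending better than ROI. A minimal sketch, with hypothetical cash flows and discount rate:

```python
# Minimal sketch of Net Present Value (NPV). The investment figures and
# the 8% discount rate below are hypothetical illustrations.

def npv(rate: float, cashflows: list[float]) -> float:
    """Discount yearly cash flows to the present.

    cashflows[0] is the up-front cost (usually negative); later entries
    are the benefits (e.g. avoided losses) expected in years 1, 2, ...
    """
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical: spend $100k on a security measure now, expect to avoid
# roughly $40k of losses in each of the next four years.
result = npv(0.08, [-100_000, 40_000, 40_000, 40_000, 40_000])
print(f"NPV at 8%: ${result:,.2f}")
```

Of course, as Iang notes, the hard part isn't the arithmetic: it's that we have no way to predict those future avoided-loss figures with any accuracy.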

Still, it is hard for me to believe that anyone who knew anything about
Vietnam, or for that matter the Algerian war, which directly followed
Indochina for the French, couldn't see that going into Iraq was, in
effect, punching our fist into the largest hornet's nest in the world.

June 29, 2007

Speaking of
Black Swans,
here's an interesting point in a review of
Nassim Nicholas Taleb's book on that subject:

Why do we base the study of chance on the world of games? Casinos,
after all, have rules that preclude the truly shocking. And why do
we attach such importance to statistics when they tell us so little
about what is to come? A single set of data can lead you down two very
different paths. More maddeningly still, when faced with a Black Swan
we often grossly underestimate or overestimate its significance. Take
technology. The founder of IBM predicted that the world would need no
more than a handful of computers, and nobody saw that the laser would
be used to mend retinas.

If a casino sees a black swan (a really big winner), it's likely to
escort that person off the premises permanently, and maybe have a
few words with whichever card dealer or one-armed-bandit programmer
let that happen.
If ordinary people hear somebody saying a really destructive
event is likely to happen, they're likely to
call him a mad dog, no matter how good his data.

Yet black swans happen.
While by their nature they're hard to predict precisely as to time or place,
it's good risk management to admit they can happen and to have a plan for
that eventuality.

"It was the conventional wisdom that salvage logging and planting could
reduce the risk of high-severity fires," said Jonathan R. Thompson,
a doctoral candidate in forest science at Oregon State, who was lead
author of the study appearing this week in Proceedings of the National
Academy of Sciences. "Our data suggest otherwise."

They suggested that the large stands of closely packed young trees created
by replanting are a much more volatile source of fuel for decades to
come than the large dead trees that are cut down and hauled away in
salvage logging operations.

ActionBioscience.org: The figure "$33 trillion" was once projected as
the value of ecosystems globally. What do you think of this type of
economic analysis?

Polasky: The $33-trillion figure refers to one of the earliest studies
that was done on the value of ecosystem services. The lead author was
Robert Costanza. He and his coauthors tried to get at the notion of how
we can establish on a global basis what the value of ecosystem services
is. They came up with a number 33 trillion [USD] plus or minus a few
trillion. There are a number of problems with the study. The most basic
one is the question of what you are talking about when you consider all
the ecosystem services of Earth. The entire system is our life support
system. So what is our life support system worth? You don’t really
have to have a scientific study in order to answer that question. The
real value of the study was not the $33-trillion figure, which who knows
what that means, but that it spurred people to focus on these issues.

Such values can be big, and the dollar value isn't the only consideration.
There is a bit of risk in that we can't do without the biosphere,
and some risk management is in order.
Even beyond that obvious non-dollar value,
there are further questions of species diversity and esthetics.
Do we really want to kill off an ecosystem when we don't really know
what it's doing for us,
and do we all want to live surrounded by concrete?

Jared Diamond: Collapse: How Societies Choose to Fail or Succeed

The author examines societies from the smallest (Tikopia) to the largest (China) and why they have succeeded or failed, where failure has included warfare, poverty, depopulation, and complete extinction. He thought he could do this purely through examining how societies damaged their environments, but discovered he also had to consider climate change, hostile neighbors, trading partners, and reactions of the society to all of those, including re-evaluating how the society's basic suppositions affect survival in changed conditions.