Tuesday, December 18, 2012

Due to recent events, it has come to my attention that this State is not fulfilling the obligations of the Second Amendment to the Constitution. We will therefore enact the following:

1. All weapon-owning residents of this State will report once a year to a State center with all of their weapons. At that center, they will receive training in operations as members of a State militia and in use of those weapons for militia purposes, via State-certified instructors. If the obligation is fulfilled correctly, including following the regulations established by the instructors, those residents will receive a State certification of their ability to act as members of the State militia. This obligation will continue until the resident dies or no longer owns a weapon.

2. The object of this training will be to protect the State against those threats to it not covered by the Federal Government. Specifically, the training will be for call-up at any time to fight against persons in the State who seek to overturn all or part of the State by force. Such call-up and training can only be done by a State-certified or Federally-certified authority, and during training and call-up weapon-owning residents shall be deemed in violation of regulations if they attempt to use their weapons other than under the supervision of such authority and under the regulations governing the militia.

3. Penalties for failure to act as part of the militia shall be as follows: first, failure to rightfully gain such certification or to fulfill such call-up each year shall cause, in the first year, a fine of $100; in the second such year, a fine of $1000; in the third such year, a jail term of a minimum of one month with confiscation of all weapons for that term; in the fourth such year, a jail term of a minimum of one year with confiscation of all weapons for that term; and for the fifth and ensuing such years, a jail term of a minimum of twenty years. If it shall be found that a resident has concealed weapons or failed to train for longer than the time period since the last training, he or she shall be liable for all such years so established. Second, use of weapons outside of training or call-up in such a manner as to indicate inability to use them properly as a member of the militia, and specifically use in commission of a felony crime or use by the gun-owning resident or another person deemed legally improper and causing harm such as accidental death, shall be deemed a crime separate from and with penalties in addition to all other related crimes, and will involve a minimum jail term of one month. Third, the State shall distinguish between weapons useful for purposes of the militia and weapons not so useful, and use outside of training or call-up as described above shall also be deemed a crime separate from and with penalties additional to all other related crimes, and will involve an additional minimum jail term of one month. Fourth, sellers of weapons who fail to verify that the resident has certification will be personally liable for the same penalties, and their corporations as people shall be liable as well, for one million dollars per felony crime committed.

4. Failure of the State to adequately fund and staff such training shall not be a cause for voiding these requirements. Moreover, the State may not in its funding assess those who do not own guns more highly than those who own guns.

Let me add two comments on this change to our State laws. First, there seems to be a pervasive misunderstanding of the Second Amendment to the Constitution, in many cases involving members of the legal profession. The Second Amendment does not state that the people in general in the United States may bear arms. Rather, it states that the people as well-regulated members of States may bear arms. This is made clear in the very first words of the Preamble to the Constitution, which states not "We the people" but rather "We the people of the United States," thus defining the word "people" for the rest of the document. I find that commentators often overlook this – a mistake comparable to saying that "I believe" (in the tooth fairy?) rather than "I believe in God[s]" is an expression of religious faith.

The implication of this is that the right "of the people" to bear arms extends no further than their rights as members of a State (and, of course, where Federal law does not intrude). Whether they have any rights to bear arms when their State has allowed militias to wither, I leave to others – my personal opinion is that they do not, unless the State law specifically establishes such a use and its relation to the interests of the State. However, it is very clear that in the absence of such laws, establishment of such a militia, as I have done above, means that weapons may be used only for the purposes of a State militia. Moreover, registration of weapons for purposes of a State militia is in no sense a violation of individual Constitutional rights, but rather a right and proper duty of residents of a State.

Second, it should be understood that while residents of this State have a right to acquire or sell any weapon they wish, they also have an active duty once any weapon is acquired to use weapons in a manner consistent with the purposes of the State, and that neither ignorance of the law nor passive failure to comply with the militia's regulations will serve as an excuse for failure to do so. I believe that the net effect of this will be that, while residents and visitors may continue to use weapons for such purposes as hunting and self-protection, weapons used for killings (mass and other) will be more difficult to acquire and retain, and users far less likely to cause unintended deaths.

Finally, I view this enactment as the minimum necessary to re-establish our State's proper fulfillment of the Constitution. Any attempt to "water down" the necessary laws shall simply remove the rights of weapon owners altogether, since without Constitutional sanction there are no legal weapon rights.

Tuesday, December 4, 2012

First off, I give credit for the word "gnastly" to Jeff Jones of IBM and Charles King, writer of Pund-IT Review. It's not their fault I'm choosing to use the word (which I define as extremely nasty, nasty by nature, perpetually nasty) to describe climate change.

Anyway, the reason I cite Mark Twain is that he is reported to have wisecracked: "Everyone always talks about the weather; but no one actually does anything about it."

Mark Twain was wrong. We have spent the last 45 years proving him wrong. We have done something about it. We've made it worse.

The figures I look at are disaster costs, and there I am amazed to find that it has reached the point where it is actually affecting GDP. According to some sources, Hurricane Sandy took 0.2% off US GDP in 3Q, and will have at least that effect in 4Q, because so much of the US lives in the areas affected (of course, its span of damage was 1200 miles at one point, according to Jeff Masters). Projections were that each increase of, say, 1 degree C would add enough average energy to the atmosphere to increase average storm speed by 10 mph, not to mention storm surge and precipitation amounts, which would seem to argue for at least a doubling of disaster costs; well, the average disaster costs for the US at least seem to be closer to a tenfold increase, and James Hansen cites this kind of effect as an unwelcome "surprise" of an underestimate of the effects of global warming.

Of course, at least initially, no negative trend is complete without some positive aspects. Last winter was unusually warm in the US, as one would expect with the North Atlantic Oscillation tilting the right way and winter temperatures having warmed about 5 degrees anyway from global warming. On the other hand, that also led to baking summer heat that exacerbated the Midwest and Southern droughts that are still going on.

Last winter's mildness helped the economy. I suspect that this winter we will see alternation of cold/snowy and warm like last year, as, according to one study, the warmer Arctic air from summer ice melt causes the jet stream that carries the relative cold south (to us temperate zones) to oscillate up and down. Overall, I would guess, a slight plus to the economy. On the other hand, we should anticipate more disaster costs next summer and fall ... And this is just the start of the gnastliness, folks. Another decade or two, ten times as much in disaster costs maybe, a 1-2% hit to GDP, food prices really beginning to be impacted -- if not before then.

Oh, well, we still haven't reached the point where it's on the top of the strategic list for CEOs. Not even close, according to recent IBM surveys.

An old Flanders and Swann song describes English weather as "cold and dank and wet" and concludes: "Freezing wet December then ... Bloody January again!" Except for the "freezing" part, I suspect that's what may be coming up. Happy holidays, everyone! It's not gnastly yet!

Wednesday, November 14, 2012

As I write posts and white papers on agile software development and business agility, I find myself surprisingly often reminded of sayings by or about General Grant during the Civil War. I am not recommending Grant as a model of agile thinking – I think we've had enough books about Jesus Christ being an ad man and Attila the Hun being a master business strategist (although if you want repulsive characters who were good generals, Subodai the Mongol would be a much better choice). I am, however, suggesting that Grant showed some interesting "agile" traits.

Let’s start with a remark by someone when Grant showed up to
take over in Tennessee (I am paraphrasing): Grant didn’t come in with a lot of
show and effort. And yet, when he came on the scene, everything seemed to start
working like clockwork.He seemed to
have a plan and everything he did played into that plan.

The plan, by the way, was mostly not Grant's. The idea of attacking downstream positions to free up the Union lines was General Thomas'. The spontaneous attack on Southern positions at the end was not in anyone's plan. However, Grant adopted and adapted to realities on the ground and changes in plan immediately. So, I suggest, Grant had (a) the ability to reach out to a "customer" and change plans based on feedback – reactive agility – and (b) the ability not just to adapt to unexpected change but also to incorporate it immediately in an overall strategy.

Now let’s reach back a little earlier, to the attack on
Vicksburg.Catton in his superbly
written history states that Grant during the winter was trying various methods
to get downstream, and while almost all of these didn’t pan out, the constant
trying of different things had the effect of bewildering his opponent Pemberton
so that Pemberton perceived himself as being under potential attack from many
directions at once. As a result, when the real attack happened, Pemberton was
constantly one step behind.To me, that’s
a bit like evolving the nature of a product constantly during development,
compared to a competitor sticking to a fixed plan.However – and, to me, here’s a key point – Grant was
proactive rather than reactive.Not only
was he constantly thinking about attacking; he was constantly thinking about
changes in his attacks.At Vicksburg, he
had already committed to operating without a supply line once across the river;
he then changed his plans in order to attack Jackson once he saw an opportunity,
and then turned around once Pemberton finally came out and changed his attack
plans again in order to attack and defeat Pemberton “in detail”. So Grant could be proactively agile.

That Grant thought this way is apparently confirmed by an incident later in the war, at the Wilderness. Lee was pressing Grant hard, and staff officers accustomed to previous Union commanders were panicked over whether Lee could destroy the Union army. Grant blew up at them: some of them seemed to think, he said, that Lee was going to do a giant somersault and attack the army from behind. Instead of concentrating on what Lee was going to do to them, Grant said, they should concentrate on what they were going to do to Lee.

Now, I can’t say that Grant was completely agile – because he
apparently sometimes did not spend enough time taking care of potential
counter-moves by his opponents.Thus, in
Shiloh, he chose to hasten the arrival of reinforcements for an attack rather
than be on the scene getting folks dug in to receive one before he had a chance
to attack.

One final thought – and here I admit I'm really reaching. An interesting characteristic of Grant's attacks was that, given the chance, Grant would attack "up the middle", but not to split the enemy's lines, as Lee attempted to do with Pickett's charge at Gettysburg. Rather, the main purpose was to hold the enemy in place as he then detached an equal force to hit the enemy on or behind one end, like a jab immediately followed by a roundhouse swing. If the enemy then detached too many to deal with problems on the end, the up-the-middle attack could then succeed, as in Tennessee. There is a vague analogy here to agile new-product development, in which one "fixes the competitor in place" with a strong product in an existing market, and then "hits the competitor on the end" with major innovative features, as the iPhone did.

I conclude that Grant was not a modern agile strategist, not fully. However, compared to others in the Civil War, and many generals since, he thought in a much more agile way. And note that it had nothing to do with how hard he worked. He spent less effort than most, and achieved much more. He was not efficient; he was effective. As Lincoln once said about Grant, I can't spare this general, he fights.

Thursday, November 8, 2012

I thought that after the usual commentary on the election, I'd try something a bit different. During this campaign, who showed political guts, and when?

My definition of political guts is, I hope, simple. You have to stick your neck out, in a way that most others aren't, and it has to be a reasonable conclusion about something important.

I saw two instances of political guts this campaign; others may have their own lists (spare me). One, President Obama came out for gay marriage. Yes, the polls had been showing that support for that had finally (barely) reached majority status; but politicians also try not to anger minorities within their own party, and especially in what appeared to be an especially close election. I think that took political guts.

Second, Mayor Michael Bloomberg of NYC flatly declared that Hurricane Sandy was about global warming, and we needed to do something about it (he then, as a Republican, endorsed President Obama because he was better for climate change, but since Bloomberg was the mayor of NYC, which strongly backed Obama, that took very little guts). In a campaign in which one side has been flatly denying that there is any such thing (or refusing to answer while advocating policies that will make it worse), while the other side is refusing to treat it as an important issue, that took political guts.

What's interesting is that in both cases, once the barrier was breached, no one attacked either Obama or Bloomberg viciously for their stances. It is as if there was this great pretense in the media and in political commentators that things were one way, and then when the opportunity for attack and innuendo came, they suddenly started rethinking things.

What's happened since? In the case of gay marriage, one amendment against it was turned down, and three additional states put it into law. In effect, gay marriage has begun to move from both coasts towards the middle of the country. In the case of climate change, not much; but at least the balance of the conversation is focused on global warming itself, not denial.

Tuesday, October 16, 2012

It was nice to see, in a recent book I have been reading, some recognition of the usefulness of master data management (MDM), and of how the functions included in data virtualization solutions give users a flexibility in architecture design that's worth its weight in gold (IBM and other vendors, take note). What I have perhaps not sufficiently appreciated in the past is data virtualization's usefulness in speeding MDM implementation, as duly noted by the book.

I think that this is because I have assumed that users would inevitably reinvent the wheel and replicate the functions of data virtualization, in order to afford themselves a spectrum of choices between putting all master data in a central database and only there, and leaving the existing master data right where it is. It now appears that they have been slow to do so. And that, in turn, means that the "cache" of a data virtualization solution can act as that master-data central repository while preserving the far-flung local data that compose it. Or, the data virtualization server can provide discovery of the components of a customer master-data record, give a sandbox to define a master data record, alert to new data types that will need to change the master record, and enforce consistency – all key functions of such a flexible MDM solution.
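
To make that concrete, here's a minimal sketch of the idea, with throwaway in-memory SQLite databases standing in for the far-flung sources. The schema, source names, and the simple precedence rule are my own illustrative assumptions, not any vendor's actual MDM implementation:

    import sqlite3

    def make_source(row):
        """One stand-in source system holding a fragment of the customer record."""
        conn = sqlite3.connect(":memory:")
        conn.row_factory = sqlite3.Row
        conn.execute("CREATE TABLE customers (customer_id, full_name, email)")
        conn.execute("INSERT INTO customers VALUES (?, ?, ?)", row)
        return conn

    sources = {
        "crm":     make_source((42, "Jane Q. Public", None)),
        "billing": make_source((42, "J. Public", "jane@example.com")),
    }

    def master_record(customer_id, precedence=("crm", "billing")):
        """Assemble a virtual master record: the first non-null value from the
        most-trusted source wins (real MDM survivorship rules are far richer)."""
        master = {}
        for source in precedence:
            row = sources[source].execute(
                "SELECT * FROM customers WHERE customer_id = ?", (customer_id,)
            ).fetchone()
            if row:
                for key in row.keys():
                    if master.get(key) is None:
                        master[key] = row[key]
        return master

    print(master_record(42))
    # -> {'customer_id': 42, 'full_name': 'Jane Q. Public', 'email': 'jane@example.com'}

The point is merely that the master record is assembled, and kept consistent, at the virtualization layer, while each fragment stays right where it is.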

But the major value-add is the speedup of implementing the
MDM solution in the first place, by speeding definition of master data, writing
code on top of it for application interfaces, and allowing rapid but safe
testing and upgrade. As the book says, abstraction gives these benefits.

Therefore, it continues to be worth it for both existing and
new MDM implementations to seriously consider data virtualization.

Today, it was confirmed that the Arctic sea ice area is
farther below normal than ever before since record-keeping began. We are
probably poised to say the same about the total global sea ice area (including
the Antarctic). Meanwhile, those media who are saying anything are talking
about a meaningless blip in Antarctic sea ice.

And, of course, there's the other litany of related news. Record temperatures for this date in parts of Greenland. September tied for the warmest global temperature since record-keeping began. That kind of thing.

Nothing to see here. This isn’t the human extinction event
you aren’t looking for. You can move along.

Monday, October 15, 2012

In the interest of "truthiness", consultants at Composite Software's Data Virtualization Day last Wednesday said that the one likely tradeoff for all the virtues of a data virtualization (DV) server was a decrease in performance. It was inevitable, they said, because DV inserts an extra layer of software between a SQL query and the engine performing that query, and that extra layer's tasks necessarily increase response time. And, after all, these are consultants who have seen the effects of data-virtualization implementation in the real world. Moreover, it has always been one of the tricks of my analyst trade to realize that emulation (in many ways, a software technique similar to DV) must inevitably involve a performance hit – due to the additional layer of software.

And yet, I would assert that three factors difficult to discern in the heat of implementation may make data virtualization actually better-performing than the system as it was pre-implementation. These are:

· Querying optimization built into the data virtualization server

· The increasingly prevalent option of cross-data-source queries

· Data virtualization's ability to coordinate across multiple instances of the same database.

Let’s take these one at a time.

DV Per-Database Optimization

I remember that shortly after IBM released their DV product in around 2005, they did a study in which they asked a bunch of their expert programmers to write a set of queries against an instance of IBM DB2, iirc, and then compared their product's performance against these programmers'. Astonishingly, their product won – and yet, seemingly, the deck was entirely stacked against it. This was new programmer code optimized for the latest release of DB2, based on in-depth experience, and the DV product had that extra layer of software. What happened?

According to IBM, it was simply the fact that no single programmer could put together the kind of sophisticated optimization that was in the DV product. Among other things, this DV optimization considered not only the needs of the individual querying program, but also its context as part of a set of programs accessing the same database. Now consider that in the typical implementation, the deck is not as stacked against DV: the programs being superseded may have been optimized for a previous release and never adequately upgraded, or the programmers who wrote them or kept them current with the latest release may have been inexperienced. All in all, there is a significant chance (I wouldn't be surprised if it was better than 50%) that DV will perform better than the status quo for existing single-database-using apps "out of the box."
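
As a toy illustration of the kind of decision involved (my own simplification, not IBM's actual optimizer), consider the classic choice between an index probe and a full scan. A DV layer can re-make this choice per query from current statistics, where hand-written SQL tends to freeze one choice at development time:

    # Hypothetical cost model: the numbers and the 10x random-I/O penalty
    # are illustrative assumptions, not measured values.
    def cheapest_plan(table_rows, filter_selectivity):
        costs = {
            "index-probe": table_rows * filter_selectivity * 10.0,  # random I/O
            "full-scan":   table_rows * 1.0,                        # sequential read
        }
        return min(costs, key=costs.get)

    print(cheapest_plan(1_000_000, 0.001))  # index-probe: the filter is selective
    print(cheapest_plan(1_000_000, 0.400))  # full-scan: probing 400K rows costs more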

Moreover, that chance increases steadily over time – and so an apparent performance hit on initial DV implementation will inevitably turn into a performance advantage 1-2 years down the line. Not only does the percentage of "older" SQL-involving code increase over time, but the need for database upgrades (upgrading DB2 every 2 years, for example, should pay off in spades, according to my recent analyses) means that these DV performance advantages widen – and, if emulation is any guide, the performance cost from an extra layer never gets worse than 10-20%.

The Cross-Data-Source Querying Option

Suppose you had to merge two data warehouses or customer-facing apps as part of a takeover or merger. If you used DV to do so, you might see (as in the previous section) the initial queries to either app or data warehouse be slower. However, it seems to me that's not the appropriate comparison. You have to merge the two somehow. The alternative is to physically merge the data stores and maybe the databases accessing those data stores. If so, the comparison is with a merged data store for which neither set of querying code is optimized, and a database for which one of the two sets of querying code, at the least, is not optimized. In that case, DV should have an actual performance advantage, since it provides an ability to tap into the optimizations of both databases instead of sub-optimizing one or both.
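
A minimal sketch of the merger case, again with in-memory stand-ins for the two companies' databases (all schemas and data invented): each sub-query is pushed down to the database it was tuned for, and only the cross-source join runs in the DV layer:

    import sqlite3

    def connect(seed_rows):
        """A stand-in for one company's already-optimized customer database."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (email TEXT, revenue REAL)")
        conn.executemany("INSERT INTO customers VALUES (?, ?)", seed_rows)
        return conn

    company_a = connect([("jane@example.com", 100.0)])
    company_b = connect([("jane@example.com", 250.0)])

    def merged_revenue():
        # Each sub-query runs against the database it was optimized for...
        a = dict(company_a.execute("SELECT email, revenue FROM customers"))
        b = dict(company_b.execute("SELECT email, revenue FROM customers"))
        # ...and only the cross-source join/aggregation runs in the DV layer.
        return {email: a.get(email, 0) + b.get(email, 0)
                for email in a.keys() | b.keys()}

    print(merged_revenue())  # -> {'jane@example.com': 350.0}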

And we haven’t even considered the physical effort and time of
merging two data stores and two databases (very possibly, including the
operational databases, many more than that).DV has always sold itself on its major advantages in rapid
implementation of merging – and has constantly proved its case. It is no exaggeration to say that a year saved
in merger time is a year of database performance improvement gained.

Again, as noted above, this is not obvious in the first DV implementation. However, for those who care to look, it is definitely a real performance advantage a year down the line.

But the key point about this performance advantage of DV solutions is that this type of coordination of multiple databases/data stores, instead of combining them into one or even feeding copies into one central data warehouse, is becoming a major use case and strategic direction in large-enterprise shops. It was clear from DV Day that major IT shops have finally accepted that not all data can be funneled into a data warehouse, and that the trend is indeed in the opposite direction. Thus, an increasing proportion (I would venture to say, in many cases approaching 50%) of corporate in-house data is going to involve cross-data-source querying, as in the merger case. And there, as we have seen, the performance advantages are probably on the DV side, compared to physical merging.

DV Multiple-Instance Optimization

This is perhaps a consideration more suited to businesses abroad and to medium-sized businesses, where per-state or regional databases and/or data marts must be coordinated. However, it may well be a future direction for data warehouse and app performance optimization – see my thoughts on the Olympic database in a previous post. The idea is that these databases have multiple distributed copies of data. These copies have "grown like Topsy", on an ad-hoc, as-needed basis. There is no overall mechanism for deciding how many copies to create in which instances, and how to load balance across copies.

That’s what a data virtualization server can provide.It automagically decides how to optimize
given today’s incidence of copies, and ensures in a distributed environment
that the processing is “pushed down” to the right database instance. In other
words, it is very likely that data virtualization provides a central processing
software layer rather than local ones – so no performance hit in most cases –
plus load balancing and visibility into the distribution of copies, which
allows database administrators to achieve further optimization by changing that
copy distribution. And this means that DV should, effectively implemented,
deliver better performance than existing solutions in most if not all cases.
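
A sketch of that routing decision, with instance names, tables, and load figures all invented for illustration: the DV layer knows which instances hold a copy of each table and pushes each query down to the least-loaded one:

    # Which regional instances hold a copy of each table (illustrative).
    copies = {
        "orders":    {"us-east", "us-west"},
        "customers": {"us-east", "eu"},
    }
    load = {"us-east": 0.9, "us-west": 0.2, "eu": 0.4}  # current utilization

    def route(table):
        """Push the query down to the least-loaded instance holding a copy."""
        return min(copies[table], key=load.get)

    print(route("orders"))     # -> us-west (us-east is busy)
    print(route("customers"))  # -> eu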

Where databases are used both operationally (including for master data management) and for a data warehouse, the same considerations may apply – even though we are now in cases where the types of operation (e.g., updates vs. querying) and the types of data (e.g., customer vs. financial) may be somewhat different. One-way replication with its attendant ETL-style data cleansing is only one way to coordinate the overall performance of multiple instances, not to mention queries spanning them. DV's added flexibility gives users the ability to optimize better in many cases across the entire set of use cases.

Again, this advantage may not have been perceived (a) because not many implementers are focused on the multiple-copy case and (b) because DV performance is probably compared against that of each individual instance instead of, or as well as, against the multiple-instance database as a whole. Nevertheless, at least theoretically, this performance advantage should appear – and especially because, in this case, the "extra software layer" should not typically add any DV performance cost.

The User Bottom Line:Where’s The Pain?

It seems that, theoretically at least, we might expect to
see actual performance gains over the next 1-2 years over “business as usual”
from DV implementation in the majority of use cases, and that this proportion
should increase, both over time after implementation and as corporations’
information architectures continue to elaborate. The key to detecting these
advantages now, if the IT shop is willing to do it, is more sophisticated
metrics about just what constitutes a performance hit or a performance
improvement, as described above.

So maybe there isn’t such a performance tradeoff for all the
undoubted benefits of DV, after all.Or
maybe there is.After all, there is a
direct analogy here with agile software development, which seems to lose by
traditional metrics of cost efficiency and quality attention, and yet winds up
lower-cost and higher-quality after all.The key secret ingredient in both is ability to react or proact rapidly
in response to a changing environment, and better metrics reveal that overall
advantage. But the “tradeoff” for both DV and agile practices may well be the
pain of embracing change instead of reducing risk.Except that practitioners of agile software
development report that embracing change is actually a lot more fun.Could it be that DV offers a kind of support
for organizational “information agility” that has the same eventual effect:
gain without pain?

Impossible. Gain without pain. Perish the very thought. How will we know we are being organizationally virtuous without the pains that accompany that virtue? How could we possibly improve without sacrifice?

Well, I don’t know the answer to that one.However, I do suggest that maybe, rather than
the onus being on DV to prove it won’t torch performance, the onus should perhaps
be on those advocating the status quo, to prove it will.Because it seems to me that there are
plausible reasons to anticipate improvements, not decreases, in real-world performance
from DV.

Wednesday, October 10, 2012

Ten years ago I put out the first EII (now data
virtualization) report. In it I said:

· The value of DV is both in its being a database veneer across disparate databases, and in its discovery and storage of enterprise-wide global metadata

· DV can be used for querying and updates

· DV is of potential value to users, developers, and administrators. Users see a far wider array of data, and new data sources are added more quickly/semi-automatically. Developers have "one database API to choke." Administrators can use it to manage multiple databases/data stores at a time, for cost savings

· DV can be great for mergers, metadata standardization, and as a complement to existing enterprise information architectures (obviously, including data warehousing)

· DV is a "Swiss army knife" that can be used in any database-related IT project

· DV is strategic, with effects not just on IT costs but also on corporate flexibility and speed to implement new products (I didn't have the concept of business agility then)

· DV can give you better access to information outside the business.

· DV can serve as the "glue" of an enterprise information architecture.

I’m at DV Day 2012 (Composite Software) in NYC. Today, for
the first time, I have heard not just vendors talking about implementing these
things, but users actually doing them – global metadata, user self-service
access to the full range of corporate data, development using SQL access, use
for updating in operational databases, use for administration of at least the
metadata management of multiple databases as well as archiving administrative
tasks, use in mergers, use in “data standardization”, use as an equal partner
with data warehousing, use in just about every database-related IT project,
selling by as strategic to the business with citations of impacts on the bottom
line through strategic products as well as “business agility”, use to access extra-enterprise Big Data across multiple clouds, and use as the
framework of an enterprise information architecture.

I just wanted to say, on a personal note, that this is what makes being an analyst worthwhile. Not that I was right 10 years ago, but that, for once, by the efforts of everyone pulling together to make the point and elaborate the details, we have managed to get the world to implement a wonderful idea. I firmly believed that, at least in some small way, the world would be better off if we implemented DV. Now, it's clear that that's true, and that DV is unstoppable.

Monday, October 8, 2012

On a weekend, bleak and dreary, as I was pondering, weak and
weary, on the present state and future of data virtualization technology, I was
struck by a sudden software design thought.
I have no idea whether it’s of worth; but I put it out there for discussion.

The immediate cause was an assertion that one future use for
data virtualization servers was as an appliance – in its present meaning,
hardware on which, pretty much, everything was designed for maximum performance
of a particular piece of software, such as an application, a database, or, in
this case, a data virtualization solution.
That I question: by its very
nature, most keys to data virtualization performance lie on the servers of the
databases and file management tools data virtualization servers invoke, and it
seems to me likely that having dedicated data-virtualization hardware will make
the architecture more complicated (thereby adding administrative and other
costs) to achieve a minimal gain in overall performance. However, it did lead to my thought, and I
call it the “Olympic database.”

The Olympic Database

Terrible name, you say.
This guy will never be a marketer, you say. That’s true. In fact, when they asked me as a
programmer for a name for Prime Computer software doing a graphical user
interface, I suggested Primal Screens.
For some reason, no one’s asked me to name something since then.

Anyway, the idea runs as follows. Assemble the usual array of databases (and Hadoop, yada). Each will specialize in handling particular types of data. One can imagine splitting relational data between that suited for columnar and that not so suited, and then applying a columnar database to the one and a traditional relational database to the other, as Oracle Exadata appears to do. But here's the twist: each database will also contain a relatively small subset of the data in at least one other database – maybe of a different type. In other words, up to 10%, say, of each database will be a duplicate of another database – typically, the data that queries will want in cross-database queries, or the data that in the past a database incorporated just to save time switching between databases. In effect, each database will have a cache of data in which it does not specialize, with its own interface to it, SQL or other.

On top of that, we place a data virtualization server. Only this server's primary purpose is not necessarily to handle data of varying types that a particular database can't handle. Rather, the server's purpose is to carry out load balancing and query optimization across the entire set of databases. It does this by choosing the correct database for a particular type of data – any multiplexer can do that – but also by picking the right database among two or several options when all the data is found in two or more databases, as well as the right combination of databases when no one database has all the data needed. It is, in effect, a very flexible method of sacrificing some disk space for duplicate data for the purpose of query optimization – just as the original relational databases sacrificed pure 3NF and found that duplicating data in a star or snowflake schema yielded major performance improvements in large-scale querying.
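
Here's a toy router for the idea (every store name, table name, and cache overlap is invented for the sketch): because each specialized store also holds a small copy of a neighbor's data, more queries can be satisfied by a single store before falling back to a cross-database plan:

    from itertools import combinations

    # Each specialized store also holds a small "cache" slice of a neighbor.
    holdings = {
        "columnar":   {"sales_facts", "dates", "customers_cache"},
        "relational": {"customers", "orders", "sales_facts_cache"},
        "document":   {"clickstream", "customers_cache"},
    }

    def plan(tables_needed):
        """Prefer any single store that covers the query (the overlap caches
        make more of these possible); else find the smallest cross-store plan."""
        covers = {s: {t.replace("_cache", "") for t in ts}
                  for s, ts in holdings.items()}
        for store, tables in covers.items():
            if tables_needed <= tables:
                return [store]
        for k in range(2, len(covers) + 1):
            for combo in combinations(covers, k):
                if tables_needed <= set().union(*(covers[s] for s in combo)):
                    return list(combo)
        return []

    print(plan({"sales_facts", "customers"}))  # -> ['columnar'], via its cache copy
    print(plan({"orders", "clickstream"}))     # -> ['relational', 'document']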

Now picture this architecture in your mind, with the data
stores as rings. Each ring will
intersect in a small way with at least one other data-store “ring” of
data. Kind of like the Olympic
rings. Even if it’s a terrible name,
there’s a reason I called it the Olympic database.

It seems to me that such an Olympic database would have three
advantages over anything out there:
specialization in multiple-data-type processing in an era in which that’s
becoming more and more common, a big jump in performance from increased ability
to load balance and optimize across databases, and a big jump in the ability to
change the caches and hence the load to balance dynamically – not just every
time the database vendor adds a new data type.

Why Use a Data Virtualization Server?

Well, because most of the technology is already there – and that’s
certainly not true for other databases or file management systems. To optimize queries, the “super-database” has
to know just which combination of specialization and non-specialization will
yield better performance – say, columnar or Hadoop “delayed consistency”. That’s definitely something a data
virtualization solution and supplier knows in general, and no one else does. We
can argue forever about whether incorporating XML data in relational databases is
better than two specialized databases – but the answer really is, it depends;
and only data virtualization servers know just how it depends.

The price for such a use of a data virtualization server
would be that data virtualization would need to go pretty much whole hog in
being a “database veneer”: full admin
tools, etc., just like a regular database. But here’s the thing: we wouldn’t get rid of the old data
virtualization server. It’s just as useful as it ever was, for the endless new
cases of new data types that no database has yet combined with its own
specialization. All the use cases of the
old data virtualization server will still be there. And an evolution of the data virtualization server will accept a fixed number of databases to support, with a fixed number of data types, in exchange for doing better than any of the old databases could in those conditions.

One of the fascinating things about the agile marketing movement is its identification of leverageable similarities with agile development, as well as the necessary differences. Recently, the Boston branch of the movement identified another possible point of similarity: an analogy with the agile-development concept called "technical debt." I promised myself I'd put down some thoughts on the idea of "customer debt", so here they are.

What Is Technical Debt?

Over the last few
years, there has been increasing “buzz” in agile development circles and
elsewhere about the concept of “technical debt.” Very briefly, as it stands
now, technical debt appears to represent
the idea of calculating and assigning the costs of future repair to deferred
software maintenance and bug fixes, especially those tasks set aside in the
rush to finish the current release. In fact, “technical debt” can also be stretched to include those
parts of legacy applications that,
in the past, were never adequately taken care of. Thus, the technical debt of a
given piece of software can include:

1. Inadequacies in documentation that will make future repairs or upgrades more difficult if not impossible.

2. Poor structuring of the software (typically because it has not been "refactored" into a more easily changeable form) that makes future changes or additions to the software far more difficult.

3. Bugs or flaws that are minor enough that the software is fully operational despite them, but that, when combined with similar bugs and flaws, such as those that have escaped detection or those that are likewise scanted in future releases, will make the software less and less useful over time, and bug repairs more and more likely to introduce new, equally serious bugs. It should be noted here that "the cure is worse than the disease" is a frequently reported characteristic of legacy applications from the 1970s and before – and it is also beginning to show up even in applications written in the 2000s.

For both traditional and agile development, the rubber meets the road when release time nears. At that point, the hard decisions about what functionality goes into the release and what doesn't, and what gets fixed and what doesn't, must actually be made. This is the point at which technical debt is most likely to be incurred – and, therefore, presenting "technical debt" costs to the project manager and corporate to influence their decisions at this point is a praiseworthy attempt to inject added reality into those decisions. This is especially true for agile development, where corporate is desperately clinging to outdated metrics for project costs and benefits that conflict with the very culture and business case of agile development, and yet the circle must be squared. Technical debt metrics simply say at this point: Debt must be paid. Pay me now, or pay me more later.

I have several concerns about technical debt as it is typically used today. However, imho, the basic concept is very sound. Can we find a similar concept in marketing?

What’s My Idea of Customer Debt?

Here I focus on product marketing – although I am assured that there are similar concepts in branding and corporate messaging, as well. Product marketing can be thought of as an endless iteration of introduction of new solutions to satisfy a target audience: the customer (customer base, market). However, as I argue in a previous discussion of continuous delivery, there is some sense in which each solution introduction falls behind the full needs of the customer: because it was designed for a previous time when customer needs were a subset of what they are now, or because hard decisions had to be made when release time neared about what stayed and what was "deferred." Customer debt I see as basically both of those things: lagging behind the evolution of customer needs, and failing to live up to all of one's "promises."

More abstractly, in an ongoing relationship between customer and vendor, the vendor seeks to ensure good-customer loyalty by staying as close as possible to the customer, including satisfying the customer's needs as they change, as well as it is able. Yes, the market may take a turn like the iPhone that is very hard to react swiftly to; but within those constraints, the agile marketer should be able in both solution design and messaging to capture both customer needs and the predictable evolution of those needs. "We just don't have the budget" for communicating the latest and greatest to a customer effectively enough, "but we'll do it when the product succeeds," and "sorry, development or engineering just can't fit it into this release, so we have to make some hard choices, but it'll be in the next release," are examples of this kind of customer debt.

The problem with this kind of customer debt is that it is not as easy to measure as technical debt – and technical debt isn't that easy to measure. However, I believe that it should be possible to find some suggestive metrics in many agile marketing efforts. The project backlog is an obvious source of these. There will always be stuff that doesn't get done, no matter how many sprints you can do before the grand announcement. There should be a way (using analytics, of course) to project the effect of not doing them on immediate sales. However, as in the case of technical debt, costs (e.g., of lost customer loyalty and branding opportunities) should rise sharply the longer you fail to clear up the backlog. I don't yet see a clear way of identifying the rate of growth of such costs.

The Value of the Concept of Customer Debt

To me, the key value of this concept is that up to now, in my experience, neither corporate hearing marketing's plans nor, to some extent, marketing itself has really assessed the damage done by delaying marketing efforts, and therefore has tended to assume there is none – or, at least, none that every other vendor isn't dealing with. I think it is very possible that once we take a closer look at customer debt, as in the case of technical debt, we will find that it causes far more damage than we assumed. And, again as in the case of technical debt, we will have numbers to put behind that damage.

I am reminded of my colleague David Hill's story about the Sunday school class asked if each would like to go to Heaven. All agreed except little Johnny. What's the matter, Johnny, asked the teacher, don't you want to go to Heaven? Oh, yes, Johnny replied, but I thought you meant right now. In the same way, those who would abstractly agree that deferring marketing dollars might hurt a customer relationship but not really mean it might have a very different reaction when confronted with figures showing that it was hurting sales right now – not to mention even worse in the immediate future.

The Marketing Bottom Line

Lest you continue to think that losses from piling up customer debt are not likely to be substantial, let me draw an example from technical debt, in projects in which features are delayed or features were not added to meet new customer needs arriving over the course of the project. It turns out that these losses do not end if we simply fix the problems in the next release. On the contrary – and this is an absolutely key point – by deciding not to spend on this kind of technical debt, the organization is establishing a business rule, and that means that in every other release beyond this one, the organization will be assumed to apply the same business rule – which means, dollars to donuts, that in all releases involving this software over, say, the next two years, the organization will not fix this problem either.

Wait, bleat the
managers, of course we’ll fix it at some point. Sorry, you say, that is not the
correct investment assessment methodology. That is wrong accounting. In every
other case in your books, a business rule stays a business rule, by assumption,
unless it is explicitly changed. Anything else is shoddy accounting. And you
can tell them I said so.

(If you want to go into the fine details, by the way, the reason this is so is that the organization is fixing this problem in the next release, but deferring another problem. So, instead of one problem extended over two years, you have 24 monthly problems, if you release monthly. Same effect, no matter the release frequency.)

So what does this mean? It means that, according to a very crude experience-suggested guess, over those two years, perhaps an average of 2-3 added major features will be delayed by about a month. Moreover, an average of 2-3 other major new features will have been built that depend on this software, and so, whether or not you do fix the original problem, these will be delayed by up to a month. It is no exaggeration to say that major "technical debt" in today's terms equates to moving back the date of delivery for all releases of this particular product by a month. That is in strict accounting terms, and it is conservatively estimated (remember, we didn't even try to figure out the costs of software maintenance, or the opportunity costs/revenue losses of product releases beyond that).

So, to sum up, my
version of technical debt is an estimation of the costs from a decrease in
business agility. From my surveys, agile software new product development (NPD)
appears to deliver perhaps 25% gross margin improvements, year after year, over
the long term, compared to traditional NPD. 1/24th of that is still a decrease
of more than 1% in gross margin – not to mention the effects on
customer satisfaction (although, today, you’re probably still way ahead of the
game there). Let’s see, do this on 25 large parts of a large piece of software
and – oops, I guess we don’t seem to be very agile any more. Wonder what
happened?
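
For the skeptical, the arithmetic above works out as a quick back-of-envelope check (the 25% figure is the survey-based estimate quoted in the paragraph above; the rest follows from it):

    # Back-of-envelope check of the gross-margin arithmetic above.
    agile_margin_edge = 0.25                      # yearly gross-margin edge of agile NPD
    one_component_slip = agile_margin_edge / 24   # one month lost out of 24 months
    print(f"{one_component_slip:.2%}")            # ~1.04% of gross margin, per component

    # Repeat the same deferral habit on 25 large parts of one product:
    print(f"{25 * one_component_slip:.0%}")       # ~26%: the whole agility edge, gone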

Now, translate that to marketing terms: an advertising venue unexploited, a product feature delayed, a misidentification of the customer from too-low analytics spending. Again, we are talking about an integral part of new product development – the marketing end. And we are talking just as directly about customer satisfaction. Why shouldn't the effects of customer debt be comparable to the effects of technical debt?

So my initial thoughts on customer debt are:

· It's a valuable concept.

· It can potentially be a valuable addition to agile marketing project metrics.

Sunday, October 7, 2012

A while back, discussions of Arctic sea ice, methane, and
other related matters seemed dominated by the idea that there was a “tipping
point” involved, a point before which we could return to the halcyon equilibria
of yore, and after which we were irrevocably committed to a new, unspecified,
but clearly disastrous equilibrium.
Surprisingly, this idea was recently revived as the overriding theme of
a British Government report assessing trends in Arctic sea ice and their likely
effects on the UK itself. It is cast as
a debate between Profs. Slingo of the Met Office and Wadhams, and the report
comes out in indirect but clear support of Wadhams' position that these trends
are in no sense “business as usual”.
However, it casts this conclusion as the idea that there are “tipping
points” in methane emissions, carbon emissions, and sea ice extent, that these
are in danger of being crossed, and that once these are crossed the consequences
are inevitable and dire – an idea that seems prevalent in national discussions
of an emissions “target” of no more than enough tonnage to cause no more than 2
degrees Centigrade warming by 2100.

Here, I’ll pause for a bit of personal reminiscence. My late
father-in-law, who was a destroyer captain in WW II, told me that once during
the early days of the US’ involvement, during a storm in the North Atlantic,
the destroyer heeled over by 35 degrees. Had it heeled over by 1 or 2 more
degrees, it would have turned turtle and probably all lives aboard would have
been lost. As it was, it righted itself with no casualties.

That, to me, is a real “tipping point”. The idea is that up
to a certain amount of deviation, the tendency is to return to the equilibrium
point; beyond that, a new equilibrium results. 35 degrees or less, the ship
tends to return to an upright position; beyond that, it tends to go to an
upside down position, and stay there.

So what’s wrong with applying the idea of such a tipping
point to what’s going on in climate change?
Superficially, at least, it’s a great way to communicate urgency, via
the idea that even if it’s not obvious to all that there’s a problem, we are
rapidly approaching a point of no return.

Problem One: It Ain’t True

More specifically, if there ever was a “tipping point” in
Arctic sea ice, carbon emissions, and methane emissions, we are long past
it. The correct measure of Arctic sea
ice trends, now validated by Cryosat, is volume. That has been on an accelerating
downward trend just about since estimates began in 1979, clearly driven by
global warming, which in turn is clearly driven by human-caused carbon
emissions. Atmospheric carbon itself has risen in an accelerated fashion, from a growth rate of about 1 ppm/year in 1950 at the start of measurements to about 2.1-2.5 ppm/year today. Methane emissions from natural sources (a follow-on to carbon
emissions’ effect on rising global temperature) were not clearly a factor until
very recently, but it is becoming clear that they have risen a minimum of
20-30% over the last decade, and are accelerating. By way of context, these methane
emissions are accompanied by additional carbon emissions beyond those in
present models, with the methane emissions being about 3% and the carbon
emissions being about 97% of added emissions from such sources as permafrost,
but with the methane being 20 to 70 times as potent, for a net effect that is
double or triple that of the added carbon emissions alone – an effect that adds
(in a far too optimistic forecast) around 0.5 to 1 degree Celsius to previous warming
forecasts by 2100.
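
As a rough check on that "double or triple" claim, multiplying out the quoted shares and potency range (my own arithmetic, not a climate model):

    # Added emissions from sources such as permafrost are ~97% carbon and
    # ~3% methane, with methane 20 to 70 times as potent (figures quoted above).
    carbon_share, methane_share = 0.97, 0.03
    for potency in (20, 70):
        total = carbon_share + methane_share * potency
        print(f"{potency}x potency -> {total / carbon_share:.1f}x the carbon-only effect")
    # 20x potency -> 1.6x; 70x potency -> 3.2x, bracketing "double or triple"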

In other words, it is extremely likely that the idea of keeping
global warming to 2 degrees Celsius is toast, even if our carbon atmospheric
ppm levels off at around 450.

Problem Two: We Need To Understand It Can Always Get Worse

Yet the idea that we can combat global warming deniers or
make things plain to folks reasonably preoccupied with their own problems by
saying “we’re on a slippery slope, we’re getting close to a disaster” is that
it is all too easily obfuscated or denied, and the sayer labeled as one who “cries
wolf.” Rather, we need to communicate the idea of a steadily increasing problem
in which doing nothing is bad and doing the wrong thing (in this case, adapting
to climate change by using more energy for air conditioning and therefore
drilling for more oil and natural gas, increasing emissions) is even worse. This
idea is one that all too many voters in democracies find it hard to understand,
as they vote to “throw the bums out” when the economy turns bad without being
clear about whether the alternative proposal is better. How’s that working out for you, UK?

The sad fact is that even when things are dreadful, they can
always get worse – as Germany found out when it went from depression and a
Communist scare to Hitler. It requires that both politicians and voters somehow
manage to find better solutions, not just different ones. For example, in Greece today, it appears (yes, I may be uninformed) that
one party that was briefly voted in may well have had a better solution that
involved questioning austerity and renegotiating the terms of European support.
Two parties committed to doing nothing, and one far right-wing party committed
to unspecified changes in government that probably threatened democracy. After
failing to give the “good” party enough power in one election, the voters
returned power to the two do-nothing parties, with the result that the
situation continues to get worse. Now, more than a fifth of voters have
gravitated to the far-right party, which would manage to make things yet
worse.

And that is the message that climate change without tipping points is delivering: not that changing our ways is useless because we have failed to avoid a tipping point, but that doing the right thing is becoming more urgent, because if we do nothing, things will get worse in an accelerating fashion, and if we do the wrong thing, things will get even worse than that. Tipping point? One big effort, and it'll be over one way or another. Accelerating slide? You pay me now, or you pay me much more later.

Or Is That Au Revoir?

An old British comedy skit in the revue Beyond the Fringe, a
take-off on WW II movies, has one character tell another: “Perkins, we need a futile gesture at this
stage. Pop over to France. Don’t come back.” The other responds: “Then goodbye,
sir. Or perhaps it’s au revoir [until we meet again]?” The officer looks at him
and simply says “No, Perkins.”

The idea of a tipping point in climate change is like that
hope that somehow, some way, things just might return to the good old days. But
there is no au revoir. Say goodbye to
tipping points. Say hello to “it can always get worse.”

Monday, September 24, 2012

"Responding to Rupert Murdoch’s disinformation campaign, one Australian climate scientist put it bluntly: 'The Murdoch media empire has cost humanity perhaps one or two decades of time in the battle against climate change.'"Or, to put it another way: perhaps 500 million murders.I really, really, really, really hope that I am wildly exaggerating, even though the evidence suggests that maybe not. Because 500 million here, 500 million there, and pretty soon -- perhaps 40 years from now -- those 500 million we delay too late to save will be our own great-great-grandchildren.

Sunday, September 23, 2012

I have a feeling that a fair number of readers – especially vendors and IT BI types – are going to be upset by what I have to say in this post. However, viewing some of the material that has passed across my desk recently, I really think it's time to raise the question of whether too much organizational power given to data warehouse folks is beginning to cause some significant under-performance in meeting today's key organizational information management needs.

The immediate occasion for these reflections is that I am
partway through a book on a related subject that goes into some detail on data
warehousing’s view of the world: how BI
should be handled, what the organizational information architecture should be,
and how we got this way. This book will
remain nameless, because in many ways it’s an excellent primer. However, over the last 22-31 years (depending
on whether you count my software development days), I have had a
cross-organization, cross-vendor view of the same area, and I have to say that
the book redefines history and the purposes of various things in the ideal
information architecture in major ways.

Usually, I find that going over history just wastes time in a blog post – but here, it helps to see how data warehousing's concepts of common information management terms lead its practitioners to reinterpret the purposes of the underlying products, making the information architecture – and the whole information handling process – potentially (and, probably, actually) less effective in the medium and long term. So let's combine history and exposition of my assertion.

A Data Warehousing View of the World

In brief, the book’s view of the information architecture
seems to be as follows: Data of all types comes in to production systems, which
immediately pass it on to the data warehouse for cleansing and aggregation.
Behind the data warehouse is an optional operational data store for key data,
and things like master data management operate in parallel with the data
warehouse to provide a global view of multiple local ways to store customer
data. On top of the data warehouse are key Business Intelligence applications,
which include both repetitive, scheduled reporting and analytics.

Now, this view of the world seems reasonable if you were
born yesterday, or if you’ve spent the last fifteen years entirely in data
warehousing. However, there are, in my
view, some major problems with it.

In the first place, afaik, only in data warehousing are the
databases at the initial entry point referred to as “production systems”. For twenty years, I have been calling them
“operational databases”. In fact, they were business-critical before data
warehousing existed, and so were the apps on top of them – like ERP.

Why does this matter? Because it allows data warehouse folks
to shift the “operational data store” behind the data warehouse. The operational data store is a later
concept, and one that I (among others, I assume) wrote papers proposing around
2004 and 2005. The idea is that the data warehouse is simply too slow to react
immediately to key operational data – but that operational data is scattered
across multiple operational databases, and so an “operational data store”
makes sure that a subset of operational data for quick decision-making is
either put in a central point for quick analysis in parallel with its arrival,
or monitored by a central “virtual database.” Putting the operational data
store behind the data warehouse defeats its entire purpose.
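
To make the contrast concrete, here is a minimal sketch in Python – with invented names and data, since no particular product is being described – of an operational data store operating in parallel with arrival, so that a subset of the data is queryable seconds after it hits the operational database, while the warehouse still gets its cleansed batch later:

    import queue

    ods = {}                         # in-memory operational data store: order_id -> record
    warehouse_batch = queue.Queue()  # full records awaiting the nightly cleanse-and-load

    def on_operational_write(record):
        """Called as each record arrives in an operational database."""
        # In parallel with arrival: keep a subset of fields for quick decision-making.
        ods[record["order_id"]] = {k: record[k] for k in ("order_id", "amount", "region")}
        # Separately, queue the full record for the warehouse's delayed batch load.
        warehouse_batch.put(record)

    def regional_exposure(region):
        """An immediate query the warehouse could answer only after its next load."""
        return sum(r["amount"] for r in ods.values() if r["region"] == region)

    on_operational_write({"order_id": 1, "amount": 500.0, "region": "EMEA", "notes": "rush"})
    print(regional_exposure("EMEA"))  # usable seconds after arrival, not hours later

Move that dictionary behind the warehouse’s cleanse-and-load cycle, and the “immediate” query is immediate no longer.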

Likewise, the master data management system. I wrote papers
on this in assessing IBM’s version of the concept in 2006 and 2007. Again,
the notion was of combining operational data coming into operational databases
– in this case, by enforcing a common format that allowed cross-organization
and cross-country leveraging of operational data by ERP and customer
intelligence apps. By redefining master data management as existing within
the data warehouse or at the same remove from operational databases, data
warehouse folks ensure that master data management moves no faster than the
data warehouse.
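
Again, a toy Python illustration may help – the field names and systems here are my own inventions, not any vendor’s schema. The point is that mastering happens where the data enters the operational databases, so that ERP and customer intelligence apps can leverage a common format immediately:

    MASTER_FIELDS = ("customer_id", "name", "country")

    def to_master(local_record, field_map):
        """Map one system's local field names onto the common master format."""
        master = {mf: local_record[lf] for mf, lf in field_map.items()}
        missing = set(MASTER_FIELDS) - set(master)
        if missing:
            raise ValueError("cannot master record; missing: %s" % missing)
        return master

    # Two operational systems storing the same customer differently:
    us_erp = {"cust_no": "C-17", "cust_name": "Acme", "ctry": "US"}
    de_crm = {"kunde_id": "C-17", "firma": "Acme GmbH", "land": "DE"}

    print(to_master(us_erp, {"customer_id": "cust_no", "name": "cust_name", "country": "ctry"}))
    print(to_master(de_crm, {"customer_id": "kunde_id", "name": "firma", "country": "land"}))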

And finally, there is the idea that (implicitly) analytics
is entirely contained in BI, and hence is entirely dependent on the data
warehouse. On the contrary, an increasing amount of analytics goes on outside
of BI. For example, analytics is part of
products that analyze computer infrastructure semi-automatically to optimize
performance or detect upcoming problems. Or, it is used to analyze key
computer-supported business processes.
This is “intelligence” in the sense of “military intelligence” –
proactively going out and finding out what’s going on – but it is not “business
intelligence” in the sense of finding out what’s going on inside and outside
the business on the basis of data that is handed to you – data that your
reporting tools are too slow or shallow to interpret. In other words, these
applications of analytics lie entirely outside of a reactive data warehouse.

Why It Matters

There are two places where over-emphasis on data warehousing
can impede organizational BI and other information management
effectiveness: the information
architecture, and the organization’s “agility” in responding to new kinds of
information from outside. As I’ve suggested in the previous section, a data
warehousing view of the information architecture shifts operations that involve
lots of “updates” and data just arrived from outside to the data warehouse or
behind it. That means going through the
data-warehouse cleansing and aggregation process and arriving in a centralized
location that is handling queries from all over the organization and is
optimized for adding new data not “on the fly” but in delayed bursts. There is
simply no way that is going to be as timely as performing tasks on the data as
it arrives in the operational systems.

Just as troubling, the entire emphasis of the organization
is now more reactive and focused farther away from the organization’s
“antennae” to the outside environment. The IT organization appears to be
focused on responding to new demands from business for timelier data, not
actively seeking the latest new information and merging it back into existing
systems. The IT organization appears to emphasize cleaning up the data and
merging it and only then analyzing it at an internal “choke point”, rather than
handling the information faster where it arrives.

If you think these concerns are theoretical, think about the
case of social-media Big Data. Yes, Oracle as a major vendor is emphasizing
inhaling huge amounts of this data from multiple clouds into the data warehouse
and then analyzing it – when the whole purpose of the NoSQL movement is to
allow rapid in-cloud analysis of inconsistent, uncleansed data – but it would
not do so unless there were some organizational push to avoid analytics outside
the data warehouse. I conclude that
there is some strong evidence that a data warehousing focus is impeding
organizational ability to process and feed to business decision makers key
information in as timely a fashion as possible.
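
For the skeptical, here is a rough Python sketch of the NoSQL-style alternative – records and fields invented for illustration – in which inconsistent, uncleansed social-media data is analyzed right where it lands, with no warehouse-style cleansing or common schema required first:

    raw_posts = [
        {"user": "a", "text": "love the new gadget!", "likes": 12},
        {"user": "b", "text": "GADGET broke :("},   # no 'likes' field
        {"text": "gadget??", "likes": "3"},         # no user; likes arrived as a string
    ]

    def mentions(posts, term):
        """Count mentions without requiring a consistent schema or prior cleansing."""
        return sum(1 for p in posts if term.lower() in str(p.get("text", "")).lower())

    print(mentions(raw_posts, "gadget"))  # -> 3, despite three different "schemas"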

Moreover, there is some sense that this is not an
organizational quirk but a tendency so embedded in the IT organization that
this impediment is a symptom not of a temporary problem that is easy to fix,
but rather of an organizational “disease.” In other words, simply directing the
organization to pay more attention to doing social-media processing in the
cloud will probably not work.

Action Strategies and Conclusion

First (although I think there is little danger of this) I
must caution against throwing the baby out with the bathwater. There are very good reasons to have a data
warehouse performing the core functions of querying for BI. I have, in the
past, conjectured that if I were to design a new information architecture
today, I might not create a data warehouse or data mart at all – instead, I
might impose “data virtualization” and master data management tools over
existing operational databases. However, practically speaking, in most if not
all cases, the sheer experience behind today’s data warehousing products makes
them far preferable for core functions.
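
For those curious what the data virtualization alternative would look like, here is a minimal Python sketch – sources and schemas invented – of a single “virtual” query fanned out to live operational databases and merged on the fly, with no warehouse copy of the data:

    # Stand-ins for two live operational databases:
    orders_east = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": 250.0}]
    orders_west = [{"id": 7, "amount": 80.0}]

    SOURCES = {
        "east": lambda: orders_east,  # in reality, a live query against each database
        "west": lambda: orders_west,
    }

    def virtual_order_totals():
        """One 'virtual' query, pushed out to every source and merged on the fly."""
        rows = [row for fetch in SOURCES.values() for row in fetch()]
        return len(rows), sum(r["amount"] for r in rows)

    print(virtual_order_totals())  # -> (3, 430.0), reflecting current operational state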

Rather, I would suggest that data warehousing be placed
under, and be responsive to rather than dominant over, an information
architecture and information strategy function aimed more at the edge of the
organization than its central data center. This is not a matter of making the
organization more responsive to the business; it is a matter of making the IT
organization more agile (by my definition, which stresses the utility of
proactive and outside-the-organization-directed agility).

Until I saw this book, which suggested that data warehouse
folks had gone too far in asserting “IT information handling is all about the
data warehouse”, I was not too concerned about data warehousing folks; I would get
into annoying arguments with folks who thought I just didn’t “get” data
warehousing, but it seemed to me that the benefits of a powerful
database-related IT function outweighed the negatives of data warehouse folks’
“not invented here” blind spots. Now, I
am rethinking my position. If the result
of this type of rewriting of history is an increasingly sub-optimal information
architecture, then such a “disease” is not so harmless after all.

Does your organization suffer from data warehouse
disease? If so, what do you think should
be done about it?

Monday, September 17, 2012

At this time, Arctic sea ice extent has now reached about 3.47 mkm2, about 21% below the previous record; area has reached about 2.23 mkm2, about 23% below the previous record; and scientists report that up to 150 miles from the Pole (as far as they investigated) the ice was very thin and broken into small pieces. Apparently, this means that present measures of extent are overestimating it. For the first time since monitoring began, measures of air temperature above 80 degrees North are not decreasing towards the refreeze point. What more can I say than I have said?

Monday, September 3, 2012

The last five or so years have provided a useful case study
in lying and how to see through the lies – useful in assessing products, in
assessing strategies, and in reassessing one’s national and global views that
affect how we all act in business and out of it. I am referring to the
fascinating case of following the ups and downs of Arctic sea ice.

What does this have to do with lying in daily life? We’ll see.

Setting the Stage

Starting in the late 1960s, scientists began to establish
that climate – the overall patterns of temperature, wind, and precipitation at
various points on the globe within which weather fluctuates – was being
affected by carbon in the atmosphere, and the data began to suggest that human
carbon emissions were a major if not the primary cause of this change.

In reaction, self-styled “skeptics” began to deny the role
of humans in climate change. Over time, these became known as “climate change
deniers” or “climate deniers” for short.

One of the key areas of focus of both climate scientists and
deniers has been Arctic sea ice. Climate change science predicts that Arctic
sea ice will melt due to human-caused climate change, first to almost nothing
at minimum in September, and eventually year-round. Satellite and buoy data
available beginning in 1979 has kept track of the area and extent (that is,
area including cells with both open water and ice). A model supplemented by
sampling has estimated volume (including the depth of the ice), and this year
for the first time a good method of measuring volume has supplemented the
model.

The reason that Arctic sea ice is of such fascination is
that it is the equivalent of a “canary in a coal mine.” Like the canaries that
coal miners carried with them whose sickness and death were a first warning of
bad air in the mine, Arctic sea ice tells just how imminent major human-caused
climate change is, and how quickly it is proceeding – and it is one of the
first really visible signs of major change.

However, until very late in the process of melting, Arctic
sea ice diminution is not very visible. What we see on the surface is the area
and extent, and the ice is being melted on the top, bottom, and sides every
year, and then in winter it is being frozen again. In essence, Arctic sea ice
is more or less like a giant thin ice cube floating in the Arctic Ocean, with
wind and currents constantly pushing ice out of the ocean to melt at one end
and new ice forming at the other end. As a result, volume may drop steadily
year after year, and only in September of one year late in the process (when
the ice becomes too thin at minimum) do we see major drops in area and extent.
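
A back-of-envelope calculation shows why. Using purely invented numbers in Python – not real Arctic measurements – thin a uniform slab by a fixed amount each year and watch which measure moves:

    area = 6.0            # million km2 of September ice, hypothetical
    thickness = 2.0       # meters of average thickness, hypothetical
    melt_per_year = 0.25  # meters of thinning per summer, hypothetical

    for year in range(1, 9):
        thickness = max(thickness - melt_per_year, 0.0)
        if thickness == 0.0:
            area = 0.0                # the slab finally vanishes all at once
        volume = area * thickness     # volume falls every single year...
        print(year, area, round(volume, 2))  # ...area moves only at the very end

For seven straight years the visible area is unchanged while volume falls from 10.5 to 1.5; then everything collapses at once.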

The Lie

The basic lie of the climate denier is that there is no such
thing as human-caused global warming. Behind that lie is a psychological
message: You (the listener) need not be
forced to do anything about it, or even think about it, except as an amusing
hobby. Those who insist are “them”, and
they are trying to bother “us” for selfish purposes. Stand guard against “them”,
and do not be fooled.

Behind that lie is an endless series of “fall-back
positions.” Global climate change is not occurring. It is not human-caused, but
caused by many other factors. The data on each bit of evidence is wrong, or not
to be trusted, because it comes from “them.” And each argument visibly and
clearly refuted simply means that the denier stops talking about that argument
and focuses on the next one, while preserving the basic lie.

In politics, there is a final fall-back position, in which
the politician pretends that he or she never was a proponent of the lie in the
first place. However, the psychological message remains: Yes, I agree with human-caused climate change
whole-heartedly and always have (!), but it’s no big deal. Let’s do as little
as possible, as slowly as possible, because the methods of dealing with it are
the ones being pushed by “them,” for their own selfish purposes.

Politics is particularly relevant here, because in this case
governments fund data collection. The less data collected, the less easily the
lie is exposed. The corresponding case in business is the collection of
customer and accounting data. The “power center” in the company has a vested
interest in saying that present strategies and tactics are not wrong-headed. In
many cases, it can be difficult to tell the source of failure. There is, for
example, the reported case of the performance-testing team that reported a slow
software product, to the point of likely major customer dissatisfaction – the person
in charge simply fired the team, and blame for the resulting poor sales was
passed to his successor.

So the denier alleges initially that there is no change in
Arctic sea ice that is not accounted for by “natural variability.” He or she
drops or alters arguments to suit over the years as data comes in. And each new
or continuing listener, safe in the cocoon of the lie, moves ever further into
delusion.

The Lie Exposed

Perhaps the foremost exponent of the Arctic sea ice variant
of the lie is Anthony Watts of WattsUpWithThat – although Andrew Revkin of the
NY Times has apparently played a subtler, persistent denier role in his blog.
Over the last 2-3 years I have followed at a distance the evolution of their
arguments as the data on Arctic sea ice continues to come in. However, as we
will see, the fallout from 30 and more years of previous lies has also affected
what happens as the lie gets exposed.

Let’s start with an odd event: Al Gore a little over 2 years
ago embracing some scientific predictions that Arctic sea ice would go to near
zero by about 2016. Now, I know that some people reading this will immediately
want to stop reading, because they have an image of Al Gore as an untrustworthy
politician. Unfortunately for that preconception, there is ample testimony from
climate scientists that Gore has taken great pains to understand climate
science better, and therefore comes astonishingly close to representing
fairly the scientific findings and what they mean. To put it bluntly: whatever
you think of Al Gore in other areas, he is not a typical politician in this
area, and therefore your mistrust is just plain wrong.

Gore’s remark was immediately seized on as yet another proof
of the ludicrousness of climate change predictions in general and Arctic sea
ice ones in particular. In 2007, there had been some concern among a few, as,
aided by a confluence of weather factors, Arctic sea ice area and extent had
fallen to a new low (since 1979) in early September. However, due to the
absence of these weather factors, area and extent at minimum had rebounded
somewhat in 2008 and 2009, and deniers pointed to these numbers and asked how
one could possibly believe that it would all be gone in five years. Loosed by
Watts and his ilk, “trolls” haunted serious or denial-countering sites jeering
at those who, like Neven (see one of my previous blog posts), were attempting
to follow the clear thread of the data.

A particular focus of their ire was the use of a “speculative”
volume model – in their telling, clearly not in anyone’s scientific mainstream. The fact that
the model had been refined and checked by physical sampling was of no
relevance, nor did deniers raise the point that, if accurate, it was a better
measure of what was going on.

And then, in September of 2010, area and extent turned
downward again – and volume took a major plunge. None of that was reflected on
Watts’ blog – it was just part of “natural variability.” By September of 2011, area
and extent had reached close to their 2007 lows, and volume continued to
decrease, while the only semi-troll on the Neven site tried to argue that even
if other areas melted, the Central Arctic Basin would take a long time to do
so, if ever.

By this time, a little sporting competition had developed,
with scientific models and enthusiastic amateurs filing their predictions for
this year’s minimum area and extent. In 2012, Watts finally abandoned his
perennial prediction that these would move back to pre-2007 levels – but he was still
on the high side, with a 4.7 mkm2 extent prediction. And, of course, there was
no indication in his blog that he was wrong in the slightest, or that there was
anything amiss.

And now here we are at the beginning of September, and all
previous records have been easily shattered. Extent is at 3.67 mkm2, and
probably will wind up below 3.5 mkm2. Area is already almost 20% below 2011 and
2007, and probably will wind up at 20% below. Volume is already 10% below 2011,
and will probably wind up 15%-20% below. The Central Arctic Basin is already
easily at a record low. And weather conditions have not been favorable for records
at all.

So what has Watts said and done? At first, Watts kept
pointing to the records that had not yet been broken. Then, he resorted to
comparing the largest measure of extent in 2012 to one of the smallest measures
in 2007. And now, apparently, he has ascribed this year to, in one commentator’s
pithy phrase, “natural unnatural variability” – the argument that this is a
once-in-heaven-knows-how-many-years occurrence. That is, when he is not simply
declining to talk about it at all. Neven at one point posted a comment on Watts’ blog
saying, sarcastically, “Hey, there’s nothing going on with Arctic sea ice,
right?” and Watts’ only response was to dismiss him as a troll.

When a Lie Is Exposed and No One Notices

But it has been the reaction of most of the world that makes
it very clear how much most of us have been affected by the lie. Since shortly
after the beginning of August, the Neven web site and Joe Romm at www.climateprogress.com have been
telling us this was coming and how serious it is. In fact, since at least 2010,
both have been telling us how serious the situation was. And so what was the
reaction of the world?

Well, in the US, I can find no major publication – or even
minor one – pointing out this was coming. When the records actually fell,
pretty much all in one week, a week before the end of August, no major
publication reported it until the very end of August, more than a week
after most of the record-setting.
Of those that have – Bloomberg BusinessWeek, NBC News, and US News and
World Report are a reasonable sample – none has come anywhere near
understanding the magnitude of the loss, nor the implications. Over the last
three years, only Joe Romm among major commentators has shown an appreciation
for the likelihood that this would happen. Only very recently did Paul Krugman
connect the dots between his reading of Joe and the implications for climate
change’s effect on the global economy. The rest of the news and commentary?
Just about nothing.

Meanwhile, in politics, only www.dailykos.com, a so-called “liberal” web
site (clearly, part of “them”) has paid this subject the attention it deserves,
and then only in the last half-month. We continue to see the spectacle of the
Republican party and the 46% of voters who support it denying that either
global warming or its human cause is settled science, and pledged to do even
less than is already being done to combat it. Abroad, the Australian Prime
Minister is threatened with being voted out of office primarily for having
pushed a clearly inadequate attempt to combat carbon emissions. Canada’s Harper
has persistently been quoted as believing that in the foreseeable future,
Arctic sea ice will not melt enough that shippers can bypass Canada’s Northwest
Passage. The powers that ring the Arctic Ocean are busy contemplating oil
drilling in the Arctic that would increase carbon emissions in future, and the
first vessels from Shell, an oil company, failed to start exploration this
summer only because they were not ready in time.

In other words, some form of belief in the lie is pervasive.
Either we believe that global warming isn’t happening, or that it isn’t
human-caused, or that Arctic sea ice has nothing to do with it, or that we don’t
need to do something about it, because it won’t happen or affect us in the near
future. And even when the data becomes overwhelming and visible in Arctic sea
ice, we don’t revisit or connect the dots.

How can this be? And how can we do better at detecting lies?

Doing Better

The first thing to notice about such lies is that they work
only if they become embedded in some way in “history.” Often, this happens when
the public notices an accusation but not its disproof, as witness the idea that
Al Gore claimed he invented the Internet, or John Kerry’s “swiftboating.” Or,
the lie simply becomes repeated long enough that those not paying attention
assume it’s true. To take a recent
example, the so-called Simpson-Bowles commission made no majority
recommendation at all – and yet, we hear politicians from both parties claiming
the contrary. A couple of years ago I was shocked, when attending a graduation,
to hear a prominent business/economics professor at Yale refer to 1933, “when
the Great Depression was starting.” Not only did this ignore the steady
unraveling since the great stock market crash of 1929, complete with
starving veteran Bonus Marchers in Washington; it also ignored the ways in
which revered figures played a role, with America refusing to forgive any of
its WWI loans, Winston Churchill clinging disastrously to a gold standard,
Andrew Jackson’s deep-sixing of a National Bank leading to a series of severe
recessions of which this was only the latest, and the failure to regulate
separation of bank and investment company – hence the junking of those FDR
regulations, mainly by business and Republicans, in the late 1990s. I have
written about similar “false memories,” as I perceive them, in the computer
industry.

And so, the work of doing better begins with combating the
lack of accurate “institutional memory.” This should not be as difficult as it
sounds, in business as in politics, because the particular person who has
something to gain from a particular version of the lie has often moved on by
five years down the line. It is therefore important, even if it seems not so,
to bring back the truth if it has been distorted, and to keep the truth alive
in your mind. It is important to
remember.

But that, it seems to me, is only half the task. We are, at times, constantly
bombarded by these lies. Those who
surround themselves with a cocoon of lies create a worldview and invite you in –
and even if you do not enter, it is very hard not to have second
thoughts or to begin thinking the same way. There’s a marvelous Mark Twain joke
about the man who hated his neighbors and started a rumor there was gold in
Hell. A little while later a friend stopped by and saw him packing to go there
himself. Why? the friend asked. Well, the man said, I got to thinking there
must be something in that rumor.

However, lies are crafted, piece by piece, as needed, and
the contradictions and seams begin to show more and more as you examine them.
The truth, by contrast, hangs together – the loose ends are those that have not
yet been fully investigated. And this is particularly true of scientific truth –
which we call scientific theory. Your job, as a layman, is to look at the
information provided and ask, what’s the model? How does it cover everything?
What does it predict in these situations? And only then do you ask, are those
predictions near reality, as far as you can tell? For example, ask, what is a
free market? And only after that do you ask, does that make sense to you? Does
it really seem to capture what happens to you in your work? What more is needed?

And finally, I should add that we should be humble about
connecting the virtues and vices of the person with whether something is lies
or the truth. Yes, there’s a connection, as in the old joke that noted that
once a person starts in on murder, eventually even Sabbath-breaking is
not beyond his capability for evil. But it’s not a simple connection. The
connection is more between the person’s ability to perceive reality and the
truth or between the person’s expertise and the subject at hand. Al Gore understands
climate science pretty well; but given a choice between his model and that of
James Hansen, I’ll start with Hansen first, even though Hansen’s politics is
alien to me.

It’s Just a Flesh Wound!

We laugh at the memorable Monty Python routine in which the
Black Knight, having lost most of his limbs, refuses to recognize any problem and
demands that our hero continue fighting – “It’s just a flesh wound!” And that
is precisely what Anthony Watts, James Inhofe, and the like are saying today –
and will probably continue to say, in one form or another, indefinitely.

However, I must point out that in this, at least, I and many
more like me have been able, even as laymen, to see through the lie. And I did
it more or less as I described above: refused to accept the false implanted
memories about Al Gore, refused to buy into the assertions of “us” vs. “them”, and
took some time to put together a model in layman’s terms for Arctic sea ice and
global warming in general, based on reflection on scientific papers as much as
or more than assurances by folks such as Neven and Joe Romm. And so, for more
than two years, I have been saying that this time was coming sometime between
2012 and 2015, that volume would turn out to be the key metric, and that
decline was exponential, not linear. And we’ve been in the middle of the
plausible, not on the outer fringe, as denier interpretations of scientific
conservatism would have you believe. So maybe it’s time for you to consider
applying this either to global warming – which is about as important as it gets
– or to ideas like data virtualization or agile marketing.
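
To see why “exponential, not linear” matters so much, here is a small Python sketch using invented numbers rather than real volume data; the two fits look similar early on, but reach near zero on very different schedules:

    import math

    v0 = 17.0  # hypothetical starting September volume, thousand km3

    for t in range(0, 16):
        linear = max(v0 - 0.7 * t, 0.0)          # lose a fixed 0.7 per year
        exponential = v0 * math.exp(-0.18 * t)   # lose ~18% of what is left per year
        print(t, round(linear, 1), round(exponential, 1))

    # Early on the two tracks look similar; by year 15 the linear fit still
    # shows about 6.5 left, while the exponential fit is already near 1.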

And one more thing:
once you’ve handled the lie, one effective way of combating it going forward
is simply, whenever possible, to nail a specific version of it that’s clearly
false. Not in the denier’s cocoon;
outside, in blog comments where all are welcome, or in conversations where it
is permitted to say, that’s not true. It is amazing how that kind of modest but
powerful statement gets across to the persuadable, where ad hominem argument
obeys a kind of Gresham’s Law and makes the reader see all as indistinguishably
bad.

Watts may be mortal, but lies are much more durable. It is
one of our tasks not to hope that the big questions will somehow not force us to
do something, but to make a good effort to perceive big lies, so that the truth
never quits, either. Because if the truth never quits, then there is some hope
that a big lie will be only a flesh wound, rather than the cause of a massive
human disaster. Happy Labor Day.

Wayne Kernochan

About Me

I have recently retired. Before retirement, I was a long-time computer industry analyst at firms like Aberdeen Group and Yankee Group, and before that a programmer at Prime Computer and Computer Corp. of America. Sloan/MIT MBA, Cornell Computer Science Master's, and Harvard college degrees. Used to play the violin, and have written unpublished books about personal finance, violin playing, and the relationship between religion and mathematics, as well as three plays, two musicals, a screenplay on climate change, short stories, and poetry. I intend to use this blog in future both to continue to enjoy the computing field and to pursue my interests in many other areas (e.g., climate change, history, issues of the day).