Why would the idea of falsification be too high a bar for business modelling? I would think it should be a requirement. When business leaders implement anything, they are putting lives and money at risk, often more so than pilots, doctors or engineers. No one would allow an aircraft, medical procedure, drug, bridge or building to be put into commercial practice unless it had passed the bar of falsification.

After Enron, BPR, dot.bomb, et al., why accept the slippery inductive reasoning of any fad or model that has not shown itself open to and capable of passing true falsification tests?

When Mendeleev crafted his periodic table, his model opened itself to falsification when he predicted the existence and properties of several new elements. That didn't confirm the model was correct, but when its predictions came true it passed from pseudo-science to science.

Perceived usefulness is not enough; a model should pass some measure of confirmation. Despite receiving “plenty of support” and “having utility,” the flat-earth model failed its falsification tests, even though most people felt “it represented reality.”

Even economists and their cagey models are subject to falsification tests.* Porter’s model became accepted precisely because it created conditions under which it could be proven wrong, and it held up. The trouble is that some of those conditions, as with Mendeleev’s table, are now shifting. It remains to be seen whether it continues to hold up.

BTW, your examples of falsification for your model are incorrect. You’ve phrased “flawed” alternative models as confirming your model. No good. I can be wrong all day long, but that doesn’t make you or anyone else right.

A true test for your model would be something like,
- "What does my model predict?"
- “Can I create real-life conditions where my model’s predictions may be wrong?”

If you cannot think of any, then you haven’t thought through the failings of other confirmed models. That’s why my first reaction to your blog chapter was a value chain analysis (a falsification test).

As the model's creator, you should seek out these tests, not chastise those who apply them. Because if you pass them, then you know you are really on to something. That's when the clever people at Almaden start to pay attention.

If your reply is something like, “How can that be practical for business modellers?” I’d respond, “How can it not be practical?” Otherwise the practice and its models never rise above pseudo-science, despite any reputed utility. No matter how many white swans you count, it never proves that there are no black swans.

*(Many people, many of them economists, would wince at your term “dismal science” for reasons other than what you think. It is seen as a racist term because of its origin in a famous essay in which a respected writer railed for the reinstatement of slavery, calling the counter-arguments the “dismal science.” Every now and then some pundit, usually a non-economist pretending to be one, will publicly use the term and receive a public thrashing for ignorant racist views. It’s in your best interest never to use that term again.)

This has been an interesting exchange, and I think I am done. In general, the points you've made seem a bit idealistic and academic. Is rigorous falsifiability possible in the domain of business modeling? We know that we can't fully test or prove software in the general case, and software is much more precise than abstractions like "value chain." And there have been plenty of things put into production by engineers and scientists that passed all the available tests of the time, and yet still failed.

The value chain view is only one perspective on a complex reality that also requires data modeling and systems modeling at a minimum, with matrix analysis to link the perspectives together. If you're going to critique my method then you need to consider that entire framework. Debating whether I have somehow misused Porter isn't that interesting to me, because the process model that I call a value chain is well within the mainstream of BPM at least. Now, if you were to find compelling evidence that Information Engineering-based enterprise architecture (Finkelstein/Martin as interpreted by Spewak) were harmful as a method, I would pay more attention, because matrixing of process, data, and system is at the heart of what I do.

I am not sure if you are talking about falsifiability of method or falsifiability of model. The two are different to me. Falsification of model is often possible; falsification of method is more difficult. I am puzzled by how I might "falsify" data or process modeling as methods, for example. They have their bases in mathematically proven algebras. And the primary way I know to falsify an *instance* of one of those methods is for a user to tell me, "that does not represent my reality." The methods give me syntactic correctness, but semantic correctness is beyond their scope.

The market is the final arbiter in my world, and it falsifies things quite satisfactorily. Sometimes people lose money. That's the way it goes. There is no certainty beyond death and taxes.

In this discussion, you have not indicated much awareness of or experience with the real-world practice of enterprise architecture modeling or the IT governance and service management literature. I'm quite comfortable that the book I have written and the articles on my weblog are well within the mainstream of those discourses. I have never claimed to be scientific in the pure sense; my only goal is to be useful to my peer practitioners. Mostly, I am just trying to describe reality, not predict it, using some very well-accepted techniques in my community.

If you are going to take the idealist academic perspective and insist on falsifiability, you are essentially taking on a very large body of practitioner literature, including ITIL, COBIT, ISO20000, the IT governance and portfolio management literature, probably the majority of the enterprise architecture literature, and much more. Are you calling it all "pseudoscience"?

I do monitor the allegedly more "rigorous" peer-reviewed research, being a member of both IEEE and ACM and a regular visitor to the local research university library. In general, there is little if any peer-reviewed work coming out that is useful to practitioners like myself, who are concerned with questions of large-scale IT operations and not with technical issues per se.

I did correspond with the head of a nonprofit organization dedicated to furthering the cause of value chain analysis, and he confirmed for me that service value chains are part of what they do. So I have both him and my author friend supporting what I've done. That's good enough for me.

Despite appearances here, I am very critical of both the form and content of my work. Recently a chief architect for a noted IT portfolio management vendor started some correspondence with me that has led me to question my view that it is possible to distinguish service request management from demand management. I will probably be publicly retracting that aspect of my framework. But your points unfortunately do not carry the same weight of specificity and industry peer credibility that his do.

Re: dismal science. It's a phrase with intuitive resonance, regardless of provenance. That's why it's persisted. Neither economists nor academics have any influence over me and my work in the private sector, so I don't perceive any "interest" in this matter.