The Finite Improbability Calculator is a tool for exploring the very small probabilities encountered in applying some of the formulas in William Dembski's "No Free Lunch" to biological phenomena. Some basic functions are implemented, such as factorial, change of base, permutation, and combination. Further, several of the formulas found in section 5.10 of "No Free Lunch" are implemented.

I did this as an aid to my own analysis of Dembski's work, and realized that others could benefit from it as well. The routines are specifically made so that they handle very large and very small numbers without causing floating-point overflow or underflow errors.
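The trick for handling numbers far outside floating-point range is to work in logarithms. Here is a minimal sketch of the kind of routine involved (my own illustration of the technique, not the calculator's actual source):

```python
import math

def log10_factorial(n):
    # log10(n!) computed via the log-gamma function, so n can be huge
    return math.lgamma(n + 1) / math.log(10)

def log10_combination(n, k):
    # log10(C(n, k)) without ever forming the enormous intermediate factorials
    return log10_factorial(n) - log10_factorial(k) - log10_factorial(n - k)

def sci(log10_x):
    # render a log10 value as mantissa-and-exponent scientific notation
    exp = math.floor(log10_x)
    return f"{10 ** (log10_x - exp):.6f}e{exp:+d}"

# 1000! overflows a double, but its base-10 logarithm is a small number
print(sci(log10_factorial(1000)))
```

The same idea extends to permutations and to the tiny probabilities below: sums and products of logarithms never overflow or underflow where the raw values would.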

Dembski provides two basic equations for probabilities. The one for p_local is defined on NFL p.293. Both p_orig and p_config are calculated on the basis of what Dembski calls a perturbation probability. Dembski provides a variety of forms or approximations to a perturbation probability (NFL pp.297,299,300,301).

p_local calculations (NFL p.293)

p_local = (units in system * substitutions / total different units)^(units in system * copies)

What is interesting here is that the numbers for "substitutions" and "copies" are simply invented by Dembski, not referenced as the result of empirical study on the system in question. Yet the equation is highly sensitive to changes in these numbers. For the numbers provided by Dembski (50 units in the system, 4289 total different units, 5 copies, 10 substitutions), the resulting probability is 4.502871e-234 (all calculations done via the Finite Improbability Calculator). If we change "substitutions" to 11 instead of 10, the resulting probability is 1.003831e-223, a difference of about 11 orders of magnitude. If we change "copies" to 4 instead of 5, the resulting probability is 2.102769e-187, about 47 orders of magnitude different. This extreme sensitivity is something Dembski does not even note in his discussion.

The other numbers in the calculation would at first glance appear to be more stable. But the total number of units, 4289, is simply taken as the number of proteins which the E. coli genome is known to code for. There is no justification given for using this number in the context of flagellar construction. It is well known in developmental biology that not all proteins coded for are present or expressed within a cell at all times, yet this is exactly the sort of counterfactual assumption Dembski makes in deploying this number. If we assume a mere 10% of possible proteins are not present at the time of flagellar construction, the calculated probability changes to 1.246450e-222, or 12 orders of magnitude more likely. Even the number 50, for proteins used within the flagellum, is not beyond critical examination. There is no indication that this is the minimum number of proteins necessary for flagellar construction, just that this is the characteristic number seen in E. coli flagella. Change this to 49, and the probability rises to 1.481864e-231, or about 3 orders of magnitude more likely.
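These sensitivity claims are easy to check. Below is my own reconstruction of the p_local formula as given above, held in log space so the tiny values never underflow (the perturbed parameter values are the ones discussed in the text; 3860 is 90% of 4289, rounded):

```python
import math

def log10_p_local(units, substitutions, total_units, copies):
    # p_local = (units * substitutions / total_units) ** (units * copies),
    # computed as a base-10 logarithm so values near 1e-234 never underflow
    return units * copies * math.log10(units * substitutions / total_units)

baseline = log10_p_local(50, 10, 4289, 5)   # log10 of ~4.5e-234

perturbations = {
    "substitutions 10 -> 11": log10_p_local(50, 11, 4289, 5),
    "copies 5 -> 4":          log10_p_local(50, 10, 4289, 4),
    "total 4289 -> 3860":     log10_p_local(50, 10, 3860, 5),
    "units 50 -> 49":         log10_p_local(49, 10, 4289, 5),
}

print(f"baseline: log10(p) = {baseline:.2f}")
for name, lg in perturbations.items():
    # positive shift = that many orders of magnitude more probable
    print(f"{name}: log10(p) = {lg:.2f} (shift {lg - baseline:+.1f})")
```

Running this reproduces the figures quoted above: one-step changes in any of the four parameters move the result by anywhere from roughly 3 to 47 orders of magnitude.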

Dembski has failed to establish the biological relevance of his p_local calculation. He has overlooked the developmental aspect of the E. coli cell entirely. His invented parameters are not grounded in empirical research. The extreme sensitivity of his provided equation to changes in values of all parameters lends little confidence to the results. In no sense does he justify this calculation as providing an upper bound on the probability of even "random localization", as he must if this calculation is supposed to be relevant in any sense to the issue at hand. And, of course, there is no justification for the assumption that "random localization" is the sole relevant chance hypothesis to be considered.

Am I to understand that Dembski thinks that all proteins interact with the same binding affinity? That is a pretty ridiculously wrong assumption.

Also, does Dembski think that all proteins are expressed at equal concentrations in the bacterium at any given time? That is also fabulously wrong.

I also noticed that Dembski does not factor the volume of the cell into his calculation. His formula would give the same value for the flagellar components expressed in the bacterium at physiological concentrations as it would for 4900 individual proteins floating around a pool.

I also noticed a distinct similarity between Dembski's formula and a common situation used in probability questions in introductory statistics: a box is filled with pieces of paper with numbers written on them, and the student is asked to calculate the probability of a certain outcome when one or more pieces of paper are drawn from the box, with replacement. By "replacement", I mean that each time a piece is drawn, it is put back into the box before the next piece is drawn. Does Dembski really believe that a cell behaves in this manner? That the bacterium will "draw" 250 proteins from a pool at random (with replacement), test the entire combination for flagellar formation, and then, if it fails, put them all back and start over again?
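The urn analogy is easy to make concrete. A toy sketch in Python (my own illustration; the counts mirror the numbers discussed above but are placeholders, not biological data):

```python
import math

# Urn model: N distinct "protein" types in the box; draw n times with
# replacement; succeed only if every draw lands in the `wanted` subset.
# These counts are placeholders for illustration, not biological data.
N, n, wanted = 4289, 250, 50

log10_p = n * math.log10(wanted / N)   # log10 of (wanted / N) ** n
print(f"P(all {n} draws hit the wanted subset) ~ 10^{log10_p:.1f}")

# Note what the model leaves out: the box is reset after every draw and
# after every failed "test" of the whole hand, and nothing about
# concentration, binding affinity, or cell volume enters the calculation.
```

That a living cell should behave like this box of paper slips is precisely the assumption the formula smuggles in.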

MG> I sure hope that even though you don't know all my intentions, and even
MG> though a certain amount of contrivance regarding matter, energy, and the
MG> laws of physics are involved, you are still making the judgment that my
MG> posts are designed, thus confirming the utility of Dembski's Three Part
MG> Filter.

MV> Your posts have actually very few signs of design. But I am fascinated:
MV> can you describe how Dembski's three part filter can be used to determine
MV> that your postings are the result of intelligent design?

MG> Actually I think it would be far more instructive for you to describe
MG> how Dembski's filter would not be useful in determining that
MG> intelligent design was not the best mode of explanation for my
MG> postings, assuming that is what you think. It would be a good
MG> exercise in thinking out of the box for you. (But I won't hold my
MG> breath.)

I don't know that it is more "instructive", since those making the positive claim have the burden of proof. Mike's claim that Dembski's EF/DI has "utility" is a positive claim, and thus it is Mike who has the burden of proof here.

Does Mike take up his burden? Rather predictably, Mike attempts to shift the burden to others. This is simple abandonment of the claim. Mike apparently has no clue how to actually use Dembski's EF/DI, and rather than forthrightly admit this, Mike tries to distract others from recognizing this.

But Mike is not the only person for whom Dembski's EF/DI is simply too cumbersome to apply to real-world problems. Dembski himself has attempted only four applications of varying degrees of completeness in the period from 1996 to the present. Which reminds me of the following:

Dembski's criticism of Gell-Mann's "effective complexity" is far more apposite when applied to his own concept of "specified complexity". No one but Dembski has, to my knowledge, even attempted a calculation of the sort required by Dembski's description of his EF/DI. Hmm... Actually, I may be the only other person than Dembski to attempt a calculation following his EF/DI as it was described in "The Design Inference". I seem to recall a post here in t.o. some years back showing that solutions of the "travelling salesman problem" were examples of specified complexity.

So what would have to happen for Mike to become the very first person other than William Dembski and Dembski's critics to actually apply Dembski's EF/DI, and not simply assert that it is applicable?

Dembski lays out his "argument schema" for his somewhat revised EF/DI in "No Free Lunch" on pages 72-73. Mike should refer to it for the full specification of what has to happen for an analysis to match the technical requirements of the EF/DI. Failure to fully apply this framework is rampant, as analysis of Dembski's four examples shows.

First, observe an event. It is interesting that while Dembski says that "subject S learns that an event E has occurred", Dembski is fond of using hypotheticals instead of real-world events.

Second, generate a set {H} of chance hypotheses relevant to the production of event E. This seems to be a stumbling block, for one can note that failure is common in this regard. Fully 25% of Dembski's proffered calculations (one of the four) are notable for *not* including natural selection among the relevant chance hypotheses (see section 5.10 of "No Free Lunch").

Third, identify a "rejection function f" and "rejection region R" such that E is in R and R "is an extremal set of f". Even Dembski skipped this part in section 5.10 of "No Free Lunch". Don't forget the gammas and deltas discussed on p.72! This requires math, not handwaving.

Fourth, identify the "background knowledge K" that "explicitly and univocally identifies the rejection function f" from step (3). Again, this step is notable by how seldom it is actually deployed, as can be seen by its absence from the discussion in section 5.10 of "No Free Lunch".

Fifth, identify the "probabilistic resources" for E "to occur and be specified relative to S's context of inquiry". BTW, Mike, S is you in this discussion. And again, even Dembski omits this step from section 5.10 of "No Free Lunch".

Sixth, fix a significance level alpha so that events less probable than alpha remain improbable conditioned on each of the chance hypotheses in {H} even when the probabilistic resources of (5) are applied. This one requires some knowledge of probability and statistics, and thus may prove more difficult for Mike than it was for Dembski.

Seventh, confirm that the probability of the rejection region R is less than alpha for all of the chance hypotheses in {H}. Again, this requires actual math, not vague handwaving, and may prove somewhat difficult for Mike.

Step 8 is just a conclusion that E exhibits specified complexity. Mike has shown no problem in jumping to conclusions regardless of the lack of warrant for them, so assuming he makes it through the preceding steps, this one should pose no difficulty. In fact, this step is so easy that most of the "examples" cited by Dembski are composed entirely of the assertion that some phenomenon E exhibits specified complexity with no accompanying justification of any sort whatsoever. In the overwhelming majority of cases, no "calculation" of any kind is offered.
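For contrast, here is what the probabilistic core of steps (3) through (7) looks like when someone actually does the math, on a deliberately trivial hypothetical (a run of coin flips of my own invention, not one of Dembski's examples):

```python
from math import comb

# Toy event E: 100 tosses of a coin produce 90 heads.
# Chance hypothesis H: a fair coin (the only member of {H} here).
n, k = 100, 90

# Step (3): rejection function f = number of heads; rejection region
# R = {outcomes with f >= 90}, an extremal set of f containing E.
# Step (7): P(R | H) is the binomial tail probability.
p_R = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# Step (5): probabilistic resources -- say 10^9 comparable trials.
resources = 10 ** 9

# Step (6): significance level alpha, checked against the
# resource-adjusted probability.
alpha = 0.01
p_adjusted = min(1.0, p_R * resources)

print(f"P(R|H) = {p_R:.3e}, adjusted = {p_adjusted:.3e}")
print("reject chance" if p_adjusted < alpha else "chance survives")
```

Even this throwaway example involves more explicit calculation than most of the "applications" of the EF/DI on offer.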

If Dembski's EF/DI did have "utility" for some applications, it seems to me that someone somewhere in the six years that it has been available publicly should have picked it up and applied it to accomplish something non-trivial. Even if Mike successfully deployed the full EF/DI apparatus (an event that itself discourages breathholding), the end result (a conclusion that Mike's posts show "design" sensu Dembski) is trivial and would not support Mike's claim that Dembski's EF/DI has "utility" in any non-trivial sense.

There are other approaches to analysis of events based on algorithmic information theory that can do the useful, utilitarian tasks that Dembski talks about in making ordinary design inferences without the many drawbacks that critics have noted in Dembski's EF/DI apparatus. Wherever someone wishes to apply the EF/DI, they very likely would be better off using an alternative analytical tool. However, the alternatives do not lead to a conclusion, either deductively or inductively, of intelligent agent causation. So far, the only "utility" that has been demonstrated for Dembski's EF/DI is based not upon its "application to real-world problems", but rather in its very existence as a tool for Christian apologetics. The various failures to completely deploy the EF/DI seem to have no effect on its effectiveness in apologetics.
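To give a flavor of what I mean: algorithmic information content can be crudely estimated by how well a string compresses. The following is a toy sketch only (compression is a stand-in for Kolmogorov complexity, which is uncomputable, and the sequences here are made-up placeholders, not real data):

```python
import random
import zlib

def compressed_fraction(data: bytes) -> float:
    # Ratio of compressed to original size: low for highly patterned
    # (algorithmically simple) strings, higher for incompressible ones.
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
patterned = b"ATGATGATG" * 1000                                # highly regular
scrambled = bytes(random.choice(b"ATGC") for _ in range(9000)) # same alphabet

print(f"patterned: {compressed_fraction(patterned):.3f}")
print(f"scrambled: {compressed_fraction(scrambled):.3f}")
```

Such measures let one distinguish patterned from unpatterned data without any further inference to agency, which is exactly where they part company with Dembski's apparatus.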

I got excited for the thirty seconds it took me to read your posts. I wasn't aware of Dembski's calculator, but having now seen a little background, it appears to be an embarrassing oversimplification. Thanks for the post.