A recent article in Physics Today discusses the search for extraterrestrial intelligence using optical detectors (OSETI). On Uncommon Descent, Dembski claims that OSETI shows how the explanatory filter is used in science. Since Robert Camp has already shown why such a claim is inappropriate for SETI, I would like to explore Dembski’s latest claim as it applies to OSETI.

I will quote from the article to show how OSETI mimics the explanatory filter only in the sense that it, too, can generate false positives. Ironically, Dembski quotes the same passage, which suggests that he accepts false positives for his explanatory filter, an acceptance that would render the filter useless.

OSETI is, like SETI, an attempt to detect intelligently designed signals, but unlike SETI, which focuses on narrow-band radio signals, OSETI relies on nanosecond optical pulses, which it claims are more likely to be generated by intelligent sources because of the lack of known natural mechanisms that would produce such pulses.

Because no known astrophysical source could put out a bright nanosecond optical pulse, some SETI searchers have concluded that looking for signals from technologically advanced aliens is more promising with optical telescopes than with radio telescopes

“If we find nanosecond pulses, we can’t lose,” says Horowitz. “If it’s not from an alien civilization, at least we will have discovered an astrophysical phenomenon that no one anticipated. Not a bad consolation prize.”

In other words, if nanosecond pulses are found, science will be in a ‘win-win’ situation, since either the pulses indicate intelligent design or they indicate a new astrophysical phenomenon. A design inference in OSETI, unlike one made by the Explanatory Filter, thus still leaves open a natural explanation.

Based on this article, Dembski claims that:

Dembski wrote:

These SETI researchers are therefore using optical telescopes as an explanatory filter.

The approach mimics the Explanatory Filter only to a limited extent, in the sense that the researchers are well aware that their approach can lead to false positives. This means that a ‘design inference’ by itself does not resolve the issue of apparent versus actual design, since we cannot exclude the real possibility of having missed a scientific explanation. In order to strengthen the ‘design inference’, these scientists, as I will show, add assumptions to their hypothesis which address such issues as means, motives, and opportunity.
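To make the false-positive risk concrete, the filter as commonly described can be rendered as a simple decision procedure. This is a hypothetical sketch, not Dembski’s formal statement, and the predicate names are mine:

```python
def explanatory_filter(explained_by_necessity, explained_by_chance, specified):
    """Hypothetical sketch of the Explanatory Filter as a decision procedure.

    The predicates can only test *known* mechanisms, so the final branch
    attributes to design anything specified that our current science cannot
    explain: an unknown natural mechanism therefore yields a false positive.
    """
    if explained_by_necessity:   # a law-like regularity accounts for it
        return "necessity"
    if explained_by_chance:      # a known chance process accounts for it
        return "chance"
    if specified:                # improbable *and* specified -> design
        return "design"
    return "chance"              # improbable but unspecified events default to chance

# A nanosecond optical pulse from an unanticipated astrophysical source:
# no known necessity or chance mechanism explains it, and it is specified,
# so the filter returns "design" even though the true cause is natural.
print(explanatory_filter(False, False, True))  # -> design
```

The false positive is baked into the structure: the last branch fires on our ignorance of mechanisms, not on any positive evidence about a designer.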

We should not underestimate the impact of reliability on the usefulness of the Explanatory Filter. Dembski is very clear:

Dembski wrote:

“On the other hand, if things end up in the net that are not designed, the criterion will be useless.”
Dembski, William, 1999. Intelligent Design: The Bridge Between Science &amp; Theology, p. 141.

This is because ID refuses to accept the need for adding additional assumptions to the hypothesis.
Let’s look at some relevant papers. I predict that we can quickly reject Dembski’s claim.

Camp quickly points to the differences between a design inference based on the Explanatory Filter and how science applies design inferences.

Camp wrote:

It is my intent to demonstrate that the analogy fails because, first, in ID the distinction drawn between necessity/chance and intelligence is a terminus, it is the goal and the end of the process. In forensics, cryptography, and archeology this distinction is merely an expedient without which the science itself would not take place. Second, although Dembski wishes to paint ID with a coat of science borrowed from these disciplines, the methodological locus between the two is not analogous. And third, the kinds of phenomena ID investigates are not comparable to those dealt with by SETI, forensics, cryptography, and archeology. ID phenomena are inaccessible to science.

Camp even accepts, for the sake of argument, that the Explanatory Filter is a legitimate approach, even though the evidence strongly supports that it is inherently unreliable, which would lead one to reject the EF approach ‘a posteriori’ as useless.

Camp wrote:

For the purposes of discussing the value of an analogy between ID and SETI (and other sciences), however, we can accept for the moment the legitimacy of the EF. It is my intent to demonstrate that the analogy fails because, first, in ID the distinction drawn between necessity/chance and intelligence is a terminus, it is the goal and the end of the process.

Camp points out that the assumptions and methodology behind SETI are quite different from the rarefied design inferences attempted by the Explanatory Filter.

Camp wrote:

It is obvious that this is something quite different from the assumption of intelligence behind an unexplained phenomenon. As with forensics, SETI investigation is a process that employs specific assumptions about the intelligence it investigates. SETI as a science is more than just an attempt to distinguish between necessity/chance and design. Cornell astrophysicist Loren Petrich makes this point clearly,

These reasons are very distinct from Dembski’s Explanatory Filter, which focuses on alleged unexplainability as a natural phenomenon; they are an attempt to predict what an extraterrestrial broadcaster is likely to do, using the fact that they live in the same kind of Universe that we do.

Let’s look at some relevant OSETI papers and websites to show how Dembski’s thesis quickly unravels.

With “Earth 2000” technology we could generate a directed laser pulse that outshines the broadband visible light of the Sun by four orders of magnitude. This is a conservative lower bound for the technical capability of a communicating civilization; optical interstellar communication is thus technically plausible.

In other words, with present technology we can create a directed pulsed laser that could be visible to other civilizations. OSETI thus assumes that there exist similarly or more advanced civilizations with an interest in being detected by others. In other words, we are already making assumptions about the motives and means of the designers, and we are constraining them to known technologies.
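The arithmetic behind the ‘four orders of magnitude’ claim can be sketched roughly. Every number below (pulse energy, aperture, wavelength, the Sun’s visible-band output) is an illustrative assumption in the spirit of the ‘Earth 2000’ argument, not a value taken from the paper:

```python
import math

# Rough order-of-magnitude sketch: how a present-day pulsed laser,
# beamed through a large telescope, can briefly outshine the Sun's
# visible light as seen from the target direction.  All values assumed.

pulse_energy_J = 1e6      # ~1 MJ pulsed laser
pulse_length_s = 1e-9     # nanosecond pulse
aperture_m     = 10.0     # ~10 m transmitting telescope
wavelength_m   = 1e-6     # ~1 micron light

peak_power_W = pulse_energy_J / pulse_length_s        # 1e15 W during the pulse

# Directivity gain of a diffraction-limited beam relative to an
# isotropic radiator: roughly (pi * D / lambda)**2.
gain = (math.pi * aperture_m / wavelength_m) ** 2     # ~1e15

effective_isotropic_power_W = peak_power_W * gain     # ~1e30 W

sun_visible_power_W = 1.5e26  # rough visible-band luminosity of the Sun

ratio = effective_isotropic_power_W / sun_visible_power_W
print(f"pulse outshines the Sun's visible output by a factor of ~{ratio:.0e}")
```

The point of the sketch is that the claim rests entirely on the directivity gain of a diffraction-limited beam, i.e. on what we know our own technology can do.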

and

Optical SETI versus Radio SETI

Several arguments exist for the choice to search for signals in the optical region of the electromagnetic spectrum over the radio portion. These reasons stem primarily from the benefits for another civilization to send a beacon or signal in the visible rather than in the microwave. Briefly, some of the reasons include:

Visible light-emitting devices are smaller and lighter than microwave or radio-emitting devices.

Again we are discussing claims about motives, and the assumptions are based on our current knowledge. No wonder the authors are well aware of the real possibility of a false positive. While for ID the design detection would be the end, for science it would be the beginning of additional research.

So how would we deal with rarefied life forms, in other words, life which is significantly different from us? How can we constrain the motives and means of such life forms? Expressing her skepticism, Carol Cleland points out that detecting life we do not know or cannot constrain remains an unresolved issue. In fact, as Wilkins and Elsberry have shown in their paper, this is the difference between ordinary and rarefied design.

Life-forms that would send signals would probably be very different from us, says Carol Cleland, an associate professor of philosophy at the University of Colorado. It all makes her slightly skeptical.

The paper is subscription only and rather expensive so I have to base this on the quote only.

Consider for a moment why they are looking for nanosecond impulses. There must be an infinite amount of potential phenomena that have “no known astrophysical source”. Are we to conclude an alien civilisation every time we come across a new phenomenon for which we do not know the source? Of course not. The reason is that man has developed laser technology that makes this particular phenomenon possible. So now if we were to observe this phenomenon we have some reason for adopting a human-like civilisation as an explanation.

Excellent point: we are looking for laser pulses because we ourselves have developed laser pulse technology.

From the quote it appears that despite this they would not dismiss the alternative type of explanation “an astrophysical phenomenon that no one anticipated”. So if the phenomenon occurred there would then be some further assessment of possible causes.

What role does the filter play in all this reasoning?

Excellent question. What role does the filter play in all this reasoning, and how is the possibility of false positives resolved? In other words, how do we estimate the probability of a design inference versus ‘we don’t know’? Unless we can estimate some probabilities for the design inference, we cannot reject the ‘we don’t know’ explanation. Both can in principle explain the observations, as both are based on our ignorance.

Take a case in which the prior probability is extremely low that a designer can effect the potential “design” being observed. (By this I do not mean that this is a generally usable method for evaluating cases, rather I am specifying that in this case that prior probability can be known. I do not mean that such prior probability can regularly be known.) Also assume that there is a rather high probability that something was missed in the steps of analyzing chance and necessity in the explanatory filter. (In other words that the “argument from ignorance” aspect actually may have an important case that the observer is ignorant of, and this is a high probability in this case.) In this case the Bayesean posterior probability that the “designer did it” is often lower than the posterior probability that the missed case is the explanation. Now considering cases in which the prior probability is unknown (a basic assumption of the normal application of the “explanatory filter”) the reasonableness of the EF is dependent on the actual prior probability, though unknown. If one has certain religious reasons, for example, of having differing views of that prior probability, then the result changes based on those views. The EF is not an objective methodology, and its “reliability” differs depending on precisely that prior probability.
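Gedanken’s Bayesian argument can be illustrated with a minimal sketch. All prior values below are assumed purely for illustration; once the filter has eliminated the known chance/necessity explanations, the two surviving hypotheses predict the observation equally well, so Bayes’ rule simply renormalizes the priors:

```python
def posteriors(prior_design, prior_missed):
    """Posterior probabilities of 'design' vs. 'a missed natural cause',
    given that the filter has eliminated all *known* chance/necessity
    explanations.  Both surviving hypotheses predict the observation
    equally well (likelihood ~1), so the posteriors are just the
    renormalized priors."""
    total = prior_design + prior_missed
    return prior_design / total, prior_missed / total

# Gedanken's case, with assumed illustrative numbers: a very low prior
# that a designer could effect the observed "design", and a high prior
# that some natural explanation was missed.
p_design, p_missed = posteriors(prior_design=1e-6, prior_missed=0.1)
print(p_design, p_missed)  # the missed-cause hypothesis dominates
```

With these assumed priors the ‘missed natural cause’ hypothesis wins overwhelmingly, and since the EF leaves the priors entirely unspecified, its verdict changes with whatever priors the user brings to it, which is exactly Gedanken’s point about its lack of objectivity.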

Gedanken’s critiques on ISCID are very insightful.

The practical usage of the EF actually smuggles this concept of doing the comparison of prior knowledge of structure of designer action into the process (even though explicitly eschewed by the formal steps). If one codified this smuggling into formal process steps (made it explicit) then one may develop a procedure that does not suffer from the limitations that Eric is referring to. (But of course such a codified version may not be applicable in the areas of interest to those in the ID movement, such as historical investigations of biology.)

In other words, the major difference between the EF and how science applies design inferences is that science makes explicit assumptions based on prior knowledge of the structure of designer actions while the EF explicitly disclaims that such assumptions are needed. The reason why ID takes this flawed step is that the design of interest is not really open to such assumptions.

It seems that Dembski’s claim that OSETI applies the explanatory filter could benefit from a more rigorous analysis showing that OSETI indeed applies the EF methodology and does not smuggle in any information. ID should be able to stand by itself and should not depend on riding the coattails of science, especially since ID activists have argued that ID, unlike Methodological Naturalism, does not reject a design inference ‘a priori’. I guess that Dembski is implicitly admitting that science, and by logical extension methodological naturalism, does apply design inferences quite reliably.