Intelligent design is a scientific theory that argues that the best explanation for some natural phenomena is an intelligent cause, especially when we find certain types of information and complexity in nature which, in our experience, are caused by intelligence.

… topics …

1. ID uses a positive argument based upon finding high levels of complex and specified information.

2. ID is NOT a theory about the designer or the supernatural

3. ID is NOT a theory of everything

He then goes on to say what it is:

1. ID uses a positive argument based upon finding high levels of complex and specified information

The theory of intelligent design begins with observations of how intelligent agents act when they design things. Human intelligence provides a large empirical dataset for studying the products of the action of intelligent agents. This present-day observation-based dataset establishes cause-and-effect relationships between intelligent action and certain types of information.

William Dembski observes that “[t]he principle characteristic of intelligent agency is directed contingency, or what we call choice.”15 Dembski calls ID “a theory of information” where “information becomes a reliable indicator of design as well as a proper object for scientific investigation.”16 A cause-and-effect relationship can be established between mind and information. As information theorist Henry Quastler observed, the “creation of new information is habitually associated with conscious activity.”17

The most commonly cited type of “information” that reliably indicates design is “specified complexity.” As Dembski writes, “the defining feature of intelligent causes is their ability to create novel information and, in particular, specified complexity.”18 Though the terms were not originally coined by an ID proponent, Dembski suggests that design can be detected when one finds a rare or highly unlikely event (making it complex) which conforms to an independently derived pattern (making it specified). ID proponents call this complex and specified information, or “CSI.” Stephen Meyer explains that in our experience, only intelligent agents produce this type of information. More.
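Dembski’s two-part test (improbable under a chance hypothesis = complex; matching an independent pattern = specified) can be sketched in a few lines. This is a toy illustration, not any ID proponent’s actual code; the 500-bit threshold is Dembski’s universal probability bound of roughly 1 in 10^150:

```python
import math

UNIVERSAL_BOUND_BITS = 500  # Dembski's universal probability bound, ~1 in 10^150


def surprisal_bits(n_outcomes, length):
    """Shannon surprisal of one specific outcome of `length` symbols drawn
    from an alphabet of `n_outcomes`, under a uniform chance hypothesis."""
    return length * math.log2(n_outcomes)


def infers_design(bits, specified):
    """Toy CSI rule: the event must match an independently given pattern
    (specified) AND be improbable enough to exceed the universal bound."""
    return specified and bits > UNIVERSAL_BOUND_BITS


# A specific 250-character English sentence, under a uniform null over
# 27 symbols (26 letters plus space):
bits = surprisal_bits(27, 250)
print(round(bits))                          # 1189 bits
print(infers_design(bits, specified=True))  # True
```

A ten-character string scores only about 48 bits and would fall well short of the threshold, which is the intuition behind requiring “high levels” of complexity.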

Note: It’s depressing that so much “opposition” to the notion of design in the universe/life forms comes from Jesus-hollering academics who say things like “Well, that would make God responsible for bad design!”

I (O’Leary for News) wrote about that in “Here’s one bad reason for rejecting ID,” pointing out that when speaking to Moses, God takes responsibility for things that don’t work. (Ex. 4:11) These facts cannot be used as an argument against divine authorship or involvement by anyone claiming to operate within the Judaeo-Christian tradition.

So far as I am concerned, any Christian academic using such arguments should rightly be suspected of not actually knowing, caring about, or even taking seriously what the Bible says. Of course, many theistic evolutionists/Christian Darwinists probably do not know, care about, or even take seriously what the Bible says about anything if it conflicts with current fashion. But they are only allowed to openly say that about topics like Adam and Eve, about whom they make silly jokes. If they start saying that they don’t think Moses ever really talked with God or reported what he said accurately, why then … why then they might be asked just what their issues really are, and those issues won’t turn out to be “information theory” or “specified complexity.” Hence all the evasion and fancy dancing.

But don’t expect serious questions to be asked any time soon. Too many people are complicit now.

At any rate, the issues raised by thinking atheists who don’t work for lobbies are far more honest.

85 Responses to ID theory … in one handy article

Intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?
— Wm. Dembski

Yes, they can.

Most, if not all, anti-IDists try to force any theory of intelligent design to say something about the designer and the process involved BEFORE it can be considered scientific. This is strange because in every usable form of design detection in which there isn’t any direct observation or designer input, it works the other way, i.e. first we determine design (or not) and then we determine the process and/or designer. IOW any and all of our knowledge about the process and/or designer comes from first detecting and then understanding the design.

IOW reality dictates that the only possible way to make any determination about the designer(s) or the specific process(es) used, in the absence of direct observation or designer input, is by studying the design in question.

If anyone doubts that fact, then all you have to do is show me a scenario in which the designer(s) or the process(es) were determined without designer input, direct observation, or study of the design in question.

If you can’t, then shut up and leave the design detection to those who know what they are doing.

This is a virtue of design-centric venues. It allows us to neatly separate whether something is designed from how it was produced and/or who produced it (when, where, why):

“Once specified complexity tells us that something is designed, there is nothing to stop us from inquiring into its production. A design inference therefore does not avoid the problem of how a designing intelligence might have produced an object. It simply makes it a separate question.”
— Wm. Dembski, No Free Lunch, p. 112

Stonehenge- design determined; further research to establish how, by whom, why and when.

Nazca Plain, Peru- design determined; further research to establish how, by whom, why and when.

Puma Punku- design determined; further research to establish how, by whom, why and when.

Any artifact (archeology/anthropology)- design determined; further research to establish how, by whom, why and when- that is, unless we have direct observation and/or designer input.

Fire investigation- if arson is determined (i.e. design); further research to establish how, by whom, why and when- that is, unless we have direct observation and/or designer input.

An artifact does not stop being an artifact just because we do not know who, what, when, where, why and how. But it would be stupid to dismiss the object as being an artifact just because no one was up to the task of demonstrating a method of production and/ or the designing agent.

And even if we did determine a process by which the object in question may have been produced, it does not follow that it was the process actually used.

As for the people who have some “God phobia”:

Guillermo Gonzalez tells AP that “Darwinism does not mandate followers to adopt atheism; just as intelligent design doesn’t require a belief in God.”

(As a comparison, one need look no further than abiogenesis and evolutionism. Evolutionitwits make those separate questions even though life’s origin bears directly on its subsequent diversity. And just because it is a separate question does not hinder anyone from trying to answer either or both. Forget about a process, except for the vague “random mutations, random genetic drift, random recombination culled by natural selection”. And as for a way to test that premise, “forgetaboutit”.)

Intelligent Design is about the DESIGN, not the designer(s). The design exists in the physical world and as such is open to scientific investigation.

All that said we have made some progress. By going over the evidence we infer that our place in the cosmos was designed for (scientific) discovery. We have also figured out that targeted searches are very powerful design mechanisms when given a resource-rich configuration space.

Intelligent Design is the study of patterns in nature that are best explained as the result of intelligence. — William A. Dembski

For example, when Meyer says that “in our experience, only intelligent agents produce this type of information [i.e. CSI]”, is he talking about CSI under a null hypothesis of equiprobability, or under all relevant non-design hypotheses? If it’s the latter, then how do we determine which hypotheses are relevant?

Since Meyer is talking about our experience, presumably we should know how we go about making CSI assessments. So maybe the OP author and the other participants in this forum can tell me how they personally choose their null hypotheses when determining whether something exhibits CSI.

… I don’t consciously go through a bunch of mathematical calculations to discern that it was brilliantly designed. Much the same with the bacterial flagellum, which has an energy conversion efficiency of nearly 100% and rotates at 1,500 rps, far faster than even the fastest Formula 1 race car engine …

To me it is beyond ludicrous for Darwinists to insist that something that far exceeds a Formula 1 race car in terms of energy efficiency and operational capacity is best explained as a chance assemblage of parts, especially when …

“There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation of such a vast subject.”
James Shapiro – Molecular Biologist

I don’t know about you Robb, but it is not rocket science for me to discern brilliant design when I see it:

Souped-Up Hyper-Drive Flagellum Discovered – December 3, 2012
Excerpt: Get a load of this — a bacterium that packs a gear-driven, seven-engine, magnetic-guided flagellar bundle that gets 0 to 300 micrometers in one second, ten times faster than E. coli.
If you thought the standard bacterial flagellum made the case for intelligent design, wait till you hear the specs on MO-1 …
Harvard’s mastermind of flagellum reverse engineering, this paper describes the Ferrari of flagella.
“Instead of being a simple helically wound propeller driven by a rotary motor, it is a complex organelle consisting of 7 flagella and 24 fibrils that form a tight bundle enveloped by a glycoprotein sheath…. the flagella of MO-1 must rotate individually, and yet the entire bundle functions as a unit to comprise a motility organelle.”
To feel the Wow! factor, jump ahead to Figure 6 in the paper. It shows seven engines in one, arranged in a hexagonal array, stylized by the authors in a cross-sectional model that shows them all as gears interacting with 24 smaller gears between them. The flagella rotate one way, and the smaller gears rotate the opposite way to maximize torque while minimizing friction. Download the movie from the Supplemental Information page to see the gears in action.
http://www.evolutionnews.org/2.....66921.html

I wonder why evolutionists such as Coyne, P.Z. Myers, Fox and Liddle et al. studied the ordered nature of the universe, what we call ’empirical science’ (however currently hobbled by atheist bigotry), instead of just the statistics of randomness?

All very puzzling. Taking minds ‘full of chaos’, viz nonsense, to institutes of higher learning, in order to have them informed about the ordered nature of the universe, then making careers out of rubbishing science in favour of chaos!

Hey R0bb, AGAIN, if you don’t like it, all YOU have to do is demonstrate that nature, operating freely, can produce what we call CSI.

If you’d like to see an example of nature producing CSI, you can read Winston Ewert’s article here. Under a uniformly distributed null, he calculates over a million bits of CSI for the given natural pattern.

Which brings us back to my questions: Are you talking about CSI under a uniformly distributed chance hypothesis (which seems to be the way that you personally calculate CSI — you just “count the bits”), or under all relevant hypotheses? If the latter, then how do you go about determining which hypotheses are relevant?

If you’d like to see an example of nature producing CSI, you can read Winston Ewert’s article here. Under a uniformly distributed null, he calculates over a million bits of CSI for the given natural pattern.

Is that really what you think the article says?

And isn’t it up to the anti-IDists to tell us what their relevant hypotheses are? And what happens if there aren’t any such relevant hypotheses?

If you’d like to see an example of nature producing CSI, you can read Winston Ewert’s article here. Under a uniformly distributed null, he calculates over a million bits of CSI for the given natural pattern.

Is that really what you think the article says?

Of course that’s what the article says, unless I’m hallucinating this part:

A first hypothesis to consider for this image is that it was generated by choosing uniformly over the set of all possible gray-scale images of the same size. The image is 795 by 658 pixels with 256 possible levels of gray. This gives us 2^4,191,240 possible images. Expressed in terms of Shannon information that is 4,191,240 bits. Using the formula for specified complexity given in the essay “Specification,” we obtain a result of approximately 1,068,017 bits.

Which gives the lie to the oft-repeated claim that “in our experience, only intelligent agents produce” CSI. Everything has CSI under some null hypotheses and lacks CSI under other null hypotheses.
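The quoted figures are just the Shannon information of a uniform null over all gray-scale images of a given size: width × height × log2(levels). A quick sketch of the arithmetic (note that 795 × 658 × 8 gives 4,184,880 bits, while the quoted 4,191,240 corresponds to 795 × 659, so one of the transcribed numbers appears to be a slip):

```python
import math


def uniform_image_bits(width, height, levels=256):
    """Shannon information of one specific image under the null hypothesis
    that every same-sized gray-scale image is equally probable:
    log2(levels ** (width * height)) = width * height * log2(levels)."""
    return width * height * math.log2(levels)


print(uniform_image_bits(795, 658))  # 4184880.0
print(uniform_image_bits(795, 659))  # 4191240.0 -- matches the quoted total
```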

Joe:

And isn’t it up to the anti-IDists to tell us what their relevant hypotheses are?

So when Dembski says “relevant chance hypotheses”, do you interpret that to mean “hypotheses that anti-IDists consider relevant”? If so, I’m curious how you arrived at that interpretation. If, say, detectives want to use Dembski’s method to determine whether a fire was intentionally lit, do they have to contact anti-IDists in order to determine the relevant chance hypotheses?

Although it’s somewhat OT, there’s a reason that I’m asking how to determine which chance hypotheses are relevant. As Dembski teaches, and Winston Ewert has emphasized in his recent articles at evolutionnews.org, we have to clear the field of all relevant chance hypotheses in order to infer design. But rarely does any IDist even attempt to do this when using Dembski’s design detection method. Almost always, a hypothesis of equiprobability is assumed, usually tacitly, with no attempt to justify the rejection of all other chance hypotheses.

Ewert tries to do it the right way in his response to Elizabeth Liddle’s challenge. That is, he actually considers more than one null hypothesis:

– Hypothesis 1: The pattern was produced by random sampling according to a uniform probability distribution.

– Hypothesis 2: The pattern was produced by random sampling according to a probability distribution, which we’ll call D, that matches that actual distribution of the various pixel colors in the given image.

– Hypothesis 3: The pattern was produced by a Markov process with a transition matrix, which we’ll call M, that matches the actual distribution of transitions in the given image.

– Hypothesis 4: The pattern was produced by a deterministic process that was guaranteed to yield the given pattern.

Ewert performs CSI calculations for the first three hypotheses, but dismisses the fourth because we have no evidence that such a process may have been operating. But what evidence do we have for hypotheses 2 and 3? The only evidence of processes that are characterized by distribution D or transition matrix M is the event itself. But if the event counts as evidence for hypotheses 2 and 3, then it counts as evidence for hypothesis 4 also.
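The dependence of the bit count on the choice of null can be made concrete. Here is a hypothetical toy scoring (not Ewert’s actual code) of a short symbol string under nulls analogous to hypotheses 1–3: uniform, an empirical distribution D fit to the event, and a Markov transition matrix M likewise fit to the event:

```python
import math
from collections import Counter


def bits_uniform(seq, alphabet_size):
    """Surprisal under hypothesis 1: i.i.d. uniform over the alphabet."""
    return len(seq) * math.log2(alphabet_size)


def bits_empirical(seq):
    """Surprisal under hypothesis 2: i.i.d. sampling from the sequence's
    own symbol frequencies (distribution D is fit to the event itself)."""
    freq = Counter(seq)
    n = len(seq)
    return -sum(math.log2(freq[s] / n) for s in seq)


def bits_markov(seq):
    """Surprisal under hypothesis 3: a first-order Markov chain whose
    transition matrix M is likewise estimated from the event itself."""
    pairs = Counter(zip(seq, seq[1:]))
    prev = Counter(seq[:-1])
    freq = Counter(seq)
    bits = -math.log2(freq[seq[0]] / len(seq))  # first symbol: empirical freq
    bits -= sum(math.log2(pairs[(a, b)] / prev[a]) for a, b in zip(seq, seq[1:]))
    return bits


s = "ab" * 20  # a highly patterned 40-symbol string
print(bits_uniform(s, 4))  # 80.0 bits under the uniform null
print(bits_empirical(s))   # 40.0 bits once D is fit to the string
print(bits_markov(s))      # 1.0 bit once M is fit -- the "CSI" evaporates
```

The same event scores 80 bits, 40 bits, or 1 bit depending solely on which chance hypothesis is deemed relevant, which is exactly the point in dispute.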

Ewert is thus inconsistent in his determination of which hypotheses are relevant. So where can we go to find examples of making this determination the right way?

This is one of several modeling choices in Dembski’s method that seem to be quite ad hoc, allowing us to reach the conclusion that we want to reach. If there are IDists who dispute this criticism, I invite them to post any evidence that Dembski’s method yields the same result when applied independently by different people.

An arbitrary image makes this difficult, as we cannot determine what natural processes are operating to generate the chance hypothesis. The best that we can do is postulate processes similar to those which have been observed. That’s what the first three hypotheses tested here did.

You are correct; we’ve got no evidence for any of the hypotheses in my article. The point of the article is that we can’t generate the relevant chance hypotheses without doing due diligence in investigating the natural processes in operation. So I’m not doing proper hypothesis generation there.

For the first three hypotheses, we could quite easily believe that such a process existed. We have precedents with a uniform distribution, or a biased distribution, or a transitional distribution. The precedent provides some minimal evidence for those hypotheses. The same cannot be said for the deterministic hypothesis. We don’t have precedent for processes which deterministically produce complicated images like that.

For the first three hypotheses, we could quite easily believe that such a process existed. We have precedents with a uniform distribution, or a biased distribution, or a transitional distribution. The precedent provides some minimal evidence for those hypotheses.

But your second and third hypotheses are not merely a biased sampling process and a Markov process. They are, respectively, a biased sampling process with distribution D and a Markov process with transition matrix M. There are no precedents for processes with those distributions, so the only evidence for them is the event itself.

The same cannot be said for the deterministic hypothesis. We don’t have precedent for processes which deterministically produce complicated images like that.

There are precedents for deterministic processes, just like there are precedents for biased sampling and Markov processes. Whether there are precedents for deterministic processes that produce complicated images “like that” depends on what you mean by “like that”.

Again, does the event itself count as evidence for the hypothesized process? If so, then why does it not count as evidence for hypothesis 4?

The point I’m trying to make is the ad hoc nature of applications of Dembski’s method. For example, the image could have had a bit depth of 24, or a higher resolution, or you could have chosen a better compression algorithm than PNG, and the numbers would have come out differently, perhaps with all three hypotheses meeting the CSI threshold.
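The sensitivity to encoding choices is easy to demonstrate: the very same data yields different compressed-size figures, and hence different compressibility-based specification numbers, depending on nothing more than the compressor settings. A minimal sketch using zlib as a stand-in for the PNG choice mentioned above:

```python
import zlib

# The same highly regular pattern, compressed with different settings.
# Any specification measure based on compressed size inherits this
# arbitrariness: change the encoder and the bit count changes.
data = bytes(range(256)) * 64  # 16 KiB of perfectly periodic data

for level in (0, 1, 9):
    compressed = zlib.compress(data, level)
    print(level, 8 * len(compressed), "bits")

# Level 0 stores the data uncompressed (plus header overhead), while
# level 9 squeezes the same pattern into far fewer bits.
```

Swap in a different codec, bit depth, or resolution and the “complexity” of an unchanged pattern moves again, which is the ad hoc degree of freedom being criticized.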

Similar points have been made about Dembski’s Caputo example, where he seems to draw arbitrary lines in order to come up with the parameters of his calculation.

As far as doing due diligence in investigating the natural processes in operation, how do we balance that with Dembski’s assertion that “unknown chance hypotheses (and the unknown material mechanisms that supposedly induce them) have no epistemic force in showing that we are wrong”? Is your first hypothesis a “known” hypothesis? If so, why can’t we infer design from that hypothesis alone? What does it even mean for a hypothesis to be “known”? Aren’t hypotheses explanations that we invent, which may or may not have precedent?

The image of the ash bands from the glacier [as I duly identified almost immediately], as an image amenable to digital media, is designed, but I hardly think that is what you meant to say.

What you are trying to do is to eliminate the SPECIFICITY constraint in the discussion and substitute raw complexity. There is no significant constraint on the ash bands that forced them to be this way rather than that, on any independent grounds; we simply have complexity resulting from the happenstance of ash falls [a chaotic process] and snowfall.

The ash bands carry out no function dependent on specific configuration, and are complex but not specified in the relevant sense.

F/N: Onlookers, it is rabbit trails like this which in material part led me to focus on functionally specific complex organisation and associated information. If there is not an identifiable function depending on fairly specific configuration, we do not have isolated islands of function and a needle-in-an-astronomical-haystack search challenge to deal with. The IMAGE, to be faithful to the original and amenable to display mechanisms, has rather exacting constraints. The ash banding has no such functional requisites. It is complex but not functionally specific and organised. The information in the relevant sense lies in the image, not in the pattern dependent on the chaotic dynamics of volcanoes and weather. Such could be considerably different and make no practical difference to the occurrence of ash banding. Or, here on this volcanic island, the layering and deposition of rocks due to ashfall, rain, ash flows (pyroclastic flows) and mud flows (lahars if you will). KF

As I have said, the most interesting thing is the behavior of the anti-ID people. Never a direct answer to a question and a constant avoidance of the obvious. Always a diversion to the irrelevant. Constant harping on the meaningless. Pseudo-sophistication in the use of technical terms.

For the new trolls there is always the ad hominem before they inevitably disappear.

What you are trying to do is to eliminate the SPECIFICITY constraint in the discussion and substitute raw complexity

Where in the world did you get that idea?

The ash bands carry out no function dependent on specific configuration, and are complex but not specified in the relevant sense.

By now you know or should long since know this.

So, why are you insisting on setting up and knocking over a strawman?

It was Ewert, not I, who based the specificational resources on compressibility rather than function. If you consider this to be strawmanning, then you should take it up with him. I’m sure he’ll respond to your accusation with the penitence it deserves.

R0bb, not being a math geek, I tend to follow what the empirical evidence can tell me about a hypothesis. For instance, from what I can gather, I believe you are holding that random configurations of material particles can generate functional information fairly easily, despite not having any empirical evidence of even one instance of your belief:

But R0bb, even though this is certainly not good news for the person who would want to toe the neo-Darwinian line of reductive materialism (i.e. random configurations of material particles generating functional information), the problem, from an empirical-evidence point of view, gets much worse for the reductive materialist. You see, R0bb, it is now found that quantum entanglement/information resides in molecular biology on a massive scale, in every DNA and protein molecule! For instance:

Coherent Intrachain energy migration at room temperature – Elisabetta Collini and Gregory Scholes – University of Toronto – Science, 323, (2009), pp. 369-73
Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state.
http://www.scimednet.org/quant.....d-protein/

Where this creates an insurmountable problem for the reductive materialism of neo-Darwinism is that quantum entanglement falsifies the reductive materialism, upon which neo-Darwinism is based, as to being true:

Quantum theory survives latest challenge – Dec 15, 2010
Excerpt: Even assuming that entangled photons could respond to one another instantly, the correlations between polarization states still violated Leggett’s inequality. The conclusion being that instantaneous communication is not enough to explain entanglement and realism must also be abandoned.
This conclusion is now backed up by Sonja Franke-Arnold and colleagues at the University of Glasgow and the University of Strathclyde, who have performed another experiment showing that entangled photons show stronger correlations than allowed for particles with individually defined properties – even if they would be allowed to communicate constantly.
– per physics world

Looking Beyond Space and Time to Cope With Quantum Theory – (Oct. 28, 2012)
Excerpt: To derive their inequality, which sets up a measurement of entanglement between four particles, the researchers considered what behaviours are possible for four particles that are connected by influences that stay hidden and that travel at some arbitrary finite speed.
Mathematically (and mind-bogglingly), these constraints define an 80-dimensional object. The testable hidden influence inequality is the boundary of the shadow this 80-dimensional shape casts in 44 dimensions. The researchers showed that quantum predictions can lie outside this boundary, which means they are going against one of the assumptions. Outside the boundary, either the influences can’t stay hidden, or they must have infinite speed. … The remaining option is to accept that (quantum) influences must be infinitely fast …
“Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” says Nicolas Gisin, Professor at the University of Geneva, Switzerland …
– per science daily

Now, R0bb, let’s walk through this. Quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints, is now found in molecular biology on a massive scale. Yet how can the quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy) ’cause’ when the quantum entanglement ‘effect’ falsified material particles as its own ‘causation’ in the first place? (A. Aspect; Zeilinger) Appealing to the probability of various configurations of material particles, as Darwinists do, simply will not help, since a timeless/spaceless cause must be supplied, which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any space-time constraints, one is forced to appeal to a cause that is itself not limited to time and space! Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments about various ‘special’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply, since the cause is not within the material particles in the first place.

supplemental notes:

Encoded ‘classical’ information, such as what Dembski and Marks demonstrated the conservation of, such as what we find encoded in computer programs, and, yes, such as we find encoded in DNA, is found to be a subset of ‘transcendent’ (beyond space and time) quantum entanglement/information by the following method:

This following research provides solid falsification for the late Rolf Landauer’s decades-old contention that ‘information is physical’ (merely ‘emergent’ from a material basis), since he believed it always required energy to erase it:

Quantum knowledge cools computers: New understanding of entropy – June 2011
Excerpt: No heat, even a cooling effect;
In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”http://www.sciencedaily.com/re.....134300.htm

Moreover quantum information is now held to be ‘conserved’:

Quantum no-hiding theorem experimentally confirmed for first time – March 2011
Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed.
http://www.physorg.com/news/20.....tally.html

Quantum no-deleting theorem
Excerpt: A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist.
– per wikipedia

Also of interest:

The Unbearable Wholeness of Beings – Steve Talbott
Excerpt: Virtually the same collection of molecules exists in the canine cells during the moments immediately before and after death. But after the fateful transition no one will any longer think of genes as being regulated, nor will anyone refer to normal or proper chromosome functioning. No molecules will be said to guide other molecules to specific targets, and no molecules will be carrying signals, which is just as well because there will be no structures recognizing signals. Code, information, and communication, in their biological sense, will have disappeared from the scientist’s vocabulary.
… Rather than becoming progressively disordered in their mutual relations (as indeed happens after death, when the whole dissolves into separate fragments), the processes hold together in a larger unity.
http://www.thenewatlantis.com/.....-of-beings

It is also very interesting to note, in Darwinism’s inability to explain the finding of transcendent, ‘non-local’, quantum information within molecular biology, information that is not reducible to a material basis in any way, shape or form, that Theism has always postulated a transcendent, eternal, component to man that is not part of this temporal realm. i.e. Theism has always postulated a ‘living soul’ for man.

Verse and Music:

Genesis 2:7
“And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul.”

It seems to me that there is a dishonesty and an agenda on the part of those who oppose ID in their continuous failure to distinguish between intelligent causation (whether agnostic theism or panspermia) and the theistic implications of intelligent causation (which theistic implications are ALSO distinct from the religious implications of such agnostic theism).

Scientific evidence that points to something being authored by intelligence is not the same thing as theistic implication (however such an inference COULD be made in later steps in a cumulative case…just as religious implications could be made after the cumulative case for agnostic theism is made).

ID threatens atheism because they know what comes next in the cumulative case… perhaps we should show them the cumulative steps to the God of Abraham and really ruffle some feathers. Question everything.

…this is certainly not good news for the person who would want to toe the neo-Darwinian line of reductive materialism (i.e. random configurations of material particles generating functional information); the problem, from an empirical-evidence point of view, gets much worse for the reductive materialist.

Processes of random generation never organize and arrange things, because they do not put things in order; order is essentially the opposite of randomness. And so comes the problem for the materialist: randomness never produces useful information, and this claim is open to falsification.

It seems to me that there is a dishonesty and an agenda on the part of those who oppose ID when they continually fail to make a distinction between intelligent causation (whether agnostic theism or panspermia) and the theistic implications of intelligent causation (which theistic implications are ALSO distinct from the religious implications of such agnostic theism).

You guys are so quick with your accusations of dishonesty! I (honestly) think that the two are not so easy to separate. That’s because I (honestly) believe that just to offer “intelligence” or “design” as an explanation without saying something about why or how is not an explanation any more than saying “chance” is an explanation without saying how would be an explanation. I might be wrong – but I am not being intentionally misleading or beside the point.

In Lizzie’s glacier picture there isn’t any specification and there isn’t any CSI. And only morons think that science is conducted using pictures alone, especially pictures not showing the entire scene…

There have been long discussions of the term “CSI” without any settled feeling of confidence about it. That is why FCSI (or FSCI) was introduced: to delineate those sub-cases where there is a functional connection between one set of complex information and another.

I have recently been reading and watching a lot of videos on fossils. There are things called trace fossils, which are mainly just squiggles in a rock that are supposed to represent some organism moving about. Then there are much more graphic traces which seem to be the outline of the organism, with some indentations from the actual organism giving its shape. Some have more detail.

They are like snapshots of the organism but not the organism itself. Nature did this through obvious natural processes. There are impressions left by rocks in the sediment which, again, are like snapshots of the rock. These impressions specify, or point to, a pattern outside of themselves, and yet they are completely natural. So are they CSI?

I know these are bogus examples in the evolution debate, but if they are CSI, it means that we must be more careful with the definition of the term. FCSI does not suffer from this shortcoming.

This is not to give solace to anyone questioning CSI, because those who have been questioning it are being completely disingenuous. Why do I say that? Because it is so obvious that it is a meaningful concept. If they were honest, they would point out the possible shortcomings and would then provide guidance toward a better definition to eliminate those shortcomings. But no, we get all the nonsense we see here and have seen before.

CSI, a fairly simple concept, has been subjected to the rhetorical-pretzels game for so long that my own response is to highlight that functional specificity is the relevant kind.

The fossil mould, whatever it is, is simply an event. It is complex, presumably, but not in any way that locks us down to 1 in 10^150 of a config space with no significant constraints on outcome. Providing you have clay or some cementitious mud etc, moulds will form by mechanical necessity around inclusions. Like here, we see moulds in cemented volcanic deposits surrounding stones or tree trunks. And I don’t doubt that under the mud at Plymouth we have fossils now.

CSI is not isolated from the explanatory filter.

FSCO/I is much easier to see, for those willing.

Big problem.

_______

Robb:

Please, look at what you did above that provoked my response. Whatever Ewert did or did not do does not excuse you on what you know or should know.

Which gives the lie to the oft-repeated claim that “in our experience, only intelligent agents produce” CSI. Everything has CSI under some null hypotheses and lacks CSI under other null hypotheses.

There isn’t any CSI in Lizzie’s picture, R0bb.
And yet Ewert’s calculations tell us that there are 1,068,017 bits of CSI (I checked it and got 1,062,056, but close enough) under a null hypothesis of equiprobability, and 593,493 bits of CSI under his second null hypothesis (I got the same number). If you see errors in these calculations, can you point them out to us?
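For readers following the arithmetic: a “bits of CSI” figure under an equiprobability null hypothesis is simply the surprisal of the specific outcome, i.e. -log2 of its probability. A minimal sketch in Python (the image dimensions and state count below are hypothetical illustrations, not Ewert’s actual values):

```python
import math

def surprisal_bits(n_symbols: int, n_states: int) -> float:
    """Bits of a specific outcome under an equiprobability null:
    each of n_symbols positions takes one of n_states values, all
    equally likely, so P = (1/n_states)**n_symbols and
    bits = -log2(P) = n_symbols * log2(n_states)."""
    return n_symbols * math.log2(n_states)

# Hypothetical example: a 1000 x 750 image with 2 grey levels per pixel.
print(surprisal_bits(1000 * 750, 2))  # 750000.0
```

Note that under this null the figure depends only on the size of the configuration space, which is exactly the point being argued: a different null hypothesis gives a different bit count for the same object.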

So when Dembski says “relevant chance hypotheses”, do you interpret that to mean “hypotheses that anti-IDists consider relevant”?

There aren’t any, because the anti-IDists cannot produce any.

Since you didn’t answer the question, I’ll assume that your answer is yes. So can you tell me how we determine, without chance hypotheses, whether something has CSI?

If, say, detectives want to use Dembski’s method to determine whether a fire was intentionally lit, do they have to contact anti-IDists in order to determine the relevant chance hypotheses?

What do you mean “if”? Do you think they use some other process? Please specify it.

Again you didn’t answer the question.

As to “some other process”, I don’t know what processes they use. But I imagine that if I were the investigator, I would weigh the merits of the arson hypothesis against the merits of other hypotheses.

What are your feelings on the claim of Dembski, Meyer, and others that we have uniform experience with CSI and that it invariably indicates design?

Absolutely true. Anyone who denies it is playing games and is disingenuous. I was pointing out that the definition needs some tightening to eliminate the examples I gave. There is a little bit of a definitional problem. That is all. Anyone with common sense would say design.

And yet Ewert’s calculations tell us that there are 1,068,017 bits of CSI (I checked it and got 1,062,056, but close enough) under a null hypothesis of equiprobability, and 593,493 bits of CSI under his second null hypothesis (I got the same number). If you see errors in these calculations, can you point them out to us?

You misread what he said.

So can you tell me how we determine, without chance hypotheses, whether something has CSI?

LoL! How something arose has NOTHING to do with whether or not it has CSI.

And thanks for proving that there aren’t any chance hypotheses.

As to “some other process”, I don’t know what processes they use.

In order to follow the rules of scientific investigation they have to use the explanatory filter or something exactly like it.

I’ll be happy to if you’ll tell me what I did. Was it because I referred Joe to Ewert’s article, where Ewert calculates the specified complexity of something that you consider to not be specified “in the relevant sense”?

As to my alleged strawman, whose position did I misrepresent and how did I misrepresent it?

If they were honest, they would point out the possible shortcomings and would then provide guidance toward a better definition to eliminate those shortcomings.

I’m happy to point you to critics who have done just that (see for example Elsberry and Shallit’s “SAI”, or Erik Tellgren’s suggestions). The problem is that some shortcomings are too fundamental to be fixed with a better definition (e.g. the fact that CSI doesn’t take into account the merits — or lack thereof — of the design hypothesis).

Let me say a few simple things about CSI. This is not meant in any way to be comprehensive and there have been long discussions about this and I haven’t commented here much in the last few years so I do not know what has been said in that time.

A basic example: take the carvings on Mt. Rushmore. This is an example of CSI. I do not know how one would calculate the information, or the amount of information, in the carvings, but no one in their right mind would deny that these carvings are complex, or that the pattern the carvings refer to or specify is independent of the mountain itself. Also, no one would deny they are intelligence-based. Something similar would be the stones at Easter Island, though with this example the independent pattern being specified or referred to is unknown. One might add Stonehenge, but here it is not clear what it all means, except that it is apparently related to the solar year and could have a hundred different functions.

Getting more iffy would be arrowheads and shards of rock in debris at the bottom of a mesa. Is a triangular piece of rock an arrowhead made by an intelligent source, or was it shaped by natural forces? Different aspects of the rock fragment would determine whether it was or was not, or maybe it would be undetermined. A typical forensic crime show (these shows are, coincidentally, often named CSI on US television, for “crime scene investigation”) will occasionally look at crime scenes to determine whether certain patterns are man-made or not.

My examples above were for fossils which somehow recorded an image of a real thing, almost like a photograph, one of an animal and the other of a rock. Both are the result of natural processes, and the image is not quite like Easter Island or Stonehenge, but it is a recording of another object. So is it CSI? I was pointing out that the definition of CSI may not be as definitive as one would want. There are some rather complicated mathematical definitions which I do not understand, but the concept should be translatable into a layman’s language. I am still not sure if that is true.

FCSI is where an entity actually specifies another entity independent of it, and the second entity has an identifiable function. Language, computer code and DNA/proteins are all examples of FCSI. This may not be the best explanation, but it essentially gets at the concept: A specifies B, which is independent of A, and B has a function. DNA/proteins are the only example found in nature. Not only are the proteins, and the DNA specifying them, extremely complex, but so are the systems that these proteins form. No one knows where the instructions are that specify the building of the organism and the interactions that must take place, but these must also be FCSI and incredibly complex. Other forms of FCSI are quite common and all intelligence-based.

Anyone commenting here for very long understands this. So if someone denies the obvious, then I have to ask why. There has been a pattern of objections over the years that deny the obviousness of some things which goes along with the behavior of never answering direct questions or even admitting that there may be something valid in what others are saying. When that happens the intelligent thing is to infer that they are playing games. It is not rare here. It happens numerous times every day.

If the ID position is invalid, why all the dancing? Why not go directly to the issue and say so?

Coming from you that is meaningless as you don’t understand anything ID.

And if something looks designed then that alone is enough to look into it to see if it was. And when no one can demonstrate that any other process can produce it, then we are very safe to infer design.

OTOH your position doesn’t even have any methodology except to say “it isn’t designed and we don’t care if no one knows how it arose it wasn’t designed”.

Another example of what I was talking about. Notice the very careful choice of words. There is no denial that intelligence is the cause only an academic phrasing that somehow intelligence can be true but invalid. In this case it is academic-ese for I can’t possibly dispute the obviousness of your conclusion but I can find some tiny little fault in your method, and because of this your conclusion is not to be accepted even if true.

Even the fancy math boils down to:

This thing looks designed, therefore it was designed.

Ah, yes, that is what the math says, but it also says that it is almost impossible for anything else to have caused it. Now ID does not say absolutely that there is no other possible explanation, only that it is extremely unlikely, while those who oppose ID say it is absolutely not a consideration. The implied hypocrisy in this statement is amazing.

Um, yes, Joe, it is. In that paper Dembski calls it “chi”, or “specified complexity”, but it’s the same thing as he and others elsewhere have called Complex Specified Information, as Dennis Jones points out here:

Complex Specified Information (CSI) is also called specified complexity.

Another example of what I was talking about. Notice the very careful choice of words.

Indeed. Words matter.

There is no denial that intelligence is the cause

The “denial” is that the cause has been demonstrated.

only an academic phrasing that somehow intelligence can be true but invalid.

Of course it can.

“This vase must have been deliberately smashed by my teenager” may be true, but it’s an invalid conclusion merely from the evidence of the smashed vase. It could have been the cat, or the wind.

It is perfectly possible to arrive at a true conclusion via an invalid reasoning process.

In this case it is academic-ese for I can’t possibly dispute the obviousness of your conclusion but I can find some tiny little fault in your method,

No, it isn’t. That’s a mistranslation.

It’s scientific-ese for: “your conclusion may or may not be correct, but it doesn’t follow from your evidence and/or reasoning”. It’s one of the questions peer reviewers routinely have to answer when reviewing scientific papers: are the paper’s conclusions supported by the evidence and argument?

Note, the question is NOT: “is the conclusion true?”

and because of this your conclusion is not to be accepted even if true.

No. This forms no part of my position, nor is it implied by what I wrote.

It is high time ID proponents recognised that “God didn’t do it” is not a scientific claim, and not one made by anyone qua scientist.

Robb: I already did, and right now I have neither time nor energy for the usual rhetorical games from your side. The very presentation of a case long since known to be chance plus necessity leading to something non-specific, ash bands in snow, aptly illustrates my point.

To be CSI, there has to be JOINT complexity and specificity on a given aspect of an entity, leading to isolated islands T in a vast config space W of beyond-astronomical search scope. I think Dembski’s 500-bit limit is only applicable on the solar system scale (our practical universe for chemical-scale interactions), and 1,000 bits tightens that up. No blind chance and mechanical necessity based search process on the scope of our solar system can adequately sample a space of W = 3.27*10^150 or more possibilities so as to make finding such isolated zones T plausible [a 1-straw to 1,000-light-year-haystack ratio of search to space], and 1,000 bits much more overwhelmingly swamps the observable cosmos. Remember, the latter holds even with every atom sampling a config through some blind process every 10^-45 s (faster than the Planck time, ~5.4*10^-44 s). Not even such a process can sample 1 in 10^150 of the scope for something of 1,000 bits complexity.

Notice, I am exactly not specifying any particular chance hyp, nor estimating any probability; I am applying sample theory to give a cruder but very effective result, as I have pointed out for years. Some lotteries are unwinnable for needle-in-haystack search-challenge reasons, as well you know or should know. And if you want a concrete context, think OOL, where chemicals in a pond need to get to a functional, self-replicating, metabolising, code- and algorithm-using cell. KF
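The 500-bit figure and the atom-sampling bound above can be sanity-checked with back-of-envelope arithmetic; the atom count, sampling rate and timescale below are the comment’s own order-of-magnitude assumptions, not measured values:

```python
# Back-of-envelope check of the 500-bit threshold.
# 500 bits of configuration space means W = 2^500 possibilities.
W = 2 ** 500
print(f"W = {W:.3e}")            # ~3.273e+150

# Generous upper bound on blind samples, per the comment's figures:
# ~1e80 atoms in the observable universe, each sampling one
# configuration every 1e-45 s, over ~1e17 s of cosmic history.
samples = 1e80 * 1e17 / 1e-45    # ~1e142 samples

# Fraction of the 500-bit space such a search could cover:
print(f"fraction sampled = {samples / W:.1e}")
```

The point of the estimate is only the ratio: even under these deliberately generous assumptions, the number of samples is roughly nine orders of magnitude short of the space, which is the “1 in 10^150” style of claim being made.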

He is using a similar definition to Dembski’s except raising the threshold slightly – in other words defining CSI in terms of the probability of the observed under a chance hypothesis, given the “probabilistic resources” of the observable universe.
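For reference, the metric being compared is, as I recall it from Dembski’s 2005 paper “Specification: The Pattern That Signifies Intelligence” (readers should check the original):

```latex
\chi = -\log_2\!\left[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\right]
```

where T is the observed pattern, H the relevant chance hypothesis, \varphi_S(T) a measure of the pattern’s descriptive complexity, and 10^{120} an upper bound on the universe’s probabilistic resources; Dembski infers design when \chi exceeds the threshold.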

Hope your son is doing well Kairosfocus. I spent the weekend playing Caribbean music, and it kept me thinking of you.

WmAD is giving fundamentally an INFORMATION metric, as taking it one step forward — as I did and pointed out to you — will show. We are fully justified to take that step and to hold that the info and redundancy patterns in the living cell enfold and reflect all relevant chance hyps.

Once that is done it is quite plain that the functionally specified complexity involved in protein fold domains is far beyond the reasonable reach of ANY chance and mechanical necessity blind process on the gamut of solar system or observed cosmos. For needle in haystack search reasons.

As I just said to Robb the OOL context makes the point most plainly.

Right now, I have neither time nor energy to go on another one of those endless crocodile death roll rhetorical circles games where the material point is blatantly obvious and cogently compelling save to the ideologically predisposed.

If you want to break ID on the merits, show us a case where OOL is EMPIRICALLY grounded on blind search such as has been discussed so many times.

You don’t have that, so the only thing you can do is obfuscate the issue.


Game over.

KF

I’m not “obfuscating”, KF. I’m just pointing out to Joe that Dembski’s definition of CSI is essentially the same as yours.

If you want to break ID on the merits, show us a case where OOL is EMPIRICALLY grounded on blind search such as has been discussed so many times.

Classic default argument. If we can’t explain OoL, then “Intelligent Design” is assumed by default. Notwithstanding, evolutionary theory is silent on the origin of life. The ToE is a scientific theory explaining life’s diversity, not its origin.

BTW KF, take a break. This site and everything ever posted on it could disappear tomorrow and it would matter not one jot. Some things, especially family are much more important.

Classic default argument. If we can’t explain OoL, then “Intelligent Design” is assumed by default.

Absolutely not. It just makes ID more likely as the reason for OOL. There is never an absolute claim, such as that ID cannot possibly be accepted.

Natural explanations are always in the set of potential explanations. It is just that ID is also in that set but some have arbitrarily excluded ID from this set.

Notwithstanding, evolutionary theory is silent on the origin of life. ToE is a scientific theory explaining life’s diversity not its origin.

How is evolutionary theory a scientific theory? Of course there is no definition here, so “evolutionary theory” could mean anything, including ID. There is no evidence for any naturalistic theory proposed by man, but there is evidence for ID, which can explain the diversity of life. So are you advocating ID? It certainly sounds like it.

That is not the case. I realise it is widely believed in ID circles but the cases are not symmetrical.

There is no scientific conclusion that no deity was involved. All we have are provisional explanatory models, and they are never sufficient. Nor do they, or can they, exclude a non-material prime mover, or even an interventionist mover.

There is no “default” to “materialism” in science. It doesn’t even make any sense. Science is the process of discovering predictive models. If part of what we observe is unpredictable by virtue of having a non-material cause, then science won’t be able to discover it. It certainly can’t exclude it.

It’s a scientific theory by virtue of generating testable hypotheses that can, and are, tested against new data.

Of course there is no definition here so “evolutionary theory” could mean anything including ID.

It could, but it is usually taken to refer to the body of theory that posits that all known life on earth descended from a universal common ancestral population of very simple living things, and diversified by a process of adaptation, speciation and drift.

There is no evidence for any naturalistic theory proposed by man

There is lots of evidence for many naturalistic theories proposed by man (and woman), including the theory of evolution. None of them exclude the possibility that all that exists was created by a divine mind with the intention of bringing about what has in fact occurred.

but there is evidence for ID which can explain the diversity of life.

Not much (any?) in the form of positive evidence. All the ID arguments have been negative (X is unlikely under non-design, therefore design).

1. ID uses a positive argument based upon finding high levels of complex and specified information.

As an ID advocate, I disagree with this. “Positive argument” means to most people, “we see God in action creating designs in the present day, therefore the designs of life in the present were made by God.” That is the form of a positive argument. ID is mostly based on analogy and heavy amounts of negative argument.

This is equivocating on what most people mean by “positive argument”. I don’t like it when Darwinists equivocate, and it’s rare that ID proponents equivocate; maybe this is the only instance I know of in the ID literature of equivocating on the notion of a “positive argument”. Making claims like this is overplaying one’s hand; it does not lend credibility or believability to one’s claims.

As I’ve argued, it may suffice to say that life resembles designed objects; that’s a defensible claim. That’s a scientifically defensible claim…

Arguing whether there is a Designer, and an Intelligence at the root of the designs… that is a matter of accepting a reasonable (but not absolutely provable) postulate.

The inference to design is reasonable, it is almost hard for some (myself included) to believe the opposite, but let’s not say ID makes “positive arguments”, it’s vague at best and equivocating at worst.

The argument from complexity is not a “positive argument” in the sense I would use the term – it’s specifically the argument that what is observed is UNlikely under the null of non-design.

A positive argument would have this kind of form:

If X is designed then we would see A, not B, whereas if X evolved/resulted from physics and chemistry we would see B, not A.

But to do that you’d have to have a specific design hypothesis (e.g. “front-loading” or “intermittent intervention”, or “intervention at OOL”).

And that is what ID proponents have traditionally shied away from doing.

Unless ID has a specific hypothesis, all phenomena are compatible with ID, which leaves only negative arguments (ID can account for everything, therefore anything unaccounted for by non-design must be due to design).

Elizabeth, I understand what you mean about heterogeneity, but there shouldn’t be THAT much heterogeneity. Otherwise, the domain cannot be identifiable as a domain.

So, what I would expect is maybe 4-6 contested definitions, of which 2-3 are the primary schools, 2 emphasize some small aspects, and 2 are outlier or radical alternatives.

What I would not expect is either absence of definition or refusal to provide a definition. ID proponents seem to me to do both. It just makes no sense to me to have a Theory of X where no one wants to talk about what X actually is and is not.

Can you imagine having, for instance, a Theory of Mind in which no one discusses what the mind is and does but rather only repetitively asserts the existence of the mind and the failure of no-mind theories?

In order to follow the rules of scientific investigation they have to use the explanatory filter or something exactly like it.

So if they were to weigh the merits of the design hypothesis against the merits of other hypotheses, rather than using the EF, that would be against the rules of scientific investigation? Where did you learn these rules?

No blind chance and mechanical necessity based search process on the scope of our solar system can adequately sample a space for 500 bits to make finding such isolated zones T plausible in W = 3.27*10^150 or more possibilities [a 1 straw to 1,000 light year haystack ratio of search to space], and 1,000 bits much more overwhelmingly swamps the observable cosmos. Remember the latter has every atom sampling a config through any blind process every 10^-45 s (faster than the Planck time, ~5.4*10^-44 s). Not even such a process can sample 1 in 10^150 of the scope for something of 1,000 bits complexity. Notice, I am exactly not specifying any particular chance hyp, nor estimating any probability, I am applying sample theory to give a cruder but very effective result — as I have pointed out for years.