At this point, with all due respect, you look like someone making stuff up to fit your predetermined conclusion.

I know you think so.

[a –> Jerad, I will pause to mark up. I would further with all due respect suggest that I have some warrant for my remark, especially given how glaringly you mishandled the design inference framework in your remark I responded to earlier.]

{Let me add a diagram of the per aspect explanatory filter, using the more elaborated form this time}

The ID Inference Explanatory Filter. Note in particular the sequence of decision nodes.

You have for sure seen the per aspect design filter and know that the first default explanation is that something is caused by a law of necessity, for good reason; that is the bulk of the cosmos. You know similarly that highly contingent outcomes have two empirically warranted causal sources: chance and choice.

You know full well that the reason chance is the default is to give the plain benefit of the doubt to chance, even at the expense of false negatives.

I suppose. Again, I don’t think of it like that. I take each case and consider its context before deciding what I think the most likely explanation to be.
[b –> You have already had adequate summary on how scientific investigations evaluate items we cannot directly observe based on traces and causal patterns and signs we can directly establish as reliable, and comparison. This is the exact procedure used in design inference, a pattern that famously traces to Newton’s uniformity principle of reasoning in science.]

I think SETI signals are a good example of really having no idea what’s being looked at.

[c –> There are no, zip, zilch, nada, SETI signals of consequence. And certainly no coded messages. But it is beyond dispute that if such a signal were received, it would be taken very seriously indeed. In the case of dFSCI, we are examining patterns relevant to coded signals. And, we have a highly relevant case in point in the living cell, which points to the origin of life. Which of course is an area that has been highlighted as pivotal on the whole issue of origins, but which is one where you have determined not to tread any more than you have to.]

I suppose, in that case, they do go through something like your steps . . . first thing: seeing if the new signal is similar to known and explained stuff.

[d –> If you take off materialist blinkers for the moment and look at what the design filter does, you will see that it is saying, what is it that we are doing in an empirically based, scientific explanation, and how does this relate to the empirical fact that design exists and affects the world leaving evident traces? We see that the first thing that is looked for is natural regularities, tracing to laws of mechanical necessity. Second — and my home discipline pioneered in this in C19 — we look at stochastically distributed patterns of behaviour that credibly trace to chance processes. Then it asks, what happens if we look for distinguishing characteristics of the other cause of high contingency, design? And in so doing, we see that there are indeed empirically reliable signs of design, which have considerable relevance to how we look at among other things, origins. But more broadly, it grounds the intuition that there are markers of design as opposed to chance.]
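The decision cascade described here (necessity first, then chance, then design only for specified complexity past a threshold) can be sketched as a small function. This is a minimal illustration, not an official implementation: the predicate names and the 500-bit default are assumptions standing in for the full per-aspect analysis.

```python
# Minimal sketch of the per-aspect explanatory filter described above.
# The predicate names and the 500-bit threshold are illustrative assumptions.

def explanatory_filter(is_law_like, complexity_bits, is_specified,
                       threshold_bits=500):
    """Return the default causal explanation for one aspect of an object.

    Order of defaults: (1) mechanical necessity for law-like regularities,
    (2) chance for contingent outcomes below the complexity threshold or
    lacking specificity, (3) design only when specification AND complexity
    beyond the threshold are jointly present.
    """
    if is_law_like:
        return "necessity"      # first default: natural regularity
    if is_specified and complexity_bits >= threshold_bits:
        return "design"         # only when BOTH criteria are met
    return "chance"             # default for high contingency
```

Note that the ordering builds in the bias toward false negatives mentioned above: a designed but simple or law-like outcome is deliberately classified as necessity or chance.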

And you know the stringency of the criterion of specificity (especially functional) JOINED TO complexity beyond 500 or 1,000 bits worth, as a pivot to show cases where the only reasonable, empirically warranted explanation is design.

I still think you’re calling design too early.

[e –> Give a false positive, or show warrant for the dismissal. Remember, just on the solar system scope, we are talking about a result that identifies that by using the entire resources of the solar system for its typically estimated lifespan to date, we could only sample something like 1 straw to a cubical haystack 1,000 light years across. If you think that the sampling theory result that a small but significant random sample will typically capture the bulk of a distribution is unsound, kindly show us why, and how that affects sampling theory in light of the issue of fluctuations. Failing that, I have every epistemic right to suggest that what we are seeing instead is your a priori commitment to not infer design peeking through.]
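The "1 straw to a haystack" figure rests on an order-of-magnitude tally of the solar system's sampling resources against a 500-bit configuration space. A rough sketch of that arithmetic, using the commonly quoted assumed inputs (about 10^57 atoms, about 10^17 seconds, about 10^14 chemical-scale events per second), none of which are measured values:

```python
# Rough arithmetic behind the "straw to a haystack" illustration.
# All three input figures are assumed orders of magnitude, not measurements.
from math import log10

config_space = 2 ** 500            # states for 500 bits (~3.3e150)
atoms = 10 ** 57                   # assumed atoms in the solar system
seconds = 10 ** 17                 # assumed lifespan to date, in seconds
events_per_second = 10 ** 14       # assumed fastest chemical event rate

max_samples = atoms * seconds * events_per_second   # ~1e88 observations
fraction = max_samples / config_space               # share of space sampled

print(f"config space ~10^{log10(config_space):.1f}")
print(f"max samples  ~10^{log10(max_samples):.1f}")
print(f"fraction sampled ~10^{log10(fraction):.1f}")
```

On these assumptions the maximum sample is around 10^88 of roughly 10^150.5 configurations, i.e. a fraction on the order of 10^-62, which is the quantitative content of the straw-and-haystack image.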

And, to be honest, the only things I’ve seen the design community call design on is DNA and, in a very different way, the cosmos.

[f –> Not so. What happens is that design is most contentious on these, but in fact the design inference is used all the time in all sorts of fields, often on an intuitive or semi-intuitive basis. As just one example, consider how fires are explained as arson vs accident. Similarly, how a particular effect in our bodies is explained as a signature of drug intervention vs chance behaviour or natural mechanism. And of course there is the whole world of hypothesis testing by examining whether we are in the bulk or the far skirt and whether it is reasonable to expect such on the particularities of the situation.]

The real problem, with all respect, as already highlighted, is obviously that this filter will point out cell based life as designed. Which, even though you do not have an empirically well warranted causal explanation otherwise, you do not wish to accept.

I don’t think you’ve made the case yet.

[f –> On the evidence it is plain that there is a controlling a priori commitment at work, so the case will never be perceived as made, as there will always be a selectively hyperskeptical objection that demands an increment of warrant that is, whether by calculation or by unreflective assertion, unreasonable to demand by comparison with essentially similar situations. Notice how ever so many swallow a timeline model of the past without batting an eye, but strain at a design inference that is much more empirically reliable on the causal patterns and signs that we have. That’s a case of straining at a gnat while swallowing a camel.]

I don’t think the design inference has been rigorously established as an objective measure.

[g –> Dismissive assertion, in a context where “rigorous” is often a signature of selective hyperskepticism at work, cf. the above. The inference on algorithmic digital code that has been the subject of Nobel Prize awards should be plain enough.]

I think you’ve decided that only intelligence can create stuff like DNA.

[h –> Rubbish, and I do not appreciate your putting words in my mouth or thoughts in my head that do not belong there, to justify a turnabout assertion. You know, or full well should know, that — as is true for any significant science — a single well documented case of FSCO/I reliably coming about by blind chance and/or mechanical necessity would suffice to break the empirical reliability of the inference that the only observed — billions of cases — cause of FSCO/I is design. That you are objecting by projecting question-begging (that is exactly what your assertion means) instead of putting forth clear counter-examples is strong evidence in itself that the observation is quite correct. That observation is backed by the needle in the haystack analysis that shows why, beyond a certain level of complexity joined to the sort of specificity that makes relevant cases come from narrow zones T in large config spaces W, it is utterly unlikely to observe cases E from T based on blind chance and mechanical necessity.]

I haven’t seen any objective way to determine that except to say: it’s over so many bits long so it’s designed.

[i –> Strawman caricature. You know better, a lot better. You full well know that we are looking at complexity AND specificity that confines us to narrow zones T in wide spaces of possibilities W, such that the atomic resources of our solar system or the observed cosmos will be swamped by the amount of haystack to be searched. You have been given the reasoning from sampling theory as to why blind samples comparable to 1 straw to a hay bale 1,000 light years across (as thick as our galaxy) will reliably pick up only the bulk, even if the haystack were superposed on our galaxy near earth. Indeed, just above you had opportunity to see a concrete example of a text string in English and how easily it passes the specificity-complexity criterion.]

And I just don’t think that’s good enough.

[j –> Knocking over a strawman. Kindly, deal with the real issue that has been put to you over and over, in more than adequate detail.]

But that inference is based on what we do know, the reliable cause of FSCO/I and the related needle in the haystack analysis. (As was just shown for a concrete case.)

But you don’t know that there was an intelligence around when one needed to be around which means you’re assuming a cause.

[k –> Really! You have repeatedly been advised that we are addressing inference on empirically reliable sign per patterns we investigate in the present. Surely, that we see that reliably, where there is a sign, we have confirmed the presence of the associated cause, is an empirical base of fact that shows something that is at least a good candidate for being a uniform pattern. We back it up with an analysis that shows on well accepted and uncontroversial statistical principles, why this is so. Then we look at cases where we see traces from the past that are comparable to the signs we just confirmed to be reliable indices. Such signs, to any reasonable person not ideologically committed to a contrary position, will count as evidence of similar causes acting in the past. But more tellingly, we can point to other cases such as the reconstructed timeline of the earth’s past where on much weaker correlations between effects and putative causes, those who object to the design inference make highly confident conclusions about the past and in so doing, even go so far as to present them as though they were indisputable facts. The inconsistency is glaringly obvious, save to the true believers in the evo mat scheme.]

And you’re not addressing all the evidence which points to universal common descent with modification.

[l –> I have started from the evidence at the root of the tree of life and find that there is no credible reason to infer that chemistry and physics in some still warm pond or the like will assemble, at once or incrementally, a gated, encapsulated, metabolising entity using a von Neumann, code based self replicator, based on highly endothermic and information rich macromolecules. So, I see there is no root to the alleged tree of life, on Darwinist premises. I look at the dFSCI in the living cell, a trace from the past, note that it is a case of FSCO/I, and on the pattern of causal investigations and inductions already outlined I see I have excellent reason to conclude that the living cell is a work of skilled ART, not blind chance and mechanical necessity. Thereafter, any evidence of common descent or the like is to be viewed in that pivotal light. And I find that common design rather than descent is superior: the systematic pattern of — too often papered over — islands of molecular function (try protein fold domains), ranging up to the suddenness, stasis and scope of fresh FSCO/I involved in novel body plans and reflected in the quarter million plus fossil species, plus mosaic animals etc. that point to libraries of reusable parts, and more, gives me high confidence that I am seeing a pattern of common design rather than common descent. This is reinforced when I see that ideological a prioris are heavily involved in forcing the Darwinist blind watchmaker thesis model of the past.]

We’re going around in circles here.

[m –> On the contrary, what is coming out loud and clear is the ideological a priori that drives circularity in the evolutionary materialist reconstruction of the deep past of origins. KF]>>

If a string for which we have correctly assessed dFSCI is proved to have historically emerged without any design intervention, that would be a false positive. dFSCI has been correctly assessed, but it does not correspond empirically to a design origin.

It is important to note that no such example is empirically known. That’s why we say that dFSCI has 100% specificity as an indicator of design.

If a few examples of that kind were found, the specificity of the tool would be lower. We could still keep some use for it, but I admit that its relevance for a design inference in such a fundamental issue like the interpretation of biological information would be heavily compromised.

As you should know, the first default is to look for mechanical necessity. The neutron star model of pulsars suffices to explain what we see.

Homing beacons come in networks — here I look at DECCA, LORAN and the like up to today’s GPS — and are highly complex nodes. They are parts of communication networks with highly complex and functionally specific communication systems, where encoders, modulators, transmitters, receivers, demodulators and decoders have to be precisely and exactly matched.

Just take an antenna tower if you don’t want to look at anything more complex.

KF>>

__________

I am fairly sure that this discussion, now in excess of 1,500 comments, lets us all see what is really going on in the debate over the design inference. END

F/N: I have moved discussion for the TSZ-Jerad continuation thread here, in the main because of bandwidth and loading issues. We should note that a month on, there has been no response to the challenge to supporters of the blind watchmaker scheme for origins to submit to UD a 6,000 word essay justifying the view on empirical evidence from OOL on. KF

KF (799 from previous thread . . . do I get a prize for dragging things out to the third incarnation? How many hits do I generate?)

As you should know, the first default is to look for mechanical necessity. The neutron star model of pulsars suffices to explain what we see.

Yes, now. But when they were first detected no one knew what they were. So, at that time, were they candidate SCI signals? Some researchers at the time thought they might be signs of alien life.

Follow on thought: if we detected a regular signal and could not find a source would that be a candidate of SCI?

Homing beacons come in networks — here I look at DECCA, LORAN and the like up to today’s GPS — and are highly complex nodes. They are parts of communication networks with highly complex and functionally specific communication systems, where encoders, modulators, transmitters, receivers, demodulators and decoders have to be precisely and exactly matched.

If you detected a regular signal coming from deep space and you couldn’t find a source could it be a homing beacon sent out by a space-going civilisation from another planet? How would you decide?

The regularity of the signal, from the first, led to a natural regularity due to mechanical necessity as the main candidate. Yes, there was a flurry about little green men over the Wow! signal, but that was never serious.

Next, your suggestion of a homing beacon as a nav signal with a regularity would be classified by the explanatory filter — as it is designed to do — as natural regularity. At most, a false negative.

The filter was never designed to detect any and all cases of design (it is not a universal decoder algorithm, and we have good reason to believe such are not feasible), just those that are unequivocal per tested and reliable signs.

But how can we look for something that has dFSCI without any clear idea of what dFSCI is in any real object or process that we might care to consider?

Sounds like a personal problem, Alan, as dFSCI has been properly defined. IOW many people know what dFSCI is. If YOU do not then blame yourself.

Define dFSCI as something that is 100% an indicator of design and, bingo, all objects having dFSCI are designed.

Spoken like a loser.

AGAIN dFSCI is a design indicator because every time we have observed dFSCI and knew the cause it was always via some agency. ALL of our observations and experiences demonstrate that only agency can produce dFSCI.

And THAT means, Alan, if someone, not you because you are useless, ever steps up and demonstrates some “necessity mechanism” can produce dFSCI then our inference is shot down. But that is going to take work and we know evos are not into that.

(I admit to a couple of small diversions from Lizzie’s principles this morning – but given Gpuccio’s tirade against all of us, and having been called a [snip] and “moron” by Joe, I feel I am allowed some license).

Well just stop acting like a [snip] and a moron and I won’t be able to report on those observances. Unless of course it isn’t an act…
_______Joe, kindly keep questionable words out of UD threads. Warning. KF

True – but our point is that Gpuccio’s definition of dFSCI includes no necessity mechanism clause.

The definition of dFSCI does NOT include any cause. dFSCI is INDEPENDENT of the source.

That has been explained to you many times now and you refuse to understand it.

What are people supposed to think of you, seeing you act like this? Do you really think you are helping your case by ignoring what is posted? Or do you think people will say that you are a [snip] for doing so?

There are no, zip, zilch, nada, SETI signals of consequence. And certainly no coded messages. But it is beyond dispute that if such a signal were received, it would be taken very seriously indeed. In the case of dFSCI, we are examining patterns relevant to coded signals. And, we have a highly relevant case in point in the living cell, which points to the origin of life. Which of course is an area that has been highlighted as pivotal on the whole issue of origins, but which is one where you have determined not to tread any more than you have to.

My point is, what if something like the following happens again AND we can’t find a natural source.

From Wikipedia:

The first pulsar was observed on November 28, 1967, by Jocelyn Bell Burnell and Antony Hewish. The observed emission from the pulsar was pulses separated by 1.33 seconds, originated from the same location on the sky, and kept to sidereal time. In looking for explanations for the pulses, the short period of the pulses eliminated most astrophysical sources of radiation, such as stars, and since the pulses followed sidereal time, it could not be man-made radio frequency interference. When observations with another telescope confirmed the emission, it eliminated any sort of instrumental effects. At this point, Burnell notes of herself and Hewish that “we did not really believe that we had picked up signals from another civilization, but obviously the idea had crossed our minds and we had no proof that it was an entirely natural radio emission. It is an interesting problem—if one thinks one may have detected life elsewhere in the universe, how does one announce the results responsibly?” Even so, they nicknamed the signal LGM-1, for “little green men” (a playful name for intelligent beings of extraterrestrial origin). It was not until a second pulsating source was discovered in a different part of the sky that the “LGM hypothesis” was entirely abandoned. Their pulsar was later dubbed CP 1919, and is now known by a number of designators including PSR 1919+21, PSR B1919+21 and PSR J1921+2153. Although CP 1919 emits in radio wavelengths, pulsars have, subsequently, been found to emit in visible light, X-ray, and/or gamma ray wavelengths.

So, what if it happens again and there is only one source, not multiple sources. One source emitting at regular intervals at a fixed frequency coming from a fixed point in space. Is that a SCI candidate?

You know, or full well should know, that — as is true for any significant science — a single well documented case of FSCO/I reliably coming about by blind chance and/or mechanical necessity would suffice to break the empirical reliability of the inference that the only observed — billions of cases — cause of FSCO/I is design. That you are objecting by projecting question-begging (that is exactly what your assertion means) instead of putting forth clear counter-examples is strong evidence in itself that the observation is quite correct. That observation is backed by the needle in the haystack analysis that shows why, beyond a certain level of complexity joined to the sort of specificity that makes relevant cases come from narrow zones T in large config spaces W, it is utterly unlikely to observe cases E from T based on blind chance and mechanical necessity.

Your argument is based on a random search of a whole configuration space. And that’s NOT how new body plans are developed according to evolutionary theory so the argument is not applicable for anything after the first basic replicator as an argument against universal common descent with modification. I don’t think it’s going to work before then either ’cause chemicals don’t form bonds willy-nilly and so there’s no need or cause to search the whole configuration space then either.

I think the ‘functional’ configurations are much more likely than you’re guessing in your model.

You know better, a lot better. You full well know that we are looking at complexity AND specificity that confines us to narrow zones T in wide spaces of possibilities W, such that the atomic resources of our solar system or the observed cosmos will be swamped by the amount of haystack to be searched. You have been given the reasoning from sampling theory as to why blind samples comparable to 1 straw to a hay bale 1,000 light years across (as thick as our galaxy) will reliably pick up only the bulk, even if the haystack were superposed on our galaxy near earth. Indeed, just above you had opportunity to see a concrete example of a text string in English and how easily it passes the specificity-complexity criterion.

Living systems don’t do random searches across whole configuration spaces to find new body plans. I don’t know where this idea comes from. You’re assuming universal common descent is not true and then coming up with an argument why ‘islands of function’ are too hard to find via a random search.

How do you know universal common descent is not correct? Let’s start with that.

Really! You have repeatedly been advised that we are addressing inference on empirically reliable sign per patterns we investigate in the present. Surely, that we see that reliably, where there is a sign, we have confirmed the presence of the associated cause, is an empirical base of fact that shows something that is at least a good candidate for being a uniform pattern. We back it up with an analysis that shows on well accepted and uncontroversial statistical principles, why this is so. Then we look at cases where we see traces from the past that are comparable to the signs we just confirmed to be reliable indices. Such signs, to any reasonable person not ideologically committed to a contrary position, will count as evidence of similar causes acting in the past. But more tellingly, we can point to other cases such as the reconstructed timeline of the earth’s past where on much weaker correlations between effects and putative causes, those who object to the design inference make highly confident conclusions about the past and in so doing, even go so far as to present them as though they were indisputable facts. The inconsistency is glaringly obvious, save to the true believers in the evo mat scheme.

I’m happy to draw inferences to causes known to be operating at the given time or reasonably likely to have been operating at the given time. As you say, that’s a common method of reasoning in many fields.

But only in ID are people inferring to a cause that has no independent proof of being in existence at the time in question. That does not follow from common practice.

Erich von Däniken tried something like that: look, these objects are very complicated and we don’t know how to make them (which wasn’t true, by the way), so I can hardly imagine our ancient ancestors would have known how. It must have been aliens from other planets! Here’s a picture which I think looks like a spaceship.

You can’t infer to something if you’re not sure it was there when there’s other theories which explain the phenomena which do not require that assumption.

No, it doesn’t. There isn’t any genetic evidence that supports the alleged transformations. The fossil evidence shows fish -> tetrapods -> fish-a-pods: out of sequence.

No genetic evidence? So what’s your explanation for protein functional redundancy? And for DNA functional redundancy? And for transposons? And for redundant pseudogenes? And for ERVs? All those things are consistent with universal common descent. And that’s only part of the genetic evidence.

What about the bio-geographic evidence? Why are lemurs naturally endemic to one island? I know you think lots and lots of life was pre-coded so . . . where is that coding and how does it limit lemurs to one island?

And if you don’t have any idea on the number of mutations it takes, then you don’t have science.

Do you know how many mutations it takes?

What about my other, simple question: if a signal from space was detected that was on a constant frequency at a constant interval from a single location would it be a candidate for being SCI?

Thank you for a post that makes sense (I don’t think your previous one did).

The definition of dFCSI is not circular. Something has dFCSI if it has enough functional information that this cannot have arisen by random processes like mutation, and if that functional information cannot be explained by deterministic processes (which include natural selection). So far nothing circular about that.

I am happy that somebody can still use reason correctly.

Drawing from the presence of dFCSI a conclusion that a genotype is the result of Design is

* redundant. We already concluded that it cannot be explained by nonintelligent natural processes, which leaves only Design,

* unnecessary. For the same reason.

* circular, because we used property X of a genotype to conclude that dFCSI was present in it, but then used the presence of dFCSI in that genotype to conclude that it has property X. (Property X is our inability to explain the genotype’s presence by random or by nonintelligent deterministic means).

Well, the first points are frankly nonsense:

a) We infer design exactly because we have concluded that dFSCI is present. The fact that something is unlikely as a random output and is not explained by a necessity mechanism does not logically imply that it is designed. Many designed things could in theory originate from RV, or from a necessity mechanism, if they are simple enough. And we don’t know how a designer generates things with high functional complexity.

Therefore, the connection between dFSCI and a design origin cannot be taken for granted on logical grounds: it must be based on empirical observation, the observation that dFSCI detects design with 100% specificity in all cases where the true historical origin can be ascertained.

For the same reason, dFSCI is not unnecessary at all. We must distinguish between simple biological molecules, which can be explained by RV or necessity, and highly complex polymers which cannot. We cannot just look at them and decide; we need a metric.

Now, the “circular” point. You have already conceded that the definition is not circular. Thank you for that.

So, what is in the definition?

1) A functional specification

2) High digital complexity linked to that specification

3) No known necessity mechanism that can explain that complexity

Is that OK?

Now, what is in the inference?

4) The empirical observation that all strings for which we assess dFSCI as present have a designed origin.

What in 4) is logically implied by the definition? Nothing.

To have a design origin is a fact that is ascertained by empirical observation. It is not a property of the object. It is not an inference. It is not a deduction. It is an observable fact.

We observe, by experience, a strong connection (with 100% specificity) between the property of exhibiting dFSCI (that is, point 1,2 and 3), and the fact of having origin in a design process.

No circularity, as everybody can see.

Now, please, if you go on using the word “circularity”, please explain what is wrong in what I have said.

As everyone can see, your statement:

“but then used the presence of dFCSI in that genotype to conclude that it has property X.”

is completely wrong. We use property X to infer origin O.

Why do you say something completely different, after I have clearly specified this point a lot of times in the last few days? Is it a misunderstanding on your part? Simple mental confusion on your part? A simple lie on your part?

I don’t know any more. You tell.

I see that gpuccio is quite angered by characterizations like the above and is calling some of the people who make them liars.

Yes, I am.

If we could come up with even one case in which there was a “known” case of dFCSI that resulted from natural selection, then this would be a Big Problem for the use of dFCSI to infer Design.

That’s true.

But “known” to who? I would say that a simple GA case with enough genes will bring about dFCSI. (But gpuccio rejects GAs as examples, on what I think are insufficient grounds).

Yes, I reject them.

But that’s not the point. I have inferred dFSCI explicitly for many protein domains (all those that, in Durston’s paper, exhibit more than 150 bits of functional information).

I am taking my risks. If you can show a credible, detailed explanation for any of them, I will promptly admit that all my theory about the application of dFSCI to biological information has received a very hard, maybe mortal, blow.
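The 150-bit figure taken from Durston's paper is a functional-information measure of the general form FI = -log2(|target space| / |search space|). A toy illustration with hypothetical numbers (the assumed target-space size is invented for the example, not taken from Durston):

```python
# Toy version of the functional-information ("fits") measure referenced above.
# The example sizes are hypothetical, chosen only to illustrate the
# 150-bit threshold; they are not values from Durston's paper.
from math import log2

def functional_bits(target_space, search_space):
    """Functional information in bits: -log2(target / search)."""
    return -log2(target_space / search_space)

# Hypothetical protein domain: 100 residues, 20 possibilities each,
# with an assumed 1e80 sequences preserving the defined function.
search = 20 ** 100          # ~1.3e130 possible sequences
target = 10 ** 80           # assumed number of functional sequences
fi = functional_bits(target, search)
print(f"{fi:.0f} functional bits")
```

On these invented inputs the measure comes out around 166 bits, above the 150-bit line; the whole dispute in the thread is of course about whether the real target-space estimates are credible, which the sketch does not address.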

In any case, if someone does come up with a natural selection mechanism to explain the presence of a putative case of dFCSI, does that case then automatically become not a case of dFCSI?

This is a good question, and it deserves a clear answer, also because I have seen a lot of discussion about that, most of it very confused.

First of all, I must say that dFSCI is for me a property of the object, which can be objectively assessed in the object. However, it is not just “observed” in the object, because it is a complex property that needs an assessment through an integrated judgement.

If that judgment is given correctly, according to the definition, I would say that any successive falsification of that judgement is a falsification of the utility of dFSCI itself, IOWs, the demonstration of a false positive.

I will be more clear. But I must say, before going on, what I consider a correct assessment of dFSCI:

a) The function must be defined explicitly, and must be objectively measurable.

b) The threshold must be appropriate for the system being considered, and for its probabilistic resources.

c) The target space/search space ratio must be approximated as well as possible, and must be credible.

d) All strings that exhibit high regularity and compressibility should not be considered as exhibiting dFSCI, just to be cautious. Those outputs can very likely be explained by necessity mechanisms.

e) For all strings whose formal appearance is of the “pseudo-random” type, with no apparent order or regularity, we can usually infer dFSCI with safety, if all other conditions are present. However, a thorough consideration of the laws that act in the system must be done, and we must be reasonably sure that those laws have no special connection with the specific string we are considering.

If all these conditions are well satisfied, I consider the assessment of dFSCI as correct. As you can see, there is a lot of work to be done to assess a property that you label as “redundant” and “unnecessary”.

Now, I do believe that if all those properties are satisfied, no future explanation will ever be found for that dFSCI, except obviously design. That is not a definition, nor an inference, nor a fact, just to be clear. Let’s call it “a prediction”. Empirical experience will confirm that prediction, or will falsify it.

It is not so strange, after all. Even Mark has admitted that he does not really believe that any future necessity mechanism will ever be found to explain that sonnet. Let’s say that I am as sure that no mechanism will ever explain protein domains, as Mark is that no mechanism will ever explain the sonnet.

So, let’s say that if such a mechanism is credibly shown, I will consider my theory falsified.

You ask: but then, has the protein still dFSCI? The answer is, this time, really useless. The protein had dFSCI correctly assessed. With all the available knowledge, it exhibited dFSCI. If a non design explanation is found, this is and remains a false positive.

I hope the answer is clear enough.

That is what the explanation above, immediately after gpuccio’s question, was assuming. My guess is that the answer is “yes”. And if so, then the argument really is circular.

Well, I have not answered “yes”, but I would like to add that if I had, the argument would not have become “circular”, but certainly “weaker”.

dFSCI is a diagnostic procedure. We must assess its specificity by applying it as it is. Any future development that can come into existence can only affect our judgement on that evaluation.

I must remind you that we assess the specificity of dFSCI using strings whose origin is known. When we apply it to strings whose origin is not known, we can only “assume” that it will show the same specificity. IOWs, we are making an inference, not a deduction. There is no absolute certainty in an inference.

If future developments undermine the validity of our inference, we have to admit that our tool did not show, in the applied field, the same specificity we observed in the testing phase.

Just wanted to give my own answer to gpuccio’s question. I am saddened that all that gpuccio could make of my previous comment was that “Joe Felsenstein, I must say with great regret, is beyond any sense.”

That referred to your previous post. With all respect, I will maintain that judgement.

I am well aware of the limitations of my ability to explain things, but I have written textbooks, including the standard text on inference of phylogenies. Reviews of my writings usually call them “clear” even when I’d prefer to have them called “elegant” or “inspiring”. But “clear” is the adjective people use most often. I fancy my previous couple of comments to have been clear, and am sorry if gpuccio thinks that they are “beyond any sense”, or that I myself am “beyond any sense”.

We can have different opinions. This last comment of yours was very clear.

I’m also grateful for gpuccio’s conclusion in an earlier case that “At least you have avoided an explicit lie.” Gee, thanks.

All that is necessary to invalidate gpuccio’s claim regarding diabetes is one false positive. Same for dFSCI. Which is why I think he is reluctant to give a specific example. The state of research is shifting rapidly, and protein evolution is at the center of a lot of research.

Well, I am not affirming that glycemia has exactly 100% specificity. I made that example just to show that there is no circularity in that kind of statement. They can be right or wrong, but they are not circular.

I don’t really believe that hyperglycemia has 100% specificity for diabetes, but at that threshold (300 mg/dl, if I remember well) it must be pretty near.

This is another important point: high specificity can in many cases easily be obtained by setting the diagnostic threshold, but that implies having more false negatives. That’s exactly what we do in ID with CSI. That’s what I was suggesting with my example of a 100 mg/dl threshold for diabetes.
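The specificity/false-negative trade-off described above can be illustrated with a toy simulation. The Gaussian glycemia distributions for the “healthy” and “diabetic” groups are invented purely for the sake of the illustration:

```python
import random

# Toy simulation: raising a diagnostic threshold raises specificity
# (fewer false positives) at the cost of more false negatives.
# The Gaussian parameters below are invented for illustration only.
random.seed(0)

healthy  = [random.gauss(100, 25) for _ in range(10_000)]
diabetic = [random.gauss(250, 60) for _ in range(10_000)]

def rates(threshold):
    """Return (false positive rate, false negative rate) at this cutoff."""
    fp = sum(g >= threshold for g in healthy) / len(healthy)
    fn = sum(g < threshold for g in diabetic) / len(diabetic)
    return fp, fn

for t in (100, 200, 300):
    fp, fn = rates(t)
    print(f"threshold {t} mg/dl: false positives {fp:.3f}, false negatives {fn:.3f}")
```

Moving the cutoff from 100 up to 300 mg/dl drives false positives toward zero while the false-negative rate climbs, which is the trade ID makes when it tolerates false negatives to avoid false positives.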

It seems rather odd that gpuccio would cite medical diagnosis as the prototype for diagnosing design. It has not been too long since medical conditions were considered caused by spirits or were punishment for sin. Diagnosis is a poster child for leaky bucket classification.

Maybe because I am a medical doctor? However, I can probably share with you many criticisms about my category 🙂
_____

GP you have more than earned the recommendation of all concerned at UD as a first rate practitioner. And BTW, judging by differences in Luke’s diagnostic remarks [which distinguish natural and supernatural causes of similar complaints], it seems P is at least 2,000 years out of date attributing such diagnoses across the board to physicians. KF

How many times have I had to point out to you that if you opt for a blindly selected subspace, then you are looking at searching, not the original space blindly, of magnitude W, but its POWER SET, of magnitude 2^W. For the space of 1,000 bits, there are W = 1.07*10^301 possibilities. The search in the power set gives you the need to explain searching a secondary space so big that its log to base 2 is 1.07*10^301, expressed in decimal digits.

In short, you have substituted a much harder second order search. Compared to that, the original search is a conservative estimate.

The alternative is to already be in the target zone, which leaves the zone unexplained, or to have intelligent choice of the zone of search, which is what you do not want.

KF

PS: You are again missing out evidence of common design, and until you can soundly address the OOL-common ancestral cell problem on blind chance and mechanical necessity, you do not have even the root for the proposed tree of life. Forgot, a signal that is regular would most plainly be attributed to unknown natural cause as the contingency is low. What would be attributed as designed by the EF is a complex aperiodic and plainly functional signal. Remember, false negatives would be cases of design not detected on grounds that there is not sufficient indication of design. Better to toss the little ones back. Back to crashes, incidents and reports, sigh.
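For what it's worth, the figures in the comment above check out numerically. A quick Python sketch, verifying only the arithmetic, not the argument itself:

```python
# Quick check of the arithmetic only: a 1,000-bit configuration space has
# W = 2^1000 cells, and the log base 2 of the power set's size 2^W is W itself.
W = 2 ** 1000  # exact; Python integers are arbitrary precision

digits = str(W)
print(f"W = 2^1000 has {len(digits)} decimal digits")  # 302 digits
print(f"leading digits: {digits[:4]}")                 # 1071, i.e. ~1.07*10^301

# |power set| = 2^W, so log2(|power set|) = W: itself a 302-digit number,
# which is what "its log to base 2 is 1.07*10^301" amounts to.
```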

How many times have I had to point out to you that if you opt for a blindly selected subspace, then you are looking at searching, not the original space blindly, of magnitude W, but its POWER SET, of magnitude 2^W. For the space of 1,000 bits, there are W = 1.07*10^301 possibilities. The search in the power set gives you the need to explain searching a secondary space so big that its log to base 2 is 1.07*10^301, expressed in decimal digits.

Good thing I’m not opting for that then eh?

In short, you have substituted a much harder second order search. Compared to that, the original search is a conservative estimate.

Good thing I’m not making that substitution then.

The alternative is to already be in the target zone, which leaves the zone unexplained, or to have intelligent choice of the zone of search, which is what you do not want.

The first basic replicator wouldn’t be the first basic replicator if it weren’t already in a ‘target zone’, exactly. You don’t have to do a search for a zone of search!

Like I’ve said: as far as evolutionary theory is concerned, that first basic replicator could have ridden in on a meteor. Or fallen out of an alien visitor’s lunch box. I think those options just push the problem back but they are possibilities.

Whatever, first basic replicator that uses our genetic code appears. . . you’re on a big island of function (if there is more than one) and life starts covering it with life forms. Simple. No ‘islands of function’. Just universal common descent with modification.

How did the first basic replicator arise? I don’t know. I don’t think you have to search a huge configuration space (or its power set) to get there though. Some chemical bonds aren’t gonna happen. Not all configurations are ‘sampled’. Some bonds will build on themselves and each other. It’s a problem, I agree. People are working on it. They haven’t figured it out yet. But I disagree that it’s time to throw in the towel and say it couldn’t have happened via necessity. Or chance. We don’t know yet. I am not dodging the issue, I am saying we don’t know.

We don’t even know what that first basic replicator looked like. You envision it to be too complicated to have arisen via chance or necessity but how do you know that if you don’t know what it was?

The alternative to design is NOT random searches on huge configuration spaces. You can make that argument as long as you like but you’re not attacking evolutionary theory. That’s not what evolutionary theory is saying. As far as I know no one is making that argument for the very reasons you lay out.

Forgot, a signal that is regular would most plainly be attributed to unknown natural cause as the contingency is low. What would be attributed as designed by the EF is a complex aperiodic and plainly functional signal. Remember, false negatives would be cases of design not detected on grounds that there is not sufficient indication of design. Better to toss the little ones back. Back to crashes, incidents and reports, sigh.

Thank you for giving a direct and clear answer. I agree with you, and I left out “functional” intentionally, obviously.

I hope things are starting to calm down. We’re having a big upheaval here in England regarding a now deceased (rather famous) individual who apparently was a serial abuser and he was for decades without anyone getting his behaviour looked into. It’s gonna be a while before this works through.

Yes, you can. If you observe the effects, you can infer something was there. e.g., deer tracks.

You can’t infer to something if you’re not sure it was there…

We’re not inferring some thing. We’re inferring a cause known to be capable of producing the effect. So yes, you can infer the cause if you observe the effect. That’s how science works.

You can’t infer to something if you’re not sure it was there when there’s other theories which explain the phenomena which do not require that assumption.

What assumption? No one is making any assumption here.

You can’t infer to something if you’re not sure it was there when there’s other theories which explain the phenomena which do not require that assumption.

So if there’s no competing theory which explain the phenomena, you can in fact infer to something, even if you’re not sure it was there?

And what’s the competing theory for the system KF described, the system that must have been in place for your common descent theory to even be tenable?

And if there is none, we’re warranted in inferring design for OOL?

You can’t infer to something if you’re not sure it was there when there’s other theories which explain the phenomena which do not require that assumption.

Yes, you can. Then you’d just have two competing theories. But I can’t imagine what that other theory would look like if it too wasn’t operating on inference from effect to cause. Can you give an example?

I’m happy to draw inferences to causes known to be operating at the given time or reasonably likely to have been operating at the given time. As you say, that’s a common method of reasoning in many fields.

If the effect is there, then the cause is known to have been operating at the given time or reasonably likely to have been operating at the given time.

But only in ID are people inferring to a cause that has no independent proof of being in existence at the time in question.

Independent proof? That’s your standard? So in addition to being able to infer the cause from the effect you need independent proof? Proof of what?

How do you propose that we separate the cause from the effect such that we can establish independent proof of the cause?

You are being completely unreasonable. Once again, your intellectual dishonesty is showing through.

Your argument is based on a random search of a whole configuration space. And that’s NOT how new body plans are developed according to evolutionary theory so the argument is not applicable for anything after the first basic replicator as an argument against universal common descent with modification.

So evolutionary theory includes a theory about how body plans develop? Do tell. (Not by intelligent design doesn’t count as a theory.)

Is there a configuration space? Is there a random walk?

Your argument is based on a random search of a whole configuration space.

It’s not possible to search the whole space! So no, that’s not his argument.

…and so there’s no need or cause to search the whole configuration space then either.

Given random mutations, what is constraining the search to only a small subset of the configuration space?

Living systems don’t do random searches across whole configuration spaces to find new body plans.

Well, duh! Living systems already have a body plan. So they don’t need to go off looking for one.

Are you saying you believe in special creation? If not, where did those body plans come from?

I think the ‘functional’ configurations are much more likely than you’re guessing in your model.

Wishful thinking is not a substitute for evidence and arguments.

You’d certainly like them to be more likely. In fact you need them to be more likely. But what is your evidence that they are more likely? Please provide independent proof! You’re wishing it was so does not make it so.

Like I’ve said: as far as evolutionary theory is concerned, that first basic replicator could have ridden in on a meteor. Or fallen out of an alien visitor’s lunch box.

In which case there could have been multiple meteors and multiple aliens with lunchboxes, and life could have been seeded on earth at independent times and places, and evolutionary theory is perfectly consistent with that.

None that supports that the transformations required are even possible. Do try to stay focused.

So what’s your explanation for protein functional redundancy?

What does that have to do with what I said about the transformations required?

And for DNA functional redundancy? And for transposons? And for redundant pseudogenes? And for ERVs? All those things are consistent with universal common descent. And that’s only part of the genetic evidence.

How do you know that is evidence for universal common descent? What part of UCD mandates protein functional redundancy?

What about the bio-geographic evidence? Why are lemurs naturally endemic to one island? I know you think lots and lots of life was pre-coded so . . . where is that coding and how does it limit lemurs to one island?

Your position can’t explain lemurs. So perhaps you should stop with your “Gish Gallop” and focus on that.

And if you don’t have any idea on the number of mutations it takes, then you don’t have science.

Do you know how many mutations it takes?

The point is there isn’t any evidence that any amount of mutational accumulation can account for the transformations required.

What about my other, simple question: if a signal from space was detected that was on a constant frequency at a constant interval from a single location would it be a candidate for being SCI?

I would think 3) should be in the inference. If we see 1) and 2) we infer design because of 3) and 4)

If we observe 1) and 2) and it turns out that some necessity mechanism produced it, it does not stop having “a functional specification and High digital complexity linked to that specification”: a bacterial flagellum is still a specified functional thingy regardless of how it came to be that way. It doesn’t stop exhibiting dFSCI if natural selection did it. However dFSCI does stop being a design indicator if that is ever demonstrated.

It would either be that dFSCI is no longer a design indicator, or that dFSCI doesn’t exist. Because if you say that if a necessity mechanism produced it, it ain’t dFSCI even though it meets the criteria, then you do have a circular definition.

Yes, you can. If you observe the effects, you can infer something was there. e.g., deer tracks.

Sure, if you know there’s deer around at the time. Do you know there was a designer around when you’re inferring one?

You can’t infer to something if you’re not sure it was there…

We’re not inferring some thing. We’re inferring a cause known to be capable of producing the effect. So yes, you can infer the cause if you observe the effect. That’s how science works.

But you don’t know if there was a designer around at the pertinent time.

You can’t infer to something if you’re not sure it was there when there’s other theories which explain the phenomena which do not require that assumption.

What assumption? No one is making any assumption here.

You’re assuming there was a designer around with no independent evidence for one.

You can’t infer to something if you’re not sure it was there when there’s other theories which explain the phenomena which do not require that assumption.

So if there’s no competing theory which explain the phenomena, you can in fact infer to something, even if you’re not sure it was there?

And what’s the competing theory for the system KF described, the system that must have been in place for your common descent theory to even be tenable?

And if there is none, we’re warranted in inferring design for OOL?

If there’s no competing theory then you can try but it’s still just a hypothesis and one that needs more evidence.

The competing theory is the modern evolutionary synthesis.

You can’t infer to something if you’re not sure it was there when there’s other theories which explain the phenomena which do not require that assumption.

Yes, you can. Then you’d just have two competing theories. But I can’t imagine what that other theory would look like if it too wasn’t operating on inference from effect to cause. Can you give an example?

Of course you’d be operating on effect to cause but to a cause known to be present at the time!!

I’m happy to draw inferences to causes known to be operating at the given time or reasonably likely to have been operating at the given time. As you say, that’s a common method of reasoning in many fields.

If the effect is there, then the cause is known to have been operating at the given time or reasonably likely to have been operating at the given time.

Prove the cause exists AND is present at the given time if you want to compete with theories which don’t require special pleading.

But only in ID are people inferring to a cause that has no independent proof of being in existence at the time in question.

Independent proof? That’s your standard? So in addition to being able to infer the cause from the effect you need independent proof? Proof of what?

How do you propose that we separate the cause from the effect such that we can establish independent proof of the cause?

You are being completely unreasonable. Once again, your intellectual dishonesty is showing through.

Show me some physical evidence that a designer was present. Some artefacts. Some living quarters. Some lab equipment. Some documentation.

We separate the cause from the effect all the time. I’m not being unreasonable when you’re asking me to accept an undefined and unobserved designer who did something at some undefined time at some undefined place for some undefined reason.

Hey, Mung! I think there’s an alien spacecraft that’s shadowing the Voyager spacecraft which is exerting a small gravitational pull on it which explains why it’s travelling slightly slower than we expect it to. Would you buy that explanation? We know that kind of gravitational effect would work. We know how to build spacecraft. It’s a hypothesis.

Hey, Mung! I’ve detected a regularly occurring radio signal from a constant point in space at a fixed frequency. I think it’s an alien distress call, a homing beacon. It’s the kind of homing beacon our ships and craft send out. We can conceive of doing that kind of thing. It’s a hypothesis.

Erich von Daniken convinced a lot of people that the lines at Nazca were created by alien astronauts. We have astronauts. We can make spaceships and flying craft. It’s a hypothesis.

You just don’t get it. If your position could substantiate its claims then we wouldn’t be having this discussion as Newton’s Four Rules of Scientific Investigation say we do not add entities unnecessarily.

Also we do not know that humans designed Stonehenge. We may infer it because allegedly they were around when it was built, but that doesn’t mean they did it. And if it wasn’t for Stonehenge’s existence we wouldn’t think that the people of that island could construct such a thing.

No one has found any plans, nor documentation nor lab experiment.

Pertaining to independent evidence for a designer: the evidence for a designer from biology is independent from the evidence for a designer from cosmology, which is independent from the evidence for a designer in astronomy, which is independent from the evidence for a designer from physics, etc. (chemistry, geology).

Here you are once again being intellectually dishonest. If you can’t be trusted to be honest there’s not much point in debating you.

You wrote:

I’m happy to draw inferences to causes known to be operating at the given time or reasonably likely to have been operating at the given time.

You flip flop back and forth in order to immunize yourself against reason.

Given deer tracks, it’s reasonably likely there was a deer around at the time. Given your own words, this should be enough for you. But when it comes to ID, you want to change the rules.

Do you know there was a designer around when you’re inferring one?

Who said anything about inferring a designer?

Here’s what I wrote:

So yes, you can infer the cause if you observe the effect.

Do you disagree that a cause can be inferred by its effects? Have you ever seen gravity? How do you even know it exists?

But you don’t know if there was a designer around at the pertinent time.

So? I don’t need to know a designer was around. I look up in the sky and see a contrail. Do I need to know a jet was around at the time before I can make an inference as to what caused the contrail? Do I need to go find independent proof that a jet was around? The answer is no, I don’t. You know it, I know it, and everyone reading this thread knows it. Your requirements are bogus.

Prove the cause exists AND is present at the given time if you want to compete with theories which don’t require special pleading.

There’s no alternative theory that I’m aware of. Do you have one that you’re willing to put forward and defend? We all already know the answer to that question. So, again, intellectual dishonesty.

And as I said, even if there is a competing theory that doesn’t invalidate the inference we’ve made. Do you have a counter-argument?

You’re assuming there was a designer around with no independent evidence for one.

That’s simply false. I’m making no such assumption. So I don’t need independent evidence for some assumption I’m not making.

Show me some physical evidence that a designer was present. Some artefacts. Some living quarters. Some lab equipment. Some documentation.

Those things are not causes. So how on earth would they provide independent proof?

I’ve got no physical evidence that you exist. You’ve produced no artifacts. I’ve never seen your living quarters. I’ve never seen your computer or any documentation proving you exist or even have an internet account.

The elimination of obvious necessity mechanisms is necessary to eliminate compressible strings, or any other ordered output that can be explained by necessity. It is the first step in KF’s algorithm, and it is an essential part of Dembski’s explanatory filter.

My idea is that what can be explained by necessity is not complex. That’s why I put point 3 in the definition. KF puts it at the beginning. There is no difference.

This has nothing to do with the philosophical question that darwinists pose: but if one day a necessity mechanism were found…

For pseudo-random strings, that mechanism will never be found, because necessity cannot generate that kind of strings. And, as I have explained, if it were found it would falsify the whole dFSCI procedure.

The complexity in pseudo random strings is tied to the fact that they cannot be generated by any simple computation, and therefore a high number of bits is required to express the function.

I insist that a string of 500 heads does not exhibit dFSCI: it is highly compressible, and it can easily be generated in a natural system.

In the same way, a gene made of 300 identical nucleotides does not exhibit dFSCI: it can be easily generated in the lab, from a pool with only one nucleotide available.

So, necessity mechanisms must be excluded in the definition, because otherwise we are not sure of the complexity, and we cannot make the inference.
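The compressibility screen described above (the 500-heads string versus a pseudo-random one) can be sketched with an off-the-shelf compressor as a crude stand-in for algorithmic compressibility. Note that zlib is only a rough proxy, and the ratio for the two-letter pseudo-random string is inflated by the 8-bit character encoding:

```python
import random
import zlib

# Crude stand-in for the compressibility screen: strings that compress well
# are set aside as plausibly produced by necessity; strings that resist
# compression would pass to the next step of the assessment.

def compression_ratio(s: str) -> float:
    """Compressed size over original size; lower means more compressible."""
    data = s.encode()
    return len(zlib.compress(data, 9)) / len(data)

ordered = "H" * 500  # e.g. 500 heads in a row: highly compressible

random.seed(1)
pseudo_random = "".join(random.choice("HT") for _ in range(500))

print(f"ordered:       ratio {compression_ratio(ordered):.3f} -> excluded (necessity plausible)")
print(f"pseudo-random: ratio {compression_ratio(pseudo_random):.3f} -> resists compression")
```

The ordered string collapses to a few bytes, while the pseudo-random one stays much closer to its original size, which is the asymmetry the screen relies on.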

You’d certainly like them to be more likely. In fact you need them to be more likely. But what is your evidence that they are more likely? Please provide independent proof! You’re wishing it was so does not make it so.

Please answer a question or two, and then please provide a viable, consistent, coherent, parsimonious hypothesis instead of just bitching and moaning.

In which case there could have been multiple meteors and multiple aliens with lunchboxes, and life could have been seeded on earth at independent times and places, and evolutionary theory is perfectly consistent with that.

Yes. We know that. Now think about what that means for your argument.

It could have happened that way. The important thing is that a basic replicator got a foothold on earth.

You only infer design when RV and ‘necessity mechanisms’ have been ruled out. That will create some false negatives, but it’s better to have false negatives than false positives, as you have pointed out.

So every functional sequence falls into one of three categories:

1) simple enough to have been produced by RV (and, of course, design)
2) too complex for RV, but could have been produced by ‘necessity mechanisms’ (and, of course, design)
3) out of reach for RV and ‘necessity mechanisms’, so could only be produced by design

But what are you saying here? This is nonsense!

A thing is designed if, and only if, a conscious intelligent agent outputted his representation to the object to purposefully shape it! This is the definition of design.

A thing is not considered designed if it is “out of reach for RV and ‘necessity mechanisms’, so could only be produced by design”. What stupid definition is this?

A thing is designed if someone designs it. It is not designed if nobody has designed it. Period. Being designed has nothing to do with RV, necessity or anything else.

Design is only inferred if the tests for #1 and #2 are not satisfied, meaning we “fall through” to the default, which is #3 — design.

Absolutely not! Again, you must be mad. After all, you could not be a liar, and I will have to apologize.

In the tests, the origin (design or not design) is not inferred: it is known before the tests, because the history of the strings is known.

So again, are you simply mad?

Now look at your criteria for establishing the presence of dFSCI:

a) High functional information in the string (excludes RV as an explanation)

b) No known necessity mechanism that can explain the string (excludes necessity explanation)

So dFSCI is attributed to a sequence if it can’t be explained by RV or necessity mechanisms.

That’s correct. But it must also be functional; let’s remember that too.

But we saw earlier that design is attributed to a sequence if it can’t be explained by RV or necessity mechanisms.

No. Absolutely not. You said those silly things, because apparently you have understood absolutely nothing of the discussion, and still go on patronizing everyone! I don’t know if you are mad, but you are certainly very arrogant.

It’s only you, in your confused mind, that attribute design in the same way that I attribute dFSCI. Of course you conclude that it is circular!

This is a farce. The origin of a string is a fact that can be observed. In the testing phase, we only use strings whose origin has been observed to test the specificity of dFSCI (obviously, in blind).

It’s only in the application of dFSCI to strings whose origin is not known that we make an inference: from the property of dFSCI (observed in the object) to an inference of a design origin (inference of a fact). No circularity. And if you really don’t understand it now, what should I think of you?

In what sense was the protein correctly assessed? You thought there was no necessity mechanism and it turns out there was! How can this be a correct assessment?

Please, read again my post #20. And read all!

“I will be more clear. But I must say, before going on, what I consider a correct assessment of dFSCI:

a) The function must be defined explicitly, and must be objectively measurable.

b) The threshold must be appropriate for the system being considered, and for its probabilistic resources.

c) The target space/search space ratio must be approximated as well as possible, and must be credible.

d) All strings that exhibit high regularity and compressibility should not be considered as exhibiting dFSCI, just to be cautious. Those outputs can very likely be explained by necessity mechanisms.

e) For all strings whose formal appearance is of the “pseudo-random” type, with no apparent order or regularity, we can usually infer dFSCI with safety, if all other conditions are present. However, a thorough consideration of the laws that act in the system must be done, and we must be reasonably sure that those laws have no special connection with the specific string we are considering.

If all these conditions are well satisfied, I consider the assessment of dFSCI as correct. As you can see, there is a lot of work to be done to assess a property that you label as “redundant” and “unnecessary”.

Now, I do believe that if all those properties are satisfied, no future explanation will ever be found for that dFSCI, except obviously design. That is not a definition, nor an inference, nor a fact, just to be clear. Let’s call it “a prediction”. Empirical experience will confirm that prediction, or will falsify it.

It is not so strange, after all. Even Mark has admitted that he does not really believe that any future necessity mechanism will ever be found to explain that sonnet. Let’s say that I am as sure that no mechanism will ever explain protein domains, as Mark is that no mechanism will ever explain the sonnet.

So, let’s say that if such a mechanism is credibly shown, I will consider my theory falsified.”

This is the sense in which the protein is correctly assessed: we must follow the procedures as I have outlined them.

We are confident that, if those procedures are followed, no necessity mechanism will be discovered in the future that can explain the string that was assessed as exhibiting dFSCI. Obviously, if that should happen, it is a falsification of the concept or the procedure (or both).

And for DNA functional redundancy? And for transposons? And for redundant pseudogenes? And for ERVs? All those things are consistent with universal common descent. And that’s only part of the genetic evidence.

How do you know that is evidence for universal common descent? What part of UCD mandates protein functional redundancy?

I’m not about to spend post after post teaching you basic genetics. If you really want to know go do some reading.

What about the bio-geographic evidence? Why are lemurs naturally endemic to one island? I know you think lots and lots of life was pre-coded so . . . where is that coding and how does it limit lemurs to one island?

Your position can’t explain lemurs. So perhaps you should stop with your “Gish Gallop” and focus on that.

Absolutely my position can explain lemurs. A population of primates migrated to Madagascar and evolved in isolation from the mainland from which they were separated. Easy.

And if you don’t have any idea on the number of mutations it takes, then you don’t have science.

Do you know how many mutations it takes?

The point is there isn’t any evidence that any amount of mutational accumulation can account for the transformations required.

Hang on. You raised a challenge, I turned it back on you, and you punted.

You don’t have an answer either Joe. Best to just admit it really.

What about my other, simple question: if a signal from space was detected that was on a constant frequency at a constant interval from a single location would it be a candidate for being SCI?

If it was a nice sine wave, it would be a good candidate.

Thank you for that clear and direct answer. It conflicts with KF’s but reasonable people disagree at times.

I am not sure I understand your points, which seem to derive from discussions you had with others, and which I had not the time to follow.

Your question:

“Is “dFSCI” a characteristic of “information” or is it a characteristic of its source?”

has no meaning for me, and no correspondence in my terminology. For me, dFSCI is a property of the object. Empirically, it is found only in designed objects. If you want to call that “a source”, be my guest, but I can’t see why.

Least of all can I understand what you mean by being “independent of the source”. Could you explain, please?

Finally, I would say that dFSCI is a metric only of itself. It is a categorized assessment of the information necessary to express a function.

kairosfocus and Upright BiPed both claim “all possible strings of length x”. It would be interesting to see if gpuccio agrees with that.

Yes, I agree, but it is an approximation. In principle, shorter or longer strings can express the function. But we assume that length x represents well enough the target space/search space ratio. To evaluate all possible strings, of any length, that can express the function as a target space, and all possible strings of any length as search space, would be rather intractable.

Only “a” depends on the string.

No. The function, the target space and the search space all depend on the string. The threshold essentially depends on the probabilistic resources of the system (the time span multiplied by the number of states that can be tested per unit of time).
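The notion of “probabilistic resources” above can be sketched numerically. This is a hypothetical illustration only (the function name and inputs are mine, not part of the dFSCI definition): the resources are the total number of states the system can test, and the corresponding threshold in bits is the log2 of that count.

```python
import math

def threshold_bits(time_span, states_per_unit_time):
    """Illustrative sketch: probabilistic resources = total states testable,
    i.e. time span multiplied by states tested per unit of time.
    The complexity threshold in bits is the log2 of that count."""
    total_states = time_span * states_per_unit_time
    return math.log2(total_states)

# e.g. a system testing 2**10 states per second for 2**10 seconds
# can test 2**20 states in all, giving a 20-bit threshold
```

A string whose functional complexity exceeds this bit count could not plausibly be found by an unguided search within those resources, which is the role the threshold plays in the argument.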

Again, I don’t understand what you mean with “the source”.

So if two strings, both of them complex, specific and functional enough to qualify as “dFSCI” before their origins are specified, with the only difference being one’s source was a “designer” and the second was a result of a “necessity mechanism”, they would both qualify as having “dFSCI”, even though only one was the result of an “intelligent designer”.

They would both be assessed as having dFSCI. The first would be a true positive. The second (which has never happened) would be a false positive.

dFSCI is assessed from the object (and the system). If the two objects are the same, and they appear in the same system, the assessment of dFSCI must necessarily be the same for both. If independent facts can attest a different origin for the two objects, that would imply what I have said: one is a true positive, the other a false positive.

You just don’t get it. If your position could substantiate its claims then we wouldn’t be having this discussion as Newton’s Four Rules of Scientific Investigation say we do not add entities unnecessarily.

We can quit having this discussion any time. I’m happy to admit I’m right whenever you are.

Also we do not know that humans designed Stonehenge. We may infer it because allegedly they were around when it was built, but that doesn’t mean they did it. And if it wasn’t for Stonehenge’s existence we wouldn’t think that the people of that island could construct such a thing.

Are you serious? Really?

Joe, you really, really need to read up on archaeology and stone circles. Seriously. Before you embarrass yourself.

No one has found any plans, nor documentation nor lab experiment.

Pertaining to independent evidence for a designer: the evidence for a designer from biology is independent from the evidence for a designer from cosmology, which is independent from the evidence for a designer in astronomy, which is independent from the evidence for a designer from physics, etc. (chemistry, geology).

But they all suffer from the same common fallacy: inferring a cause which has not been proven to have been in existence at the time.

Joe! Guess what? I think alien astronauts designed the statues on Easter Island. I haven’t got any evidence of aliens being around at the time except for these big statues which I can’t explain. And I can’t say for sure that the local humans weren’t able to do it, but I don’t personally know how they did it.

What do you think? If we can’t explain it within so many years then can we say it’s ancient aliens?

Yes, how many times have we all asked them to define ID without resorting to a negative position on evolution.

Yet another great example of the sorts of things we have to deal with. ID claims to offer a better explanation for some features of living things. A better explanation than what? I’ll give you one guess.

So of course ID has to take H into account. But P(H) is non-negative. Even keiths knows that!

We give Darwinian evolution credit for what it can do. It just doesn’t appear that it’s able to do all that much.

Now, just to see if you can get back on topic, in what way has gpuccio defined dFSCI in such a way as to resort to a negative position on evolution?

I keep forgetting. What is the procedure for correctly calculating or assessing dFSCI? What test or procedure rules out the possibility that a protein domain started as a “random” string with a weak function and became optimized in just a handful of steps?

No problem, I keep reminding you of it. It’s not me who has to rule out an imaginary mechanism. It’s you who have to show a real mechanism. Show the random string, demonstrate that it is common enough in a random library, and give us the naturally selectable intermediates, each giving a reproductive advantage in some living system.

Do you disagree that a cause can be inferred by its effects? Have you ever seen gravity? How do you even know it exists?

Ah well, gravity abides by a set law, it never varies from its rule. What’s the rule, law for your designer? What limitations does ‘he’ respect? How can we define ‘him’?

You want to do science? Then do science. Define a law or formula or criteria that your designer is limited by? Heck, Mung, just give us an idea of what designer you are talking about ’cause you’ve not been particularly forthcoming in what your hypothesis is to be honest. What kind of designer are you talking about?

Is it really Jerad answering the question required to comment? Just wondering if he is human and what proof he might offer. The iPhone lady does a better job of conversing and exceeds his powers of reasoning, I think.
Sorry I can’t award a smiley to him on this either.
And thank you, Mung, for your ability to showcase the inability to reason of such an ideologue.

So, Jerad, you’ve never seen gravity. And yet you believe that it exists and that it’s controlled by some invisible set law, which you’ve also never seen. And can I assume then that from the existence of this invisible set law that you also infer an invisible lawgiver?

And your independent proof of this invisible lawgiver is?

What’s the rule, law for your designer?

What is the rule/law for your invisible lawgiver? And where did that rule/law come from? Is it invisible lawgivers all the way down for you?

Stonehenge is different than plain ole stone circles. And if it didn’t exist we would not think people from thousands of years ago were capable of building it.

Also if we had proof the designer was around then we wouldn’t have a design inference, design would be a given. IOW you are proving that you don’t understand how science works. We infer a designer existed because we observe design in nature. And seeing that natural processes only exist in nature, they cannot account for its origin, which science says it had. So we infer it was something other than nature that gave us nature.

And in the end if your position had any evidence to support we wouldn’t be talking about proving a designer.

How many mutations to get a mammalian inner ear from a reptilian jaw? Any math, any formula, equation, landscape function algorithm, we can use to tell us?

Consider pulsars – stellar objects that flash light and radio waves into space with impressive regularity. Pulsars were briefly tagged with the moniker LGM (Little Green Men) upon their discovery in 1967. Of course, these little men didn’t have much to say. Regular pulses don’t convey any information–no more than the ticking of a clock. But the real kicker is something else: inefficiency. Pulsars flash over the entire spectrum. No matter where you tune your radio telescope, the pulsar can be heard. That’s bad design, because if the pulses were intended to convey some sort of message, it would be enormously more efficient (in terms of energy costs) to confine the signal to a very narrow band. Even the most efficient natural radio emitters, interstellar clouds of gas known as masers, are profligate. Their steady signals splash over hundreds of times more radio band than the type of transmissions sought by SETI.

No constant frequency with pulsars- they blast the spectrum.

If SETI were to announce that we’re not alone because it had detected a signal, it would be on the basis of artificiality. An endless, sinusoidal signal – a dead simple tone – is not complex; it’s artificial. Such a tone just doesn’t seem to be generated by natural astrophysical processes. In addition, and unlike other radio emissions produced by the cosmos, such a signal is devoid of the appendages and inefficiencies nature always seems to add – for example, DNA’s junk and redundancy.

And although Seth is mistaken, ID is looking for artificiality only. It’s just that complex specified information is artificial. So if we receive it we wouldn’t say- “Oh that ain’t no simple sine wave, so even though it matches everything else we are looking fer, because it is too complex, it ain’t from ET”

“dFSCI” relies on the premise that evolution cannot create the specific functional complexity required for living things, i.e., “evolution can’t do it”.

No, it doesn’t.

1. dFSCI as it is being presented and argued by gpuccio is not about OOL. He takes life as a given, just like evolution does.

2. When it comes to the origin of life, evolution cannot explain it, because evolution requires living things. That has nothing to do with dFSCI.

3. dFSCI relies on how it is defined and measured, not on some premise about what evolution can or cannot do. It’s an open question as to whether or not evolution can create the specific functional complexity required for living things, whatever specific instance of dFSCI you care to talk about.

So instead of arguing about how circular the definition of dFSCI is, why not pin gpuccio down on something he claims exhibits dFSCI, come to some agreement about whether, given his definition of dFSCI, it does in fact exhibit dFSCI, and if you get that far, demonstrate that it is a false positive because it did in fact come about by Darwinian means.

Yes, how many times have we all asked them to define ID without resorting to a negative position on evolution.

Intelligent Design is the (detection and) study of design in nature.- Wm Dembski

And it just so happens that even if evolutionism did not exist, to get to the design inference we would still have to eliminate necessity and chance. And after doing that we would still have to see if the design criteria are met.

That said, the DESIGN INFERENCE depends/relies on the premise that blind and undirected chemical processes cannot create the specific functional complexity required for living things.

I would be willing to submit a 6000 word essay for posting at UD but I need to know that KF is also willing to submit a similar essay justifying empirical evidence of the “designer scheme for origins” from OOL on.

lol. Are you sure? kf has an entire web site devoted to the topic. So, get to work on your 6,000 word essay!

Toronto:

Seriously, a design implies a designer.

ok, so?

Toronto:

You simply have to assume a designer existed if you claim you have something that was designed.

What is it with you people over there at TSZ and definitions?

onlooker doesn’t understand the meaning of arbitrary.

keiths doesn’t understand the meaning of not compatible.

mark doesn’t understand what constitutes a circular definition.

And you don’t understand the meaning of the word assume.

You’ve just said that given some design, a designer is implied. Do you know what that means? Given a design, by implication, a designer. That’s not an assumption.

I keep forgetting. What is the procedure for correctly calculating or assessing dFSCI? What test or procedure rules out the possibility that a protein domain started as a “random” string with a weak function and became optimized in just a handful of steps?

Ghost of charles darwin! Do you folks never come up with anything new?

Now if you could tell us how much initial functional specificity one of her random strings had, how much functional specificity one of her final strings had, and how much gain in functional specificity there was in going from her initial randomly generated string to one that meets her goal, and then show something similar from nature, we might believe that her GA has some relationship to something in nature.

Didn’t Lenski or Szostak do something like that using intelligent selection? It would be interesting to compare to those results as well.

There is no equivalency, and the attempt to pretend that you cannot simply go to the IOSE intro-summary page (as I have linked from the very beginning), or follow the longstanding reference note linked through my handle in every post I have ever made at UD, is transparently insincere. It is a patent attempt to find any excuse not to provide a reasonable, empirically grounded case for the blind watchmaker thesis materialist model of origins.

The offer, as long since made (over a month ago), is made in good faith, is a more than fair offer, and stands on its own terms.

Remember, onlookers, every tub must stand on its own bottom.

So, Toronto, I suggest you provide your essay. And if you need more than 6,000 words, that would be fine within reason; noting that there is room for onward links.

G’day

GEM of TKI

PS: Mung, the summary was originally much shorter; it has grown as I have had to respond to the twists and turns of the darwinist mindset and its incredible ability to strawmannise.

So, Jerad, you’ve never seen gravity. And yet you believe that it exists and that it’s controlled by some invisible set law, which you’ve also never seen. And can I assume then that from the existence of this invisible set law that you also infer an invisible lawgiver?

And your independent proof of this invisible lawgiver is?

Yeah, I think Newton figured out the law of universal gravitation. I recommend it as it beats the alternate theory that the earth sucks.

What’s the rule, law for your designer?

What is the rule/law for your invisible lawgiver? And where did that rule/law come from? Is it invisible lawgivers all the way down for you?

Stonehenge is different than plain ole stone circles. And if it didn’t exist we would not think people from thousands of years ago were capable of building it.

I guess you’ve never seen the circle at Avebury. Or some of the ones in Scotland.

Also if we had proof the designer was around then we wouldn’t have a design inference, design would be a given. IOW you are proving that you don’t understand how science works. We infer a designer existed because we observe design in nature. And seeing that natural processes only exist in nature, they cannot account for its origin, which science says it had. So we infer it was something other than nature that gave us nature.

Natural processes only exist in nature, I like that one. So we infer it was something other than nature that gave us nature is good too.

And in the end if your position had any evidence to support we wouldn’t be talking about proving a designer.

I’m thinking I’m gonna stop talking about it pretty soon actually.

How many mutations to get a mammalian inner ear from a reptilian jaw? Any math, any formula, equation, landscape function algorithm, we can use to tell us?

The theory of Gravity, like many other ridiculous theories and notions, is a hoax that has been forced on the good and decent Republicans and repressed-Republicans of this nation by Democrats, New York intellectuals, and, of course, their Bear allies. Gravity, like evolution, has damaged the American way of life so badly, God sent the scientists who proved Intelligent Design, to teach everyone of the wonderful science of God’s Intelligent Falling.
As of June 19, 2007, Stephen takes back everything he has ever said about gravity and is now willing to entertain Isaac Newton’s theories, including the notion that particles of matter are attracted to one another in proportion to their mass, and not because they’re pulled together by angels.
It is not yet known why the Bears have perpetrated this hoax on the world, but it has something to do with keeping the people of the world down.

Jerad: A clean sustained sine wave source — const amplitude, phase steadily advances with time, no distortion of consequence, long duration — for radio would be quite specific and credibly functional, given issues like the difficulty of getting so narrow a bandwidth, ideally a line. Such a source is very hard — read that, complex — to do; natural oscillations strongly tend to be damped, or to not be clean — saturation effects, crossover distortion, intermod effects etc. Even a laser is not a clean sine wave source — we talk of a 50 nm bandwidth etc. And our lab sources have some harmonic distortion and often a bit of mixing due to nonlinearities. Pulsars are just that, pulsed [thus not clean sine sources], though quite steadily periodic. KF
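KF’s point above, that a clean narrowband sine is hard to produce naturally, can be illustrated with a toy spectral test. This is a hypothetical sketch (the function, signal lengths, and thresholds are my own illustrative choices, not any actual SETI procedure): a pure tone concentrates nearly all of its power in one DFT bin, while broadband noise spreads its power across the whole spectrum.

```python
import cmath
import math
import random

def fractional_power_in_peak_bin(signal):
    """Fraction of total spectral power in the single strongest DFT bin
    (DC excluded). A clean sine approaches 1.0; broadband noise stays low.
    Naive O(N^2) DFT, for illustration only."""
    n = len(signal)
    powers = []
    for k in range(1, n // 2):  # skip DC and the mirrored half
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        powers.append(abs(s) ** 2)
    return max(powers) / sum(powers)

n = 256
# clean tone at exactly 32 cycles per record: power lands in one bin
sine = [math.sin(2 * math.pi * 32 * t / n) for t in range(n)]
# broadband noise: power is spread over all bins
rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(n)]
```

On this toy data the sine’s peak-bin fraction is essentially 1, and the noise’s is small, which is the sense in which a sustained sine wave is a strikingly “narrow” and hence artificial-looking signal.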

>> . . . This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being. And if the fixed stars are the centres of other like systems, these, being formed by the like wise counsel, must be all subject to the dominion of One; especially since the light of the fixed stars is of the same nature with the light of the sun, and from every system light passes into all the other systems: and lest the systems of the fixed stars should, by their gravity, fall on each other mutually, he hath placed those systems at immense distances one from another.

This Being governs all things, not as the soul of the world, but as Lord over all; and on account of his dominion he is wont to be called Lord God pantokrator , or Universal Ruler; for God is a relative word, and has a respect to servants; and Deity is the dominion of God not over his own body, as those imagine who fancy God to be the soul of the world, but over servants. The Supreme God is a Being eternal, infinite, absolutely perfect; but a being, however perfect, without dominion, cannot be said to be Lord God; for we say, my God, your God, the God of Israel, the God of Gods, and Lord of Lords; but we do not say, my Eternal, your Eternal, the Eternal of Israel, the Eternal of Gods; we do not say, my Infinite, or my Perfect: these are titles which have no respect to servants. The word God usually signifies Lord; but every lord is not a God. It is the dominion of a spiritual being which constitutes a God: a true, supreme, or imaginary dominion makes a true, supreme, or imaginary God. And from his true dominion it follows that the true God is a living, intelligent, and powerful Being; and, from his other perfections, that he is supreme, or most perfect. He is eternal and infinite, omnipotent and omniscient; that is, his duration reaches from eternity to eternity; his presence from infinity to infinity; he governs all things, and knows all things that are or can be done. He is not eternity or infinity, but eternal and infinite; he is not duration or space, but he endures and is present. He endures for ever, and is every where present; and by existing always and every where, he constitutes duration and space. Since every particle of space is always, and every indivisible moment of duration is every where, certainly the Maker and Lord of all things cannot be never and no where. Every soul that has perception is, though in different times and in different organs of sense and motion, still the same indivisible person. 
There are given successive parts in duration, co-existent parts in space, but neither the one nor the other in the person of a man, or his thinking principle; and much less can they be found in the thinking substance of God. Every man, so far as he is a thing that has perception, is one and the same man during his whole life, in all and each of his organs of sense. God is the same God, always and every where. He is omnipresent not virtually only, but also substantially; for virtue cannot subsist without substance. In him are all things contained and moved [i.e. cites Ac 17, where Paul evidently cites Cleanthes]; yet neither affects the other: God suffers nothing from the motion of bodies; bodies find no resistance from the omnipresence of God. It is allowed by all that the Supreme God exists necessarily; and by the same necessity he exists always, and every where. [i.e accepts the cosmological argument to God.] Whence also he is all similar, all eye, all ear, all brain, all arm, all power to perceive, to understand, and to act; but in a manner not at all human, in a manner not at all corporeal, in a manner utterly unknown to us. As a blind man has no idea of colours, so have we no idea of the manner by which the all-wise God perceives and understands all things. He is utterly void of all body and bodily figure, and can therefore neither be seen, nor heard, or touched; nor ought he to be worshipped under the representation of any corporeal thing. [Cites Exod 20.] We have ideas of his attributes, but what the real substance of any thing is we know not. In bodies, we see only their figures and colours, we hear only the sounds, we touch only their outward surfaces, we smell only the smells, and taste the savours; but their inward substances are not to be known either by our senses, or by any reflex act of our minds: much less, then, have we any idea of the substance of God.
We know him only by his most wise and excellent contrivances of things, and final cause [i.e from his designs]: we admire him for his perfections; but we reverence and adore him on account of his dominion: for we adore him as his servants; and a god without dominion, providence, and final causes, is nothing else but Fate and Nature. Blind metaphysical necessity, which is certainly the same always and every where, could produce no variety of things. [i.e necessity does not produce contingency] All that diversity of natural things which we find suited to different times and places could arise from nothing but the ideas and will of a Being necessarily existing. [That is, implicitly rejects chance, Plato’s third alternative and explicitly infers to the Designer of the Cosmos.] But, by way of allegory, God is said to see, to speak, to laugh, to love, to hate, to desire, to give, to receive, to rejoice, to be angry, to fight, to frame, to work, to build; for all our notions of God are taken from. the ways of mankind by a certain similitude, which, though not perfect, has some likeness, however. And thus much concerning God; to discourse of whom from the appearances of things, does certainly belong to Natural Philosophy. >>

____________

This is of course the major work that presented the theory of gravity.

As in, the interferer — at many levels — does not know what he is talking about.

Thank you for your answer. However you are right, I should have written, to be more precise:

“3) No known necessity mechanism that can explain that apparent complexity”

Things work more or less this way:

a) we see that the string is functional

b) we see that it is long enough to have high total complexity (IOWs, we compute the search space)

c) we approximate the target space, and compute the functional complexity as the ratio target space/search space. If the functional complexity is high enough, we conclude that the string is potentially functionally complex.

d) Finally, before assessing dFSCI, we stop a moment to be sure that the string has no special regularity or order suggesting a necessity origin. That could probably be enough, but just because we are methodologically sound, we do our best to be sure that nobody has offered any convincing and detailed necessity, or mixed (RV + necessity), explanation for that string. Then we assess that the string exhibits dFSCI.

The final judgement, if all the procedure has been correctly followed, is a correct diagnosis for the presence of dFSCI in that object. And it brings us to a design inference.
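The a) to d) steps above can be sketched as a short routine. Everything here is an illustrative assumption on my part, not gpuccio’s code: the boolean inputs stand in for the empirical judgments of steps a) and d), and the 500-bit default threshold is a commonly cited figure, not part of the definition.

```python
import math

def assess_dFSCI(target_space, search_space, is_functional,
                 known_necessity_mechanism, threshold_bits=500):
    """Hypothetical sketch of the a)-d) assessment procedure."""
    # a) the string must be observed to be functional
    if not is_functional:
        return False
    # b)-c) functional complexity in bits: -log2(target space / search space)
    functional_complexity = -math.log2(target_space / search_space)
    if functional_complexity < threshold_bits:
        return False
    # d) no known necessity (or mixed RV + necessity) mechanism may explain it
    if known_necessity_mechanism:
        return False
    return True
```

For example, a functional string with a target space of 1 in a search space of 2^600 clears a 500-bit threshold, while one in a search space of 2^100 does not; and any string with a known necessity explanation fails step d) regardless of its complexity.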

As already explained, new facts can confirm it or falsify it. The whole utility of the dFSCI procedure for design inference in the biological context, however, relies on its extremely high specificity. Anything that falsifies that specificity, therefore, can be considered a powerful falsification of the empirical utility of the procedure.

IOWs, the design inference by dFSCI is a scientific theory that complies perfectly with the classic Popperian requirement.

b) In principle, a new and completely unexpected necessity mechanism could be found that explains the observed string, either by itself, or by lowering the probabilistic barriers for RV.

Both things, IMO, will never happen for a string where dFSCI has been correctly assessed following my procedure. But that is just my conviction, not a logical necessity.

However, if you accept these reservations, I essentially agree with this statement of yours: dFSCI is conceived and defined with the explicit purpose of credibly, empirically rejecting RV and necessity as an origin of the observed string. That’s exactly its purpose.

I object, however, to the term “unguided evolution” in the definition. The definition makes no use of the term “unguided evolution”. It just refers to the much clearer terms of RV and necessity mechanism. So, please, stick to those clear concepts in your argumentation, whatever it is. RV and necessity are universal concepts in science. Evolution is an ambiguous concept, and the term “unguided” seems to refer to a designer, and is IMO purposefully inserted by you into my definition to suggest circularity.

I have never used those terms in my definition. So, please, refer to my definition and to nothing else.

So, let’s just see if the statement is really circular or not:

“Saying that RV and necessity cannot produce dFSCI is what we certainly expect in empirical observations, because dFSCI was defined exactly to get that result. So, if our work in defining dFSCI was good, we should obtain exactly that result, not because it is logically necessary, but because it is extremely likely empirically.”

This is it. It is still not logically circular. It just means that dFSCI does what it was defined to do. If you agree, we can call that “empirical consistency” of the dFSCI definition. And still that has nothing to do with the design inference, and its supposed circularity.

If the answer is (b), then “no necessity mechanism” is not part of the definition of dFSCI, because you have found something with a necessity mechanism and you are still calling it dFSCI

The answer is definitely b). And indeed, I have never made the strange requirement you and others seem to always attribute to me. I have never said that no necessity mechanism must exist in order to assess that an object has dFSCI. I have always said that we must verify that no known necessity mechanism exists that can explain that string. I don’t know why all of you seem to forget that simple word: “known”. Which is an obvious recognition that we cannot ignore what is known. But that word has always been in my definition, for years, because obviously otherwise I could never assess dFSCI for anything, since it is in principle impossible to exclude any new, unexpected necessity mechanism. It would be like saying that nobody could have reasoned scientifically in past centuries without first excluding possible effects of quantum mechanics.

PS. You often write as if you believe that what you say is perfectly clear and obvious, and therefore opponents are being stupid or dishonest. I promise you it is far from clear. Perhaps Joe’s confusion will convince you of this.

Neither of those is really true. I may believe that what I wrote is often clear, but I am aware that sometimes it is not. I make errors, and many times I don’t express myself clearly enough. But I am always available to clarify, if what is not clear is clearly pointed out to me.

The accusation of being stupid or dishonest, which, as you should know, is not my usual way to discuss, was made, with great personal unease, after a situation lasting days and days, where I had repeatedly made some fundamental points, maybe sometimes less clearly, but certainly other times very clearly, and practically all of you were still evading them, insisting on either just repeating wrong statements, clearly contradicted by my points, or sticking to other minor and non-pertinent issues. This is not a fair way to communicate, and is very frustrating. That’s why I made the accusation. Perfectly justified.

The most important point of all is that a design origin is a fact, and cannot be implied in logical circularity.

I paste here my definition of design and of designed process:

“a) Design is the act by which conscious intelligent beings, such as humans, represent some intelligent form and purposefully output that form into some material system. We call the conscious intelligent being “designer”, and the act by which the conscious representation “models” the material system “design”. We call the material system, after the design, a “designed object”.”

Post number 5, at the very beginning of all this discussion. September 17, one month ago. And you can find it anywhere in my past posts. I have never changed it, in years.

So, to have a design origin means to comply with that definition. It has nothing to do with RV. It has nothing to do with necessity. It has nothing to do with evolution. It has nothing to do with dFSCI.

Therefore, it is simply impossible that the dFSCI procedure has any circularity at all, when used empirically to infer a design origin.

It’s as simple as that. Please, answer! Is that right, or not? And if not, why?

For comparison, I quote here the circular example kindly offered by Zachriel:

Circular reasoning: ”Wellington is in New Zealand. Therefore, Wellington is in New Zealand.” — Douglas Walton

My reasoning:

a) Something has a design origin if its form comes from the conscious representations of a conscious agent.

b) Designed objects seem to exhibit, sometimes but not always, a singular property that is not observed in non-designed objects (an empirical observation: we don’t really know why that is the case).

c) We try, by observation and reasoning, to capture the essence of that property, so that we can recognize it in objects even when we are not aware of their origin. We call the assessment of that property dFSCI, and we give explicit rules to assess its presence in an object.

d) We verify that the definition works: it can detect those strings that truly had their origin in a design process, and gives no false positives.

e) Therefore, we assume that dFSCI can be used as a credible tool to diagnose a design origin in objects whose origin is not known.

It’s simple. It is not circular. It is perfectly valid scientific methodology.

Could you please explain clearly, if you still don’t want to admit that the dFSCI procedure is not circular, where and why my reasoning has anything in common with the circular reasoning offered by Zachriel?

If the source of the information in the string, is a conscious designer, then I believe you claim the string would have “dFSCI”, provided it meets all complexity and functionality requirements.

If you have a second, completely different string which also meets all complexity and functionality requirements for a system it is part of, but the source of the information is a “necessity mechanism”, does that string have the attribute “dFSCI” also simply for being complex and functional enough?

The only difference here is the generator of the information, i.e. the source.

If “dFSCI” is only applied in cases of design, then the final determination is the source that generated the information, not the “specific functional complex information” in the string which still in both cases fulfills its functionality.

I really believe, ever more, that you at TSZ don’t read what I write in answer to you. From my post #45 here, to you:

“They would both be assessed as having dFSCI. The first would be a true positive. The second (which has never happened) would be a false positive.

dFSCI is assessed from the object (and the system). If the two objects are the same, and they appear in the same system, the assessment of dFSCI must necessarily be the same for both. If independent facts can attest a different origin for the two objects, that would imply what I have said: one is a true positive, the other a false positive.”

What is not clear in that?

So, does the label, “dFSCI” only apply depending on the originator of the information?

No. Absolutely not.

I’m trying here to see if this is what you mean by false positive where “dFSCI” is originally concluded, but then changed to “NOT dFSCI” if the originator is not design.

No. A false positive means that the object correctly exhibited dFSCI, but was not designed. Our “gold standard” is the true origin of the string. Our diagnostic tool, the one being tested, is dFSCI. So, to be clear, at the cost of being pedantic:

1) True positives are those objects that are correctly assessed as exhibiting dFSCI AND had their origin in a design process.

2) False positives are those objects that are correctly assessed as exhibiting dFSCI AND did not have their origin in a design process.

3) False negatives are those objects that are correctly assessed as not exhibiting dFSCI AND had their origin in a design process.

4) True negatives are those objects that are correctly assessed as not exhibiting dFSCI AND did not have their origin in a design process.

It’s a standard two-by-two table for computing sensitivity and specificity. Specificity is TN / (TN + FP): the proportion of non-designed objects correctly assessed as not exhibiting dFSCI.
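For concreteness, the standard two-by-two bookkeeping can be sketched in a few lines of Python; the counts below are entirely hypothetical, purely to show how sensitivity and specificity fall out of the four cells.

```python
# Standard 2x2 diagnostic-test arithmetic, applied to the dFSCI framing.
# All counts are hypothetical, chosen only for illustration.
TP = 98   # exhibits dFSCI AND designed
FP = 0    # exhibits dFSCI AND not designed
FN = 40   # no dFSCI AND designed (the tolerated false negatives)
TN = 862  # no dFSCI AND not designed

sensitivity = TP / (TP + FN)  # share of designed objects that get flagged
specificity = TN / (TN + FP)  # share of non-designed objects cleared

print(f"sensitivity = {sensitivity:.3f}")  # 98/138 ≈ 0.710
print(f"specificity = {specificity:.3f}")  # 862/862 = 1.000
```

With zero false positives the specificity is exactly 1, which is the empirical claim being tested in the discussion; the deliberately high false-negative count reflects the stated willingness to tolerate false negatives.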

Secondly, does a copy of a designed string still contain “dFSCI”, if copied by a “necessity mechanism”?

Obviously. The string is assessed as exhibiting dFSCI, but it can exist in one copy or in one billion copies. The dFSCI is the same. It does not increase with the copying. It does not decrease with the copying. A separate problem, if you want, would be to consider the copying process itself. But that’s another story.

We want to explain how that particular string emerged in reality, not how many times it has been copied.

We want to explain how Shakespeare’s sonnet was generated, not how many times it has been published.

Ah, that’s what I like! Simple questions, simple answers.

I must say that accusing you lot of being stupid or dishonest seems to have had a good empirical effect on your cognitive performance 🙂

GP: One of the things that comes across strongly to me is that the objectors do not seem to understand information, e.g. its property of being independent of particular expression once it has been initially expressed. Particular expression is a necessary but not sufficient condition of the existence of info — there must be one copy in some medium, but beyond that, any number of copies has not added to the existing info, save that it will be easier to save at least one copy — redundancy. (Hence the clever utility of DNA’s double helix with complementary copies.) BTW, excellent, patient work as usual, and your strictures at length are well warranted, as the objectors need to heed duties of care to truth and fairness in light of what continuing misrepresentation is. KF

Just an announcement: I will be away for a few days, and I will not be able to post.

Sorry to go when the discussion is hot. I am aware I am breaking your hearts 🙂 , but I leave you a treasure trove of posts (you cannot deny that I have been rather active lately). I encourage you to study them thoroughly: you will maybe find some answers there, and certainly many things to criticize 🙂

But don’t despair! I will be back next week, and I will try to catch up (something tells me that it will not be easy: please, no more than 2000 posts on the two blogs…)

I may be able, I hope, to still write something in the next few hours: last occasions for arguments (or admissions? 🙂 ).

GP: Sometimes it amazes me that you are writing in a second language and many of the objectors are writing in their first. KF

Really? I suspected as much- well most likely I knew it and just forgot. Amazing indeed. But that does explain some things like the choice of wording.
______GP is a European medical practitioner whose native language is not English. From some of his phrasings, I suspect he is not yet at the level of thinking in English; he seems to be internally translating what he says here. KF

I call them “pseudorandom” because they are formally random, but they convey a meaning. We could call them “formally random strings with a meaning (or function)”. This is a special use of “pseudo”, so thank you for giving me the opportunity to explain my sense of it.

I am aware that “pseudorandom” strings can mean random-looking strings generated by a computer, by an algorithm that is not completely random. For my purposes, all these distinctions are not necessary. What I mean is simply that the string we observe is not highly compressible, in the sense that it does not exhibit special order or regularity. That is enough.

I will also explain why. If a pseudorandom string generator generates strings so that they appear random, the result is the same for me: they have a form compatible with a random origin. There is no possibility that such an algorithm may generate complex information that points to a function (unless, obviously, you are cheating and have incorporated a Weasel-like algorithm in the pseudorandom string generator, with the string to be obtained already built in).

Therefore, if a string has a random, non-regular form, and it has high functional complexity, that is enough to assess the presence of dFSCI and infer design.

As I have said many times, we assess dFSCI only on the object, without knowing anything of the origin. dFSCI is a property of the object.

But, instead of politely repeating the interesting divagations of Olegt, why don’t you answer my last post to you?

I remind you that I only exclude from dFSCI highly compressible strings, those with evident order or regularity.

I am aware that most strings, including random ones, are in some measure compressible. But that compressibility is of no importance here.

Let’s imagine you have a truly random string (originated by the tossing of a coin), and you zip it. You will probably get a somewhat shorter string. But the only effect, for our purposes, is that now you must compute the complexity of the compressed string, plus the complexity of the unzipping software, and hypothesize that both originated by RV in the system, and that the unzipping software then generated the unzipped string (the one we observe) from the randomly generated zipped string. Is that really of significance?
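As a rough illustration of this point (a general-purpose compressor like zlib is only a crude stand-in for compressibility in the algorithmic sense), a coin-toss-style string barely shrinks when zipped, while a highly regular one collapses:

```python
import os
import zlib

n = 10_000
random_bytes = os.urandom(n)   # stand-in for a string from fair coin tosses
regular_bytes = b"G" * n       # a highly ordered string: 10,000 Gs

# Ratio of compressed size to original size at maximum compression level.
random_ratio = len(zlib.compress(random_bytes, 9)) / n
regular_ratio = len(zlib.compress(regular_bytes, 9)) / n

print(f"random:  {random_ratio:.3f}")   # close to (or slightly above) 1.0
print(f"regular: {regular_ratio:.4f}")  # a tiny fraction of the original
```

The random input may even grow slightly, because the compressor adds framing overhead to data it cannot shrink; the all-G string compresses to a handful of bytes.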

The true reason to exclude strings with high regularity is not really that some software could have generated them, but that some natural system could have generated them.

I have made many examples:

a) A string of 500 heads, generated by tossing a coin that can only give head.

b) A gene of 300 Gs, generated in a lab situation where only Gs were available.

c) A sequence of 100 HHHHT repetitions, which is also a solution to Lizzie’s example, generated by the chance copying of a sequence of low complexity.

In all these cases, a necessity mechanism is a very admissible explanation.

But not for Shakespeare’s sonnet. Not for the human G6PD sequence. Not for Excel’s source code.

I believe that Shakespeare’s sonnet is in some measure compressible. Does that change your assessment about it?

The important point, that you guys are still trying to ignore, is that in true dFSCI there is a convergence of events that are apparently random, or at least not dictated by any law, towards the simple result of the final meaning or function.

That is especially obvious in DNA. The sequence of nucleotides in a protein-coding gene is completely inert; it has no special biochemical properties linked to the specific sequence. The laws of biochemistry cannot, in any way, imply that specific sequence. And yet it has the meaning of conveying the sequence of a functional protein.

Even the sequence of AAs in a protein, in itself, does not predict the function, unless we can compute how the sequence will fold, and what biochemical properties it will have: a task that, as Petrushka always reminds us, is not easy.

OK. That helps a lot. Given your clarification I will admit that the statement

“Everything with dFSCI is designed” (X)

is not circular! It is just false. Also it makes dFSCI into a weird concept.

Emphasis added by me.

As the purpose of all this discussion was to reject an accusation of circularity, I am completely satisfied. You are wonderful. Honest and wonderful.

Obviously, now that I am happy, I can also say a word or two about the accusations of falsity and weirdness! 🙂

Well, the accusation of weirdness is not really a problem: I like it!

But let’s consider it anyway:

Using your definition as I now understand it, something only has dFSCI relative to a current state of knowledge about its origins.

dFSCI is a diagnostic judgement made on an object at time t. It is obviously made at time t, with what we know at time t. But knowledge about its origins has nothing to do with it. We assess dFSCI without knowing the origin. So, you are wrong here. Even when “testing” the specificity of dFSCI with strings whose origin is known, the person who assesses dFSCI must not know anything of the origin. dFSCI is assessed exclusively on the object, and in reference to a specific system and time span. But knowledge of the historical origin is not required; otherwise dFSCI would be useless.

So if you and I have different knowledge about the origins of a string it may well be that it is dFSCI for you and not dFSCI for me (because I know of a necessity mechanism that you don’t).

I will ignore the reference to origins, which, as I said, is not correct. Let’s say that if you know an explanatory mechanism at time t, and I am not aware of it, I will still assess dFSCI at time t. That will be recognized as a false positive as soon as you make the explanatory mechanism known, and we agree that it can generate the string for which I assessed dFSCI. Why is that weird?

Suppose we have a string which is long, digital, and functional – a protein will do nicely. At time t1 none of the world’s experts know of a necessity mechanism. So it has dFSCI at time t1 for them. Later at time t2 a mechanism is discovered, so now it does not have dFSCI at time t2 for them (by your definition).

As explained, this would be a false positive. As dFSCI was correctly assessed at time t1, and the design inference was later falsified by the new mechanism, that is a false positive. There is no need to “reassess” dFSCI in the protein. dFSCI has already failed for that protein.

Furthermore it may be that at time t1 in another country some other scientists knew of a mechanism that the world’s experts were unaware of – so for those scientists it did not have dFSCI even at time t1.

It does not matter. As I said, the assessment of dFSCI relies on simple observations. Either it works, or it does not work. If I assess dFSCI for a protein, while some scientist who is my enemy already knows a necessity explanation but willfully hides it from me, I am still assessing it correctly, and it will be recognized as a false positive.

dFSCI is a tool, not an eternal substance. Why can’t you consider it as you would any scientific tool? It works, or it does not work.

I suspect you will think this is irrelevant philosophising.

More or less…

You are so certain in the cases we are discussing that no necessity mechanism will be found.

Yes, I am.

But actually it is very relevant – because as I explain below – using this definition statement X is false.

So, let’s go to the falseness.

I will ignore all the discussion about the Fibonacci series. First of all, I am not a mathematician, but I am not at all convinced that your example makes sense. And in any case, my faith in dFSCI does not extend to defending how someone would have applied the concept before 1200! You really ask too much.

But I will answer your worries in a very simple way, one that addresses both your preoccupations about known or unknown mechanisms and your worries about randomness assessment (to which, I believe, I have already answered in my previous post).

It is simple: dFSCI is a diagnostic tool. It is applied as it is. Either it works, or it does not work.

My statement X:

“Everything with dFSCI is designed” (X)

is not a logical deduction. It is not even an inference. It is simply the result of my testing phase. It is true, I believe, for any testing done as I have suggested.

I am available to repeat the testing anytime with you. Give me any number of strings of which you know for certain the origin. I will assess dFSCI in my way. If I give you a false positive, I lose. I will accept strings of a predetermined length (we can decide), so that at least the search space is fixed.

So, you cannot say that my statement X is false, unless you falsify it. It is an empirical statement: there is no other way to falsify it than to show that it is not empirically true.

I remind you, to avoid misunderstandings, that my statement X refers only to the specificity of dFSCI as measured in the testing phase. As I have said many times, for objects whose origin we don’t know for certain, we can only assume that dFSCI will have the same specificity. It is a very reasonable assumption, but it is not necessarily true.

I would say binary strings of 500bits. Or language strings of 150 characters. Or decimal strings of 150 digits. Something like that. Even a mix of them would be fine.
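The three proposed formats are meant to be roughly comparable search spaces. A quick back-of-the-envelope log2 calculation (my own arithmetic, not part of the original exchange, and assuming a 27-symbol alphabet of 26 letters plus space for the language strings) gives the sizes in bits:

```python
import math

# Search-space size in bits: length * log2(alphabet size).
binary_500 = 500 * math.log2(2)     # 500 binary digits
decimal_150 = 150 * math.log2(10)   # 150 decimal digits
language_150 = 150 * math.log2(27)  # assumed alphabet: 26 letters + space

print(round(binary_500))    # 500
print(round(decimal_150))   # 498
print(round(language_150))  # 713
```

So 150 decimal digits land almost exactly at the 500-bit threshold, while 150 language characters overshoot it comfortably, which is presumably why a mix of formats is acceptable.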

Some observations. I will apply my procedure literally. If I cannot easily see any function for the string, I will not go on with the evaluation, and I will not infer design. If you, or anyone else, want to submit strings whose function you know, you are free to tell me what the function is, and I will evaluate it thoroughly.

I will be cautious, and I will not infer design if I have doubts about any of the points in the procedure.

Ah, and please don’t submit strings outputted by an algorithm, unless you are ready to consider them as designed if the algorithm is more than 150 bits long. We should anyway agree, before we start, on which type of system and what time span we are testing.

And anyway, I am afraid we will have to wait until next week for the test. My time is almost finished.

Please, don’t create further confusion. Number 5 does not say, as you state:

“#5) Therefore anything whose origin is known must not be deterministic.”

It says:

“#5) Any object whose origin is known that exhibits dFSCI is designed (without exception).”

You introduce a “must be” that completely changes the meaning. Please, no more lies. The meaning is:

#5: Any object whose origin is known, that exhibits dFSCI, is designed (without exception)

I have already explained the meaning in detail to you, in a previous post. The “objects whose origin is known” are strings of which we know the origin (design or non-design) and that are used to test the dFSCI procedure (let’s call them the set “test”).

“that exhibits dFSCI” means those strings in the set “test” for which dFSCI is assessed as present, obviously blind (the person who assesses dFSCI is not aware of the origin of the string).

“is designed (without exception)” is the empirical result of the test, not a logical statement, as you try to imply.

No more lies, please. I have said those things a lot of times, only in the last few days. Stop inventing things that I have never said.

Notably, gpuccio is happy that you said it’s not circular without understanding your reasoning. It is enough that you agree apparently.

I am happy because an honest person made an honest admission that was difficult to make. It is not a question of agreement, but of intellectual integrity.

Of course for the vast majority of digital strings the process for generating them is known at the same time that the string is generated. Here there is circularity. Only the ones that have been designed can have dFSCI by definition.

Excuse me, but this is still wrong. Nothing has dFSCI “by definition”. The strings are examined blind, and only those that are judged as exhibiting dFSCI are assessed as such. According, obviously, to the definition. In this process, the person who makes the assessment knows nothing of the origin of the string (just as we will do in our test). After the assessment is done, it is checked against the known origin of the string.

Any other strings that were generated by necessity mechanisms would not have dFSCI by definition.

Again, no. The person who evaluates dFSCI is not aware of the origin. He must exclude possible necessity mechanisms on the basis of the string he observes. That’s all.

So it is invalid to use the correlation between dFSCI and design as evidence that future strings with dFSCI will be designed.

Why? If a property has such a high correlation with a type of origin in all known cases, why should it be “invalid” to use it to infer that type of origin in unknown cases? It’s perfectly sensible, and scientific, to do that. Obviously, it is not a necessity. It is an inference. It can be right. It can be wrong. Like any scientific inference. But it is perfectly “valid”.

The only correlation that is acceptable is instances where a string has an unknown origin and then the origin is discovered.

Why?

There aren’t very many of those – but there is little doubt that some of them turned out not to be designed.

I am not aware of any of them.

Actually the more I think about this the more tangled it gets. Really we are only interested in strings where the origin is not known – once the origin is known then its dFSCI status is settled by definition.

No. Its origin is settled by definition. Not its dFSCI status. A lot of designed things do not exhibit dFSCI. In theory, we could find non-designed things that exhibit dFSCI. Knowing the origin of a string does not prevent us from independently and blindly assessing dFSCI for it.

So what is the known-necessity-mechanism clause – which is clearly a reference to origins – doing? If we don’t know the origin, then we don’t know whether it was the result of a necessity mechanism or design, and therefore we don’t know whether it was the result of a necessity mechanism. I think maybe he is getting at something like “could not imagine this string being generated by a known necessity mechanism”, in the sense that the mechanism is known but not that it is known to apply to this string.

But it is simple. I look at the string. With what I know of it, and of the system where it is found, and of the time span, I assess dFSCI. To exclude a necessity mechanism, what I essentially have to do is exclude strings that have regularities, and be sure that the laws acting in the system have no connection with the specific function of the string. For instance, in a biochemical system where a protein-coding gene emerges, I must be sure (and I am) that the laws of biochemistry have no connection with a specific string of nucleotides coding for a functional protein through an arbitrary code. I am sure of that. I am sure that the gene sequence shows no mathematical regularity that could derive from a necessity mechanism. So I affirm dFSCI.

What a tangled web! If only he would define dFSCI without the necessity clause it would make everything simple and save hours of blogging.

If I defined dFSCI without the necessity clause, any series of heads generated by the tossing of an unfair coin would be a false positive. No, thank you. I am not that stupid.

______

GP enjoy y’self (assuming it is not a work trip . . . AND IF, FIND A BIT OF FUN TIME). In fact, none of us, almost, directly independently knows the source of the dFSCI strings in this thread. We routinely accept a design inference, and credit the announced identity. For instance I have never met you in person, nor Mung etc. But I have excellent reason to infer the above post is not blind chance and necessity but design. Add in a few details and I am confident it is really you, Dr GP of Italy. KF

From the very beginning it’s been clear that gpuccio admits that there are things which exhibit dFSCI for which the origin is not known.

Which is why we have to infer design in those cases, rather than just looking at the known historical facts of the origin of the thing in question.

On the other side of the coin, if the origin of something (say, the bacterial flagellum, for example) is not known, our committed materialistic friends would have to infer non-design if they wish to draw that conclusion, because the known historical facts of the origin of the thing in question do not directly answer the question.

The inference-to-the-best-explanation approach is necessary in the historical context. The question is simply whether the design inference is warranted in particular cases.

Now remind me, what point were our materialistic friends trying to make?

The absolute weakness of your case is showing through. People can see what you’ve been reduced to. It’s not a compelling argument.

Damn! I blame Drs Dawkins and Coyne and Meyers and Miller and Dennett and Wilson and 150 years of scientists who worked and published in the field of evolutionary science. And Carl Zimmer. Who can you trust these days eh?

By the way, did you ever apologise for accusing keiths of lying about P(T|H)? If you did I missed it.

Mung, that no 3 paper looks to be a doozy, on islands of function, no less, with a telling admission against interest by Dawkins. KF

I was hoping Jerad would take a look. I think they are very pertinent to the current debate going on in this thread. But having no counter-arguments to offer himself any longer, he is reduced to red herrings.

And yes, Jerad, people are watching, and they see.

I just heard a jet pass overhead. It’s really cloudy, so I couldn’t see it, and there was no visible contrail, so I had no independent evidence of its existence, but hey, I still made the inference.

I was in the Navy for a number of years, during which time I was stationed both on an aircraft carrier and at a Naval air station. I have a wealth of experience when it comes to the sounds that jets make. Why should I just disregard all that experience?

Upon what logical basis do you assert that I should not have made that inference without independent corroborating evidence?

So let’s say I have a contact at the FAA who can confirm that there was indeed a jet flying over my area at that time. Why should I not be allowed to add that to my wealth of knowledge? In all cases where I have been able to trace the effect to a cause, it’s been a jet aircraft.

Are you going to revisit the P(T|H) issue? Do I assume from your silence in this matter (when you called keiths a liar multiple times) that you now acknowledge that you were wrong? Or do you still think you’re right? And why won’t you address the issue?

I was hoping Jerad would take a look. I think they are very pertinent to the current debate going on in this thread. But having no counter-arguments to offer himself any longer, he is reduced to red herrings.

And yes, Jerad, people are watching, and they see.

I read the review of the paper on the Discovery Institute’s website. I tried to read the paper itself but I don’t have access on PubMed. I wanted to see if the authors addressed the problem which you and KF and the DI are pointing out: the seeming vast improbability of a cell to arise from a random soup of constituent parts. I don’t think anyone is hypothesising that a cell arose without precursors and I was wondering if the authors discussed that part of the issue. Without being able to read the actual paper I can’t come to a full conclusion but from what is reported it looks like evidence for your point of view. I’d like to see more.

I just heard a jet pass overhead. It’s really cloudy, so I couldn’t see it, and there was no visible contrail, so I had no independent evidence of its existence, but hey, I still made the inference.

As I would have done as well. Please don’t caricature my point of view. I agree with design and existence inferences in many, many situations. In your example you KNOW jets exist, that they fly in your area, you have probably heard them in similar situations before, you have experience, etc, etc, etc.

My contention is: there is no evidence that an intelligent designer was around during earth’s ancient past. There is no supporting physical evidence that indicates highly technical work took place anyplace on earth, or any place else that we’ve found so far. Therefore, it’s not reasonable to infer such a designer.

It is ever more evident that the real root of some of your objections is that your a prioris make you think the possibility of a designer at the time in question is nil or essentially that.

That has long since been pointed out.

And on the point you are now objecting to Mung, it does seem that Mung per what I have seen has a point, if the odds of a chance hyp are zero, you cannot have a valid conditional prob on that.

P(A|B) = P(A AND B) / P(B)

Set P (B) to zero . . .

So at least ex hypothesi, we have to accept that P(B) != 0. B must at minimum be a logically (and here, physically) possible state of affairs, as opposed to a plausible one.
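The arithmetic point here is elementary but decisive; a short sketch (values arbitrary) makes it concrete:

```python
def conditional(p_a_and_b: float, p_b: float) -> float:
    """P(A|B) = P(A and B) / P(B); undefined when P(B) = 0."""
    if p_b == 0:
        raise ValueError("P(A|B) is undefined when P(B) = 0")
    return p_a_and_b / p_b

print(conditional(0.2, 0.5))  # 0.4

# Conditioning on an impossible event is not merely improbable,
# it is mathematically undefined:
# conditional(0.0, 0.0)  -> raises ValueError
```

That is the whole point of requiring P(B) != 0: the hypothesis conditioned on must at minimum be a possible state of affairs for the conditional probability to be defined at all.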

What we have shown repeatedly is that while we may argue to bare logical and physical possibility that by chance we get any number of possible outcomes, such as the O2 molecules in the room where you read this all simultaneously rush to one end leaving you gasping, the balance of statistical weights of clusters of states is such that we have no reason to expect to observe such on the gamut of the observed cosmos, not even once in its lifespan. The same basic analysis applies to the proverbial warm little Darwinian pond or the like, to spontaneously form a gated, encapsulated, metabolising automaton with embedded vNSR, using homochiral molecular nanomachines that work by key-lock fitting controlled, algorithmic, code based mechanisms.

Whether all at once or incrementally.

Not, given what we know about physics and chemistry, including statistical thermodynamics and reaction kinetics. From simply observing such an entity as actually existing, we know it is highly contingent in a very special way that calls for an adequate cause. The only — and on billions of cases empirically reliable — known cause of FSCO/I (which the cell is chock full of) is design. Therefore the reasonable conclusion is that we have adequate evidence to infer to design from the signs we can see. At this stage, apart from institutional dominance, the shoe of warrant is plainly on the other foot.

That is why your demands to see separate evidence, in addition to what is staring you in the face starting with the OOL question — which you have steadfastly refused to address on the merits by showing how, on empirical warrant, blind chance and necessity are causally credibly adequate — ring so decidedly hollow.

In short, you need to look very seriously at whether you are falling into the Cliffordian/Saganian evidentialism trap, of an escalating demand for proofs beyond what is adequate, because one is disinclined to go with what is in front of one. The demand that “extraordinary claims require extraordinary evidence” is crucially driven by the perception of extraordinariness, and by a failure to see that all that is required of warrant for a case is that it should be adequate.

You are beginning to sound like Thomas, confronted by a mysteriously empty tomb, and by fellow disciples by the dozens who reported meeting, hugging, walking, talking and eating with their risen Lord, demanding to put his hand into the spear wounds, before he would accept the testimony of multiple known credible witnesses who were not at all expecting any such thing, had no cultural background that would lead them to come up with it and who were also looking with the rest of the city at the definitively and inexplicably empty tomb outside the city walls.

In short, you need to realise that skepticism, contrary to much modern self-congratulation, is not an intellectual virtue. It is proper to require that there be adequate warrant, on pain of admitting that so far we are all ignorant on a subject, but we do not have a right to demand arbitrary levels of evidence, once we are addressing a subject.

KF

PS: It might interest you to know that on the evening of July 18, 1995, many people here were looking for a low-flying jet aircraft, as the sound made by a jet of steam-driven ash etc is sufficiently similar to make one think so at first, until one learns better. So, this is a case where the mere presence of jet-like sounds can be explained by any of several means. However, on balance and on subtler indicia, we have good reason to infer to aircraft in certain circumstances, and volcanoes etc in others. This is similar to the case of deer tracks and possible imitations. The point of comparison is that there are no credible, empirically warranted alternatives to account for FSCO/I, but the one warranted on billions of test cases and backed up by the needle-in-the-haystack analysis: design. Which is exactly what the living cell is chock full of. So, there is adequate warrant that design is a process that can adequately cause what we see, and there is further adequate reason to see that there is no other credible possibility. Next, we know that, on the LGM possibility [brought forth for the specific reason of pointing out that Venter et al have shown good reason to infer that an adequate cause would be a molecular nanotech lab . . . i.e. we have a comparative model that says: empirically feasible], the sort of direct evidence you demand can, in ages of time, be erased by the sands of time. And that is only one possible case for what lies beyond DESIGN, the empirically warranted process in view: specific designers.

It is ever more evident that the real root of some of your objections is that your a prioris make you think the possibility of a designer at the time in question is nil or essentially that.

Not at all. I’m saying you haven’t proved there was one. Show me the evidence and I’ll change my mind.

I just want Mung to apologise for calling keiths a liar when he was doing no such thing. I think that’s what reasonable people do when discussing things in a collegial manner. By not apologising he’s appearing to be petulant and petty. Would you let me get away with similar behaviour? I hope not.

That is why your demands to see separate evidence in addition to what is staring you in the face starting with the OOL question on — which you have steadfastly refused to address on the merits per showing how on empirical warrant blind chance and necessity are causally credibly adequate, rings so decidedly hollow.

You don’t get a designer because evolutionary theory has not answered all the questions yet. It’s not one or the other. Evolutionary theory may fail but Intelligent Design would still have to prove its case.

You are beginning to sound like Thomas, confronted by a mysteriously empty tomb, and by fellow disciples by the dozens who reported meeting, hugging, walking, talking and eating with their risen Lord, demanding to put his hand into the spear wounds, before he would accept the testimony of multiple known credible witnesses who were not at all expecting any such thing, had no cultural background that would lead them to come up with it and who were also looking with the rest of the city at the definitively and inexplicably empty tomb outside the city walls.

Extraordinary claims require extraordinary proof. I like Thomas, he was skeptical. But, given the evidence, he changed his mind. Makes sense to me.

And before you draw a comparison to evolutionary theory not having shown us explicitly all the answers I will just say that, at this point, it’s a much, much better model than any other alternative for reasons I’ve elucidated many, many times.

In short, you need to realise that skepticism, contrary to much modern self-congratulation, is not an intellectual virtue. It is proper to require that there be adequate warrant, on pain of admitting that so far we are all ignorant on a subject, but we do not have a right to demand arbitrary levels of evidence, once we are addressing a subject.

I think skepticism is absolutely essential these days. What with all the dross and tripe foisted on us via the internet and scams and people willing to take our money just to line their own pockets. Politicians need to be listened to with extreme skepticism, their statements need to be fact checked and mulled over. Homeopaths and chiropractors and all that ilk are always making statements with no real basis in good, hard science. Telephone salesmen . . . do you take what some stranger says to you on the phone seriously?

You are very, very skeptical of lots of things. You demand much more of evolutionary theory than you do of ID regarding proof of ability to deliver the goods.

I don’t even trust myself to get it right all the time. Life is complicated now. I haven’t got all the answers and I don’t trust someone who says they do.

I fully admit, and have done many times, that inference to design is completely warranted in many situations. You don’t need to bring up more examples.

Jerad: That is just the problem, again. You are staring adequate evidence in the face and are denying that it exists or is cogent. If you think that FSCO/I is not a reliable sign of design as causal process, kindly show us an empirically warranted adequate alternative cause. And, we do not hold our views hostage to your skepticism, once we have good reason to conclude we have a reasonable case. KF

That is just the problem, again. You are staring adequate evidence in the face and are denying that it exists or is cogent. If you think that FSCO/I is not a reliable sign of design as causal process, kindly show us an empirically warranted adequate alternative cause. And, we do not hold our views hostage to your skepticism, once we have good reason to conclude we have a reasonable case.

We disagree on what’s adequate evidence. And, it is true, I’m in the vast majority on this. I don’t mind you disagreeing with me on it. You don’t have to convince me or even pay attention to what I have to say. But you keep trying to convince me for some reason.

I think the evidence points to universal common descent with modification and so the development of DNA is explained via those natural processes.

And, again, I do think design can be inferred in many, if not most, cases. Especially where there is adequate supporting evidence of the presence of an intelligent cause at the pertinent time. Which ID has not yet established. I’d stop arguing with me and work on that if I were you. Talking to me is just wasting time you could be using to prove your case.

In fact, I’d be really interested in a more fleshed out intelligent design hypothesis. Like, for example, when you think design was implemented? I would think you’d have enough evidence to at least have a guess at that.

On the evidence of having been repeatedly shown adequate evidence, sadly, it seems not.

2: I just want Mung to apologise

Mung will be able to handle his own issues, mine is that there is a talking point that says in effect there is a design assertion that chance is not a possible explanation.

This, I underscore, is not so: necessity, then chance, are the successive defaults, defeated first by high contingency, then by high complexity joined to high specificity, especially by function.

3: You don’t get a designer because evolutionary theory has not answered all the questions yet . . . . before you draw a comparison to evolutionary theory not having shown us explicitly all the answers I will just say that, at this point, it’s a much, much better model than any other alternative for reasons I’ve elucidated many, many times

By the very nature of the case, OOL is not a matter of evolution by chance variation plus differential reproductive success. That you repeatedly miss this, is telling.

It is about how we get from chemistry and physics in a warm little pond, or the like, to a gated, encapsulated metabolic automaton with a vNSR (von Neumann self-replicator) using code and homochiral informational polymers as implementing machines for coded algorithms and data structures.

Until you account for this, you have no start point for the Darwinist tree of life, so evolutionary explanations are precluded from the outset, by the inherent nature of the case.

That you are trying to suggest that you have answers to most questions and a bit of tidying up to do in the teeth of the repeated underscoring of this is telling.

I repeat, the cell is chock full of FSCO/I. Its origin antedates evolutionary explanations, and has to address the gap between what blind chemistry and physics can do and the observed information based automaton.

And, once we see that on good warrant design is a serious candidate and in fact best explanation at OOL, then the obvious point that common design is at least as good an explanation as common descent decisively shifts the weight of how we evaluate all across the world of life.

Going further, we then see that what we have is an a priori imposition of materialism, at worldview level, and as a methodological postulate, even in the definition of science — historically inaccurate and philosophically suspect — being taught to students and the general public.

On those a priori commitments, the evidence is then used to illustrate an a priori, sometimes seen as self evidently true, by those who do not realise that this is diagnostic of an imposed worldview level question-begging a priori.

Then, we see that there is simply no good evidence to warrant that body plan level origins are credibly a simple accumulation of micro changes, and that the fossil record we actually have, as opposed to the one presented, is one of sudden appearance, stasis and mosaics rather than a dominant and obvious pattern of transitionals incrementing their way across the span of the tree of life.

4: Evolutionary theory may fail but Intelligent Design would still have to prove [WARRANT] its case.

I strike and replace to highlight the key problem here. We are dealing with an empirical matter, and with explaining on best empirically warranted explanation. We have a clear case that the only warranted explanation for FSCO/I is design, with billions of supportive cases and no clear counter examples. Repeated objections consistently turn out to be design behind the curtain of what is obvious.

And, Thomas was properly rebuked for the precise reason of refusing to face adequate warrant and respond appropriately.

6: skepticism [clear, well warranted thinking based on understanding induction, abduction, deduction and warrant of knowledge claims in a world of experience and limitations on what we can know, how certainly] is absolutely essential these days. What with all the dross and tripe foisted on us via the internet and scams and people willing to take our money just to line their own pockets.

The strike and replace speaks for itself.

7: statements need to be fact checked

The self appointed fact checkers need to be tested and the focus needs to shift to the gap between persuasion and warrant.

8: You are very, very skeptical of lots of things. You demand much more of evolutionary theory than you do of ID regarding proof of ability to deliver the goods.

False.

I simply ask that we recognise that we are seeking to scientifically investigate the remote, unobserved past. Accordingly I accept that the uniformity principle and explanation on signs in a context of inference to best explanation, are relevant.

We are dealing with causal models, so the first requisite is that traces of what happened in the past must be explained relative to known reliably adequate causes.

So, the demand is that the phenomena in the traces from the past have known, empirically reliable tested and observed causal explanations that are adequate to account for the effect.

Chance variation and differential reproductive success etc are adequate to account for variations within and regulatory adaptations of a body plan. They do not have warrant to account for OOL or OO body plans.

We observe that life from the cell up is chock full of FSCO/I.

I therefore insist that the known adequate cause of FSCO/I is a relevant explanation.

That happens to be design.

9: Life is complicated now. I haven’t got all the answers and I don’t trust someone who says they do.

I have never claimed to have all the answers, nor have others who represent UD, so the suggestion is inappropriate. Especially as the precise reason why design theory does not claim to identify specific designers, is that there is not adequate evidence on the relevant signs to do so as a scientific inference. So design theory is much like the stage of investigations that identified arson not accident. Other techniques and tools will go on to establish whodunit.

Life is indeed complex, and it has been so from the very first living cell. Not just complex but functionally specific and complex beyond the reasonable reach of blind chance and mechanical necessity on the gamut of the solar system or observable cosmos.

Hence the relevance of design as the best explanation of the FSCO/I in life.

On the evidence of having been repeatedly shown adequate evidence, sadly, it seems not.

The evidence for the presence of a designer aside from the ‘objects’ you assert were designed. You know what I mean!!

2: I just want Mung to apologise

Mung will be able to handle his own issues, mine is that there is a talking point that says in effect there is a design assertion that chance is not a possible explanation.

But he’s not handling it maturely at all. He’s just ignoring the issue. You and he pointed to a new research paper you were interested in hearing my comments about. I read what was available and gave you my comments. And, I think, I was pretty honest in admitting that, on the face of it, there seemed to be a lot of support for your point of view. I didn’t ignore the issue or decry the work or dismiss it as being misinterpreted. I’m trying to take the dialogue seriously and I just want everyone to be held to the same standard.

By the very nature of the case, OOL is not a matter of evolution by chance variation plus differential reproductive success. That you repeatedly miss this, is telling.

It is about how we get from chemistry and physics in a warm little pond, or the like, to a gated, encapsulated metabolic automaton with a vNSR (von Neumann self-replicator) using code and homochiral informational polymers as implementing machines for coded algorithms and data structures.

Until you account for this, you have no start point for the Darwinist tree of life, so evolutionary explanations are precluded from the outset, by the inherent nature of the case.

I have always admitted that I have no explanation for the generation of the first basic replicator. If you choose to use that as a reason to reject universal common descent with modification that’s up to you.

You keep implying that I’m missing the point when I’ve always, consistently addressed it honestly and truthfully.

Going further, we then see that what we have is an a priori imposition of materialism, at worldview level, and as a methodological postulate, even in the definition of science — historically inaccurate and philosophically suspect — being taught to students and the general public.

On those a priori commitments, the evidence is then used to illustrate an a priori, sometimes seen as self evidently true, by those who do not realise that this is diagnostic of an imposed worldview level question-begging a priori.

Uh huh. Let’s just stick to the science.

4: Evolutionary theory may fail but Intelligent Design would still have to prove [WARRANT] its case.

I strike and replace to highlight the key problem here. We are dealing with an empirical matter, and with explaining on best empirically warranted explanation. We have a clear case that the only warranted explanation for FSCO/I is design, with billions of supportive cases and no clear counter examples. Repeated objections consistently turn out to be design behind the curtain of what is obvious.

Which do you think is the weaker term: prove or warrant?

Billions of cases? Really? I don’t think you get to count each event as a separate case. I think once you’ve brought up computer programs that just counts as one example. I think we’re talking about classes of objects.

And, Thomas was properly rebuked for the precise reason of refusing to face adequate warrant and respond appropriately.

But he got his evidence. Nor was he really punished. Not a great example for you, really. I don’t mind being rebuked, as long as I get the evidence.

6: skepticism [clear, well warranted thinking based on understanding induction, abduction, deduction and warrant of knowledge claims in a world of experience and limitations on what we can know, how certainly] is absolutely essential these days. What with all the dross and tripe foisted on us via the internet and scams and people willing to take our money just to line their own pockets.

The strike and replace speaks for itself.

Sounds like a pretty good definition of skepticism to me!!

7: statements need to be fact checked

The self appointed fact checkers need to be tested and the focus needs to shift to the gap between persuasion and warrant.

The whole point is to NOT have some central authority telling everyone else how to think or focus. Your suggestion smacks of authoritarianism.

I therefore insist that the known adequate cause of FSCO/I is a relevant explanation.

That happens to be design.

You can insist all you want. Doesn’t make it true. Or get people to agree with you. I’d look for some more evidence if I were you.

9: Life is complicated now. I haven’t got all the answers and I don’t trust someone who says they do.

I have never claimed to have all the answers, nor have others who represent UD, so the suggestion is inappropriate. Especially as the precise reason why design theory does not claim to identify specific designers, is that there is not adequate evidence on the relevant signs to do so as a scientific inference. So design theory is much like the stage of investigations that identified arson not accident. Other techniques and tools will go on to establish whodunit.

I didn’t mean to cast aspersions on you. I was just trying to justify my ‘hyper’skepticism.

I do think though if you’re sure you’ve established the design inference then it’s time to stop arguing about it and flesh out the hypothesis a bit more. AND look for more evidence to convince the critics.

PS: Had to wait to get where I could send, busy now.

No worries. Did the hurricane miss you altogether then? Sounds like Cuba is getting the brunt of it now.

JERAD and company: You have the ability to justify “hyper” skepticism – consider it a gift. Your “position” provides insight into the inability of the natural man (NM) to “see” and provides insight into the free will issue also – dispute the vast evidence for a designer / the NM is not only unable to see he is unwilling to see = definition of NM = nature of the Sin nature = can’t and won’t choose God (THE Proven one) / You say and “believe” there is no evidence, but I do think it is obvious you have not looked for it very hard.
So – question to all: Is it reasonable to believe that if one takes a serious look at fulfilled prophecy (= Proof of a mind beyond space and time), cosmological constants, design inference, cause then result laws etc. (natural proofs – from the creation we “see” the Creator) that there is not only evidence of a Designer, there is ABSOLUTE PROOF of one? Is it true that true belief is based on true objective reason? (+ Election re. Theology – different “level” than this discussion)

You say and “believe” there is no evidence, but I do think it is obvious you have not looked for it very hard.

Well, you don’t know what journey I have been down in my life and I’m not prepared to discuss it with some stranger on a forum. Just because I disagree with you doesn’t mean I haven’t looked at and considered everything.

Let’s just talk about ID vs universal common descent with modification and the evidence therein.

Let us not forget when it comes to physical evidence, you have refused to engage in arguments which you do not consider “settled”. By settled, it is meant: already found to comport (or substantially comport) to your belief system. The origin of information is such a case. If the origin of information is not found to substantially (or arguably) comport to your belief system, then you dismiss it as an OoL mystery which you will not engage in. Consequently, you give yourself an intellectual pass on the subject, without allowing yourself to admit that the physical evidence actually supports an alternate theory. You have no material basis to deny that we may already have the data required to produce a valid claim on the matter. Instead, you derive an unsupported conclusion despite that evidence. This is referred to as ‘selective confirmation bias’.

Let us not forget when it comes to physical evidence, you have refused to engage in arguments which you do not consider “settled”.

I have only refused to enter into discussions of OoL issues because of my own ignorance about the chemistry and the hypotheses already put forward.

By settled, it is meant: already found to comport (or substantially comport) to your belief system. The origin of information is such a case. If the origin of information is not found to substantially (or arguably) comport to your belief system, then you dismiss it as an OoL mystery which you will not engage in. Consequently, you give yourself an intellectual pass on the subject, without allowing yourself to admit that the physical evidence actually supports an alternate theory. You have no material basis to deny that we may already have the data required to produce a valid claim on the matter. Instead, you derive an unsupported conclusion despite that evidence. This is referred to as ‘selective confirmation bias’.

Or are you now agnostic on the matter?

I believe the physical evidence points to a first basic replicator and that after that you’ve got universal common descent with modification.

And I have said before that the first basic replicator could have arrived via a meteor. It could have fallen out of an alien astronaut’s lunch bag. Maybe some time travelling human dropped it by accident. I think it’s most likely we will eventually find a plausible natural development path but I can’t deny the other possibilities.

I reject the idea that the physical evidence supports an alternate theory. And, I would like to point out that there is no coherent laid out alternate ‘theory’. People in the ID community don’t even agree on that.

Why are you so willing to give up on natural processes and accept that there was some ancient designer when a) there’s no independent evidence of one and b) we’ve only been looking for less than 60 years? (Taking the discovery of DNA and the beginning of the real ability to search for the first basic replicator.)

Was keiths right about P(H) or not? If you answer I’ll shut up about it.

(127):

I’m in the vast majority on this.

You should be skeptical about that.

And yet another logical fallacy: argumentum ad populum.

I agree, it’s a logical fallacy to use that as an argument. I was merely pointing it out. In my mind, if the vast majority of scientists who have worked in any field associated with evolutionary theory have come to a similar conclusion, then there’s a fair bet it’s correct. But, not necessarily, I agree.

I have only refused to enter into discussions of OoL issues because of my own ignorance about the chemistry and the hypotheses already put forward.

I reject the idea that the physical evidence supports an alternate theory.

I have said before that the first basic replicator could have arrived via a meteor.

A clearer case of confirmation bias would be hard to imagine.

And, I would like to point out that there is no coherent laid out alternate ‘theory’.

It would be easy to goad you into debating the validity of this comment, but we’ve already been there. So what’s the point in it? There is none.

Why are you so willing to give up on natural processes and accept that there was some ancient designer

This is about material evidence, not wishes. ‘Willingness to give up on natural processes’ has nothing to do with it.

a) there’s no independent evidence of one and b) we’ve only been looking for less than 60 years?

Again, the evidence is material, so I am not certain what being “independent” of that would hope to mean. If you are talking about having to see the designer with my own two eyes, then I would ask you the same. You have admitted to placing your belief in recorded information arising from inanimate matter (by some unknown process). But have you seen it? If not, then on what do you place your belief? I can very easily and coherently tell you exactly what supports mine, but as already discussed, you refuse to engage in that. But the question is not why you refuse to engage. The question is why you continue to engage in not engaging. Why are you here to only talk about what doesn’t hinder your belief system? What purpose does that serve?

Again, the evidence is material, so I am not certain what being “independent” of that would hope to mean.

Something other than that which you are asserting was designed. As we find with ancient human species.

If you are talking about having to see the designer with my own two eyes, then I would ask you the same.

Oh gosh no, that would be ridiculous. I’m talking about something like archaeological evidence.

You have admitted to placing your belief in recorded information arising from inanimate matter (by some unknown process). But have you seen it? If not, then on what do you place your belief?

I’ve seen a lot yes. And I’ve found some. And I know lots of archaeologists. And a few paleontologists. And I’ve read a lot about ancient sites. Most of the evidence found on such sites is available for scrutiny by other researchers. Much is on public display.

I am also aware of how various dating techniques work, what their limitations and strengths are.

I can very easily and coherently tell you exactly what supports mine, but as already discussed, you refuse to engage in that. But the question is not why you refuse to engage.

I thought I was engaging? I would very much like to hear what supports your belief.

The question is why you continue to engage in not engaging. Why are you here to only talk about what doesn’t hinder your belief system? What purpose does that serve?

Initially I came to UD to find out what ID proponents were thinking. I seem to have fallen into the token Darwinist slot as I find myself answering many more questions than I ask now.

I think if we understand each other then we can work towards a future with more cordial discussions. I think the whole issue is imbued with too much rancour and ill feeling. So I’d like to try and help that situation.

Was keiths right about P(H) or not? If you answer I’ll shut up about it.

This has been asked and answered. If you like I’ll even find the link to where it was answered, if that will help.

But to answer your question, again, I cannot tell if keiths was right or not because he contradicted himself.

In one breath he says we have to know that something could not possibly [please assign the probability] have evolved in order to infer design. Then he says P(H) is very low, but not 0.

Now if he says that what he meant to say was not that it was not possible [what’s the probability on that, again, just for the record], just very unlikely, I think we can say he corrected himself. Do you consider that a retraction of his first statement? As far as I know he never retracted it, he merely said he didn’t mean what he said (I guess you could call that a retraction.)

But if he corrected or retracted his statement, I think it’s safe to say he was wrong. Don’t you?

Your response failed to address the confirmation bias in your previous post; where you first said you were ignorant of evidence, then suddenly knew enough to reject that evidence, then submitted your own conclusions instead.

– – – – – – – – – – – – – –

Something other than that which you are asserting was designed. As we find with ancient human species.

We know of ancient peoples because of material things. Nothing else. ID is no different.

Oh gosh no, that would be ridiculous. I’m talking about something like archaeological evidence.

Again, you have no material evidence to refute the (sufficient and necessary) system of recorded biological information which is being offered to you as a material artifact of design. Recorded information doesn’t just happen, it requires specific material conditions which must be met in order for it to exist.

UB: You have admitted to placing your belief in recorded information arising from inanimate matter (by some unknown process). But have you seen it? If not, then on what do you place your belief?

Jerad: I’ve seen a lot yes. And I’ve found some. And I know lots of archaeologists. And a few paleontologists. And I’ve read a lot about ancient sites. Most of the evidence found on such sites is available for scrutiny by other researchers. Much is on public display. I am also aware of how various dating techniques work, what their limitations and strengths are.

This is almost a shameless non-sequitur. The topic being discussed was ‘recorded information arising from inanimate matter’. Not archaeological sites.

I thought I was engaging? I would very much like to hear what supports your belief.

Each time I have tried to engage you, you have immediately stated that you consider all OoL issues a mystery and prefer only to talk about other things. It is disingenuous to say otherwise. If you have changed your mind, then you know the thread here and you are welcome to participate.

Initially I came to UD to find out what ID proponents were thinking. I seem to have fallen into the token Darwinist slot as I find myself answering many more questions than I ask now.

I think if we understand each other then we can work towards a future with more cordial discussions. I think the whole issue is imbued with too much rancour and ill feeling. So I’d like to try and help that situation.

Justifying your actions thusly does little to change the practical result; which in my estimation has been little more than a demonstration of confirmation bias. I am unsure how you can reduce rancor by ignoring the evidence presented by your conversation partners. It’s rather surprising that you think those actions would be successful.

If you did address the issue after all the discussing then I truly apologise for picking on you about it. So, yeah, I think you’d better give me the link ’cause I can’t remember. I can be very annoying just out of spite but that wasn’t the case this time. I really don’t remember what your conclusion was.

Just got a minute so pardon my brevity but, no, I thought what keiths said was correct. I DID somewhat misinterpret what he said but he and I both agreed he was NOT saying P(H) = 0.

If you still disagree then we can leave it there. There’s no need to have the same discussion again. I just was trying to be sure and, again, if I missed something then I do apologise.

More later . . . I’ll look over your points more closely. No time at the minute.

Doing this for the second time since the website spit up blood the first time:

This has been asked and answered. If you like I’ll even find the link to where it was answered, if that will help.

Yes please since I don’t recall the answer.

But to answer your question, again, I cannot tell if keiths was right or not because he contradicted himself.

In one breath he says we have to know that something could not possibly [please assign the probability] have evolved in order to infer design. Then he says P(H) is very low, but not 0.

Now if he says that what he meant to say was not that it was not possible [what’s the probability on that, again, just for the record], just very unlikely, I think we can say he corrected himself. Do you consider that a retraction of his first statement? As far as I know he never retracted it, he merely said he didn’t mean what he said (I guess you could call that a retraction.)

I think he was very consistent all the way through. I think he never said P(H) = 0 as you claimed. I think he said P(T|H) had to be low based on Dr Dembski’s criteria. And this is an essential point: keiths was just trying to explain Dr Dembski’s position. What he and I found particularly galling was that you were complaining about us trying to explain what Dr Dembski was saying. When it seemed that you were arguing based on what you THOUGHT Dr Dembski said but not what he actually said.

But if he corrected or retracted his statement, I think it’s safe to say he was wrong. Don’t you?

But he didn’t retract or correct his statements, nor did he need to.
_____Jerad, Mung has said he is gone for a few days; we do know that he earlier reported he was on vacation. I am busy with issues over aircraft and access policy etc intensified and polarised by the case of a fatal crash, and not at all of my choice or desire. Multiplied by issues over development policy trajectories, capacity building and project cycle management, and more. So, I do not have time to track down specific posts, but I can state that in looking some days back at threads where the exchange was in focus, I saw where Mung made a case that KS stated something that did imply that design theory as a premise demands that p(H) = 0 [blind chance/necessity — remember, high contingency comes from chance/choice, and necessity is going to lead to regularities, not wide variation under similar start points, think F = m*a etc — is impossible as an explanation], in an attempt to critique Dembski. Whilst KS also had to admit or imply that Dembski was looking at config space analyses that imply extremely low probabilities on blind chance plus mechanical necessity. Where also, you need to realise that you and others have side tracked discussion to long since poisoned debates over Dembski’s formulations of general models, when we have all along had on the table a simpler framework that returns us to the starting issues in NFL, and which is directly testable. Where, if the expression Chi_500 = I*S – 500 bits beyond the threshold is such that you have a good empirical counter example that is valid, the whole construct of not only FSCO/I but also with it CSI would collapse as a sign of design. The consistent side tracking goes strongly to show that you do not have such empirical counter examples in the teeth of billions of test cases that show that the criterion is a reliable sign. So, the test of FSCO/I is quite specific, an empirically reliable tested sign of design. KF.
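For readers unfamiliar with the notation, the Chi_500 expression quoted above can be sketched in a few lines of code. This is only an illustrative rendering of the stated formula (Chi_500 = I*S − 500 bits, where I is an information measure in bits and S is a 1/0 specificity dummy variable); the function name and example values here are hypothetical, not from any published implementation:

```python
def chi_500(info_bits, specific):
    """Illustrative sketch of the stated metric: Chi_500 = I*S - 500 bits.
    info_bits: measured information content I, in bits.
    specific:  True if the pattern is judged functionally specific (S = 1),
               False otherwise (S = 0), so non-specific patterns never
               exceed the 500-bit threshold."""
    s = 1 if specific else 0
    return info_bits * s - 500

# A 1000-bit functionally specific pattern lands above the threshold:
print(chi_500(1000, True))   # prints 500 (positive: past the threshold)
# The same bit count without specificity does not:
print(chi_500(1000, False))  # prints -500
```

The point of the sketch is simply that the criterion is a threshold test: only patterns that are both highly complex (large I) and specific (S = 1) yield a positive value.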

You just keep repeating your assertions without engaging the arguments against them.

How do we know they were ancient human species? By the designed artifacts they left behind.

Mung, if you are really calling into question the existence of ancient human species like Homo erectus and Neanderthals then I think it would be best to stop even having a conversation. If that’s your position then I do not understand your criteria for evidence as something that can be applied in any meaningful way to the historical sciences.

Are you confused by my use of the term human species? It’s a way of referring to species in the genus Homo.
__________

Jerad, I think you need to look again. The issue is, how are you inferring the activity of such, except by inferring on signs from their traces in light of observed causal factors and their signs? In short, Mung and I think others are challenging you to look at cases you accept and then think about why you reject materially similar cases that do not fit the scheme you have accepted. I raised this in respect of geochronology, for instance, where you obviously accept inferences on far less reliable signs than those relating to design, such as FSCO/I. KF

We know of ancient peoples because of material things. Nothing else. ID is no different.

But you assert an ancient designer based on the evidence of living forms several hundred generations descended from what you claim was designed. This is not analogous to material remains like hearths or pots or spears or ruins or bodies.

Again, you have no material evidence to refute the (sufficient and necessary) system of recorded biological information which is being offered to you as a material artifact of design. Recorded information doesn’t just happen, it requires specific material conditions which must be met in order for it to exist.

Yes, fossils must be in the right place at the right time to get made into fossils.

Evolutionary theory offers an explanation for life on earth which explains the data, is consistent with other branches of science and which involves no special pleading or assumption of anything other than observed natural forces.

ID wants to infer a cause not proven to be in existence at the time in question.

This is almost a shameless non-sequitur. The topic being discussed was ‘recorded information arising from inanimate matter’. Not archaeological sites.

I can see where I got that wrong. In which case I would refer to the stratigraphic data recorded in geologic layers, ice and sediment cores and geologic processes like plate tectonics.

Each time I have tried to engage you, you have immediately stated that you consider all OoL issues a mystery and prefer only to talk about other things. It is disingenuous to say otherwise. If you have changed your mind, then you know the thread here and you are welcome to participate.

I have not said they are a mystery, only that I don’t understand the issues. There is a difference and you are miscategorizing my comments.

And it’s pretty silly for you to be refusing to offer your opinion just because I’m holding back. If you’ve got a hypothesis then why not offer it up?

I find it happens a lot: when I ask ID proponents for their ideas or notions, they pull back and find some reason not to offer them up. And, really, what difference does it make what my opinion is? You’re not going to put your view forward because of what I do or do not think? Is that the way science progresses? Oh gosh, I’ll tell you my view but you have to go first? Really? I don’t think so.

I DON’T KNOW how the first basic replicator arose on earth. But what that has to do with anyone else offering their opinion I can’t say or begin to understand.

Justifying your actions thusly does little to change the practical result; which in my estimation has been little more than a demonstration of confirmation bias. I am unsure how you can reduce rancor by ignoring the evidence presented by your conversation partners. It’s rather surprising that you think those actions would be successful.

Since I don’t think I have ignored any issues or data are you saying that I should just quit if I don’t agree with you? Do you not allow for a dissenting viewpoint that doesn’t include design?

But you assert an ancient designer based on the evidence of living forms several hundred generations descended from what you claim was designed. This is not analogous to material remains like hearths or pots or spears or ruins or bodies.

Uh, okay. What principle is at work to say that because living things have the property of Life, the application of our knowledge regarding material regularities and processes are subsequently invalid?

Evolutionary theory offers an explanation for life on earth which explains the data, is consistent with other branches of science and which involves no special pleading or assumption of anything other than observed natural forces.

You must be joking, right? Evolutionary theory offers absolutely no explanation whatsoever for the existence of life on Earth. What in the world makes you think otherwise? Frankly, you seem to be completely unaware of the data. Darwin himself assumed life, and Darwinist (a term I rarely use) have been assuming life ever since. Hello?

ID wants to infer a cause not proven to be in existence at the time in question.

Your blind spot is, ahem, large.

In virtually the same way in which you previously stepped over your demonstrated confirmation bias, you now simply want to step over your assumptions. So let’s be clear. You point to the ID advocate and say “you believe in a thing which you have no evidence exists” and then you turn right around and believe in a thing which you have no evidence exists. Understand?

You have absolutely no process or mechanism to point to as the cause of the necessary symbol system or the biological information which is required to organize a living thing. You believe in a thing that you have no evidence exists.

The distinction between us, of course, is that I have material evidence (confirmed as both a universal empirical observation as well as a logical necessity) which intractably demonstrates the artifact of an agent … while you have nothing of the kind.

Uh, okay. What principle is at work to say that because living things have the property of Life, the application of our knowledge regarding material regularities and processes are subsequently invalid?

They change and evolve on their own whereas non-living things will be substantially the same as they were when last modified by an intelligent cause.

You must be joking, right? Evolutionary theory offers absolutely no explanation whatsoever for the existence of life on Earth. What in the world makes you think otherwise? Frankly, you seem to be completely unaware of the data. Darwin himself assumed life, and Darwinist (a term I rarely use) have been assuming life ever since. Hello?

Darwinian theory only hypothesises the first basic replicator. The rest comes from universal common descent with modification.

Your blind spot is, ahem, large.

In virtually the same way in which you previously stepped over your demonstrated confirmation bias, you now simply want to step over your assumptions. So let’s be clear. You point to the ID advocate and say “you believe in a thing which you have no evidence exists” and then you turn right around and believe in a thing which you have no evidence exists. Understand?

Except universal common descent with modification has several lines of evidence which all point to a common ancestor. Not just one maybe class of evidence. Fossils + genetics + morphology + geographic distributions. The case is much stronger.

You have absolutely no process or mechanism to point to as the cause of the necessary symbol system or the biological information which is required to organize a living thing. You believe in a thing that you have no evidence exists.

We’ve observed in the lab how mutations in DNA lead to new features/abilities. Given the first basic replicator and knowing there’d be mutations/copying errors/duplications/etc we can get the variety of life we see now. We have centuries of breeding experience that shows cumulative selection working on a base of mutational variation can introduce great changes in appearance and abilities.
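Jerad’s “cumulative selection working on a base of mutational variation” can be sketched as a toy simulation in the style of Dawkins’ well-known Weasel program. To be clear about what the toy does and does not show: it has a fixed target phrase supplied in advance, so it illustrates the power of cumulative over single-step selection, not the origin of the target itself. All names and parameter values here (mutation rate, population size) are my own illustrative choices.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # Each character has a small chance of being copied with an error.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    # Fitness = number of positions matching the target phrase.
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
start = parent
for generation in range(1, 1001):
    # Keep the parent in the pool so fitness never decreases (elitism),
    # then select the fittest of 100 mutated offspring as the new parent.
    pool = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(pool, key=score)
    if parent == TARGET:
        break
```

With these settings the phrase is typically matched within a few hundred generations, whereas drawing the 28-character phrase in one random shot would take on the order of 27^28 tries; that gap is the point of the “cumulative” qualifier.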

The distinction between us, of course, is that I have material evidence (confirmed as both a universal empirical observation as well as a logical necessity) which intractably demonstrates the artifact of an agent … while you have nothing of the kind.

We’ve observed in the lab how mutations in DNA lead to new features/abilities. Given the first basic replicator and knowing there’d be mutations/copying errors/duplications/etc we can get the variety of life we see now. We have centuries of breeding experience that shows cumulative selection working on a base of mutational variation can introduce great changes in appearance and abilities.

1: We have never observed the origin of significant features by chance variation and natural selection that would suggest the possibility of body plan origin by the same.

2: We are not given the first replicator. That is the pivotal challenge that decisively exposes the emptiness of the chance hyp. By now you know there is no credible, empirically well grounded theory of OOL driven by chance and necessity. So you are begging the question of the root of the tree of life.

3: You have no empirical evidence that accumulated errors filtered by differential reproductive success in light of chance and environmental constraints, can originate novel body plans. This too is a hugely begged question, with no adequate cause shown.

4: By substituting breeding, you make multiple errors. First, breeding is mostly about reshuffling already existing genetic capacities and moving to extremes within an existing genome, so it is well known that breeding exercises frequently hit hard limits beyond which the variety will not go. Next, it is an exercise in ARTIFICIAL selection, i.e. intelligent design. And to see whether it passes the environmental fitness advantage test, observe that such domestic varieties typically cannot compete against wild ones in natural environments. Think, crops vs weeds etc.

Overall, you are highlighting the gaps in, not the achievements of, your view.

In relation to ID, what does the hypothesis “therefore design” actually explain?

It explains that the thing in question arose via agency involvement. And that alone changes the investigation. IOW it makes a huge difference, Alan. And if you had any investigative experience you would have known that.

That said, what does the hypothesis “it just happened” (your position, Alan) actually explain?

UB: Uh, okay. What principle is at work to say that because living things have the property of Life, the application of our knowledge regarding material regularities and processes are subsequently invalid?

Jerad: They change and evolve on their own whereas non-living things will be substantially the same as they were when last modified by an intelligent cause.

This says nothing, and answers nothing. Our knowledge of material applies just as much to one as the other, and is just as valid.

UB: You must be joking, right? Evolutionary theory offers absolutely no explanation whatsoever for the existence of life on Earth. What in the world makes you think otherwise? Frankly, you seem to be completely unaware of the data. Darwin himself assumed life, and Darwinist (a term I rarely use) have been assuming life ever since. Hello?

Jerad: Darwinian theory only hypothesises the first basic replicator. The rest comes from universal common descent with modification.

Exactly what I said. Darwinian evolution simply assumes life, and therefore it is not an explanation of it. So your statement that Darwinian evolution explains life is 100% incorrect. Period.

UB: Your blind spot is, ahem, large.

In virtually the same way in which you previously stepped over your demonstrated confirmation bias, you now simply want to step over your assumptions. So let’s be clear. You point to the ID advocate and say “you believe in a thing which you have no evidence exists” and then you turn right around and believe in a thing which you have no evidence exists. Understand?

Jerad: Except universal common descent with modification has several lines of evidence which all point to a common ancestor. Not just one maybe class of evidence. Fossils + genetics + morphology + geographic distributions. The case is much stronger.

lol. Darwinian evolution does not explain the existence of life. Do you not understand this? Fossils do not explain the existence of life. Genetics do not explain the existence of life. Morphology does not explain the existence of life. Geographic distributions do not explain the existence of life. Your case (that Darwinian evolution explains life on earth) is not made one iota stronger. Simply chanting a list of things that have no impact whatsoever on your claim does nothing to make that claim stronger, or even valid. Is it even possible that you do not understand this?

Ah yes, and I love how you say “not just one maybe class of evidence”. Are you perhaps referring to that one little bitty observation that nothing happens without the recorded information which Darwinian evolution is 100% dependent upon?

Did I mention that you demonstrate a great deal of confirmation bias? 🙂

We’ve observed in the lab how mutations in DNA lead to new features/abilities. Given the first basic replicator and knowing there’d be mutations/copying errors/duplications/etc we can get the variety of life we see now. We have centuries of breeding experience that shows cumulative selection working on a base of mutational variation can introduce great changes in appearance and abilities.

Once again, none of this even begins to explain the existence of life. It simply assumes life, but offers nothing whatsoever to explain its existence. Give it a rest already.

UB: The distinction between us, of course, is that I have material evidence (confirmed as both a universal empirical observation as well as a logical necessity) which intractably demonstrates the artifact of an agent … while you have nothing of the kind.

Jerad: Guess we’ll just have to disagree then!!

This isn’t a disagreement – you’ve brought nothing to the table.

You first state that Darwinian evolution explains Life on Earth (which Darwin himself disagreed with), then to support your claim, you repeat a list of items which offer no explanation for the existence of Life on Earth.

And all the while, you disregard evidence which brings this flaw in your position to light. As a simple observation of your words, you live in a self-sustained, self-affirming, self-isolating cocoon.

1: We have never observed the origin of significant features by chance variation and natural selection that would suggest the possibility of body plan origin by the same.

And you call me a hyper-skeptic! The fossil, genetic, morphologic, geographic and breeding data all point to it being able to happen.

2: We are not given the first replicator. That is the pivotal challenge that decisively exposes the emptiness of the chance hyp. By now you know there is no credible, empirically well grounded theory of OOL driven by chance and necessity. So you are begging the question of the root of the tree of life.

Well then you’d best argue with someone else. I think that all the lines of data point to a common first replicator.

3: You have no empirical evidence that accumulated errors filtered by differential reproductive success in light of chance and environmental constraints, can originate novel body plans. This too is a hugely begged question, with no adequate cause shown.

Sure I do; I’ve got the fossil, genetic, morphologic, geographic and breeding data which all point to that happening. Is there an echo in here?

4: By substituting breeding, you make multiple errors. First, breeding is mostly about reshuffling already existing genetic capacities and moving to extremes within an existing genome, so it is well known that breeding exercises frequently hit hard limits beyond which the variety will not go. Next, it is an exercise in ARTIFICIAL selection, i.e. intelligent design. And to see whether it passes the environmental fitness advantage test, observe that such domestic varieties typically cannot compete against wild ones in natural environments. Think, crops vs weeds etc.

Artificial selection works with the same basic raw materials and processes as natural selection. It’s going to be faster but it shows that cumulative selection operating on descent with variation can radically alter morphology. Dog breeds, brassicas, rose varieties all show what can be done in just a few centuries. Rutabaga, turnips, kohlrabi, cabbage, kale, cauliflower, broccoli and Brussels sprouts were all cultivated from the same wild plant stock mostly in the last 1000 years. There are some pretty impressive ‘body plan’ changes in that group.

Overall, you are highlighting the gaps in, not the achievements of, your view.

Just out of curiosity … if my position is so weak why do you continue to argue with me?

They change and evolve on their own whereas non-living things will be substantially the same as they were when last modified by an intelligent cause.

This says nothing, and answers nothing. Our knowledge of material applies just as much to one as the other, and is just as valid.

You don’t think that life forms being able to descend with modification affects the way we look at them from a processes point of view? Wow.

Darwinian theory only hypothesises the first basic replicator. The rest comes from universal common descent with modification.

Exactly what I said. Darwinian evolution simply assumes life, and therefore it is not an explanation of it. So your statement that Darwinian evolution explains life is 100% incorrect. Period.

Okay, how about I change my statement to universal common descent with modification explains the development of life since the first basic replicator? Is that better? I kind of figure you know what I mean since I’ve said the same thing many, many times.

lol. Darwinian evolution does not explain the existence of life. Do you not understand this? Fossils do not explain the existence of life. Genetics do not explain the existence of life. Morphology does not explain the existence of life. Geographic distributions do not explain the existence of life. Your case (that Darwinian evolution explains life on earth) is not made one iota stronger. Simply chanting a list of things that have no impact whatsoever on your claim does nothing to make that claim stronger, or even valid. Is it even possible that you do not understand this?

Ah yes, and I love how you say “not just one maybe class of evidence”. Are you perhaps referring to that one little bitty observation that nothing happens without the recorded information which Darwinian evolution is 100% dependent upon?

Did I mention that you demonstrate a great deal of confirmation bias?

Most humans do exhibit at least some confirmation bias. It’s hard to avoid.

The ‘one class’ of evidence I was referring to was DNA. I hope my changed statement above addresses your list of ‘do not explain’s.

This isn’t a disagreement – you’ve brought nothing to the table.

You first state that Darwinian evolution explains Life on Earth (which Darwin himself disagreed with), then to support your claim, you repeat a list of items which offer no explanation for the existence of Life on Earth.

And all the while, you disregard evidence which brings this flaw in your position to light. As a simple observation of your words, you live in a self-sustained, self-affirming, self-isolating cocoon.

I shall attempt to be more specific in the future regarding what I think evolutionary theory explains.

Jerad: Okay, how about I change my statement to universal common descent with modification explains the development of life since the first basic replicator?

What does it “explain” exactly?

As an engineer I find the way Darwinists (i.e., believers in the Blind Watchmaker Thesis) throw around the term “explain” to be very puzzling. It’s kind of like this:

Let’s say we visit a factory where pottery is being made. Raw materials go in the front door, and we can see how the humans mold, form, and bake the pottery. The finished product goes out the back door.

Now, we come across a factory that makes airplanes. We have no access to the inside of the factory. We see raw materials go in the front door and finished products go out the back door. We don’t know exactly what is going on inside, but by extrapolation we feel confident that what is going on inside the airplane factory is essentially an extension of the same process as what is going on in the pottery factory.

Yeah right.

You guys see little micro changes in genomes and their small effects and somehow in your thinking this is catapulted across all the huge gaps into an “explanation” for the creation of novel cell types, tissue types, organs and body plans.

I cry foul.

Show us how the known processes can generate novel cell types, tissue types, organs and body plans. Prove your concept to the scale you claim.

P.S. please demonstrate that even the known types of genomic variation existed 500 million years ago.

You don’t think that life forms being able to descend with modification affects the way we look at them from a processes point of view?

It’s a non-sequitur. You’ve lost your place. The question is about the existence of living things on earth, and what that existence entails. Your objection was that you wanted “independent” evidence for the existence of a designer. I returned that the evidence we have is purely material, just exactly like any other we have for anything else in the deep past, and it is therefore just as valid. You disagreed because living things replicate, and you’ve now added the “process” of evolution. But the fact that living things evolve by a process does not explain their existence in the first place – no more than the process of combustion explains the existence of your car. The simple fact remains that we have material evidence that points to a material event in the deep past (the onset of recorded information at the origin of life) and that event dictates the sufficient and necessary condition of recorded information, which intractably infers the act of an agent.

Wow.

Your feigned indignation doesn’t impact the evidence.

Okay, how about I change my statement to universal common descent with modification explains the development of life since the first basic replicator? Is that better? I kind of figure you know what I mean since I’ve said the same thing many, many times.

I understand perfectly what you are saying, and I have understood you from the first time you said it. However, what I am saying is that Darwinian evolution explains nothing whatsoever about the existence of life, and I have made it perfectly clear that you rely on the fact of evolution (i.e. that things change over time) as the intellectual means to ignore the larger issue that Darwinian evolution does nothing whatsoever to explain the existence of life – the very thing that needs to be explained (i.e. the ID thing which you deny).

The ‘one class’ of evidence I was referring to was DNA. I hope my changed statement above addresses your list of ‘do not explain’s.

Not in the slightest. First off, DNA is not a “maybe class” of evidence; it is a concrete reality that is the distinction between living things and inanimate matter. Secondly, the information recorded in DNA requires very special and unique material conditions in order to exist (and function), which none of the things on your list even begins to explain.

I shall attempt to be more specific in the future regarding what I think evolutionary theory explains.

If you find my position so derisible why are you arguing with me?

It’s rather simple, actually. You come here to deride ID while hiding behind a process which cannot even exist without the evidence which supports ID, and consequently your process does nothing whatsoever to impact it. When you say you came here to find out what ID people think, is this not what you wanted to hear?

No need to answer, I will drop out from the conversation, given that evidence for ID does not matter anyway.

You guys see little micro changes in genomes and their small effects and somehow in your thinking this is catapulted across all the huge gaps into an “explanation” for the creation of novel cell types, tissue types, organs and body plans.

I cry foul.

As is your right, even if you do not hold yourself to the same criteria. But I do have the fossil, genetic, morphologic, geographic and breeding records to back me up.

I do not know your particular flavour of ID but does it exhibit the same level of detail and explanation you are asking of the modern evolutionary synthesis?

Show us how the known processes can generate novel cell types, tissue types, organs and body plans. Prove your concept to the scale you claim.

I do not claim to be able to elucidate the exact molecular pathway that occurred to produce any modern life form. But I’ve got a lot of consistent and coherent evidence which points in that direction.

Does your hypothesis generate answers to the questions you ask? Is it fair to ask you questions about how, when and where the designers did their work? I’m always told it’s not cricket yet I get asked even more specific questions.

P.S. please demonstrate that even the known types of genomic variation existed 500 million years ago.

Without assuming uniformity you can’t really ‘do’ historical science. If you throw away that assumption then everything is unknown and nothing can be established. You might as well go back to multiple gods and their local shrines.

You don’t think that life forms being able to descend with modification affects the way we look at them from a processes point of view?

It’s a non-sequitur. You’ve lost your place. The question is about the existence of living things on earth, and what that existence entails. Your objection was that you wanted “independent” evidence for the existence of a designer. I returned that the evidence we have is purely material, just exactly like any other we have for anything else in the deep past, and it is therefore just as valid. You disagreed because living things replicate, and you’ve now added the “process” of evolution. But the fact that living things evolve by a process does not explain their existence in the first place – no more than the process of combustion explains the existence of your car.

No, but the fact that there is descent with modification implies that looking at modern lifeforms without considering the fossil, genetic, morphologic and geographic records means you have to be very, very cautious about claiming their origin is due to design.

You are really convinced that not knowing the nature of the first basic replicator chops down the whole Darwinian tree. I’ve said before that the first basic replicator could have ridden to earth on an asteroid or fallen out of an ancient astronaut’s lunch box. It doesn’t change the evolutionary argument. Nor does it explain the initial replicator. You seem determined to accept no conclusion other than design. Are you sure you’re not biased?

The simple fact remains that we have material evidence that points to a material event in the deep past (the onset of recorded information at the origin of life) and that event dictates the sufficient and necessary condition of recorded information, which intractably infers the act of an agent.

I disagree. I think at the very least you have to say: we don’t know. But you are very sure and that makes me very suspicious.

I understand perfectly what you are saying, and I have understood you from the first time you said it. However, what I am saying is that Darwinian evolution explains nothing whatsoever about the existence of life, and I have made it perfectly clear that you rely on the fact of evolution (i.e. that things change over time) as the intellectual means to ignore the larger issue that Darwinian evolution does nothing whatsoever to explain the existence of life – the very thing that needs to be explained (i.e. the ID thing which you deny).

Why don’t you just say the origins of life if that’s what you mean? And if that’s not what you mean then you’d better explain yourself more fully ’cause then I think I’m missing something.

I’m not denying anything. I just don’t find the need to bring in any ’causes’ other than those natural, undirected processes we have observed and measured and defined already.

Not in the slightest. First off, DNA is not a “maybe class” of evidence; it is a concrete reality that is the distinction between living things and inanimate matter. Secondly, the information recorded in DNA requires very special and unique material conditions in order to exist (and function), which none of the things on your list even begins to explain.

I think the evolutionary paradigm explains the ‘information’ in DNA nicely. The environmental pressures ‘favour’ certain life forms or DNA sequences, those ‘favoured’ individuals leave proportionally more offspring thereby shifting the allele balance in the population and this process continues. Eventually you have life forms which have been ‘tailored’ to suit the environment and the ‘information’ in their DNA contains instructions on how to build a well-adapted life form for that environment. Cumulative selection acting on random variation. Powerful stuff.
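The allele-frequency shift described in the previous paragraph is the textbook one-locus selection model, and it is simple enough to sketch directly. This is a minimal illustration under stated assumptions: haploid selection, a constant 5% fitness advantage, and a starting frequency of 1% are all my own example values, not figures from the thread.

```python
def next_gen_freq(p, s):
    """One generation of haploid selection: allele A (frequency p) has
    relative fitness 1+s versus the alternative allele's fitness of 1.
    Standard replicator form: p' = p(1+s) / (p(1+s) + (1-p))."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p = 0.01                     # the favoured allele starts rare
for _ in range(500):
    p = next_gen_freq(p, s=0.05)
# After 500 generations of a 5% advantage, p is very close to 1,
# i.e. the favoured allele approaches fixation in the population.
```

Setting s = 0 leaves the frequency unchanged, which is the sense in which the “favouring” by the environment, not the variation itself, does the shifting.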

It’s rather simple, actually. You come here to deride ID while hiding behind a process which cannot even exist without the evidence which supports ID, and consequently your process does nothing whatsoever to impact it. When you say you came here to find out what ID people think, is this not what you wanted to hear?

That is simply not true. I have not come here to deride ID. I have tried to be respectful and behave in an objective manner. I have not called names or made fun of anyone, unlike some of the UD commentators, it has to be said.

I get asked questions and so I try and answer them. If you feel my answers deride ID then that is your interpretation.

No need to answer, I will drop out from the conversation, given that evidence for ID does not matter anyway.

-best regards

That is your call. I don’t think we have to agree to understand each other’s opinions. I don’t expect to convert anyone. But I keep getting the feeling that my not being converted to the ID point of view is offensive in some way. Why do you think that is?

No, but the fact that there is descent with modification implies that looking at modern lifeforms without considering the fossil, genetic, morphologic and geographic records means you have to be very, very cautious about claiming their origin is due to design.

The claim I am making regarding the (logically and empirically validated) material conditions required for recorded information is not impacted by the fossil record. The simple fact is that the fossil record would not even exist without those material conditions (i.e. Darwinian evolution is entirely dependent on them). Why is this so hard for you to grasp? Perhaps your lack of understanding is tied to the fact that you choose not to engage the argument, preferring to shield your views from any evidence to the contrary.

You are really convinced that not knowing the nature of the first basic replicator chops down the whole Darwinian tree.

“chops down the whole Darwinian tree”?

Would you mind trying to apply yourself a little more to the topic? First, I have no need to chop down the Darwinian tree. Secondly, there are certain characteristics of the first replicator that are generally understood. I am arguing for those required characteristics, and you are ignoring that argument.

I’ve said before that the first basic replicator could have ridden to earth on an asteroid or fallen out of an ancient astronaut’s lunch box.

So please allow me to take you at face value. If I infer the act of an agent from material evidence and logical necessity, then you demand I show you evidence of an agent. But in the effort to brush aside that same material evidence, you are happy to posit things you don’t even believe, like ancient astronauts with lunch boxes. Great.

You seem determined to accept no conclusion other than design. Are you sure you’re not biased?

You are welcome to attack me after you address the evidence, not before. Otherwise, it’s a fallacy.

I disagree. I think at the very least you have to say: we don’t know. But you are very sure and that makes me very suspicious.

What exactly is it that you could disagree with, given that you are unwilling to engage the evidence?

I’m not denying anything. I just don’t find the need to bring in any ’causes’ other than those natural, undirected processes we have observed and measured and defined already.

Good grief. Do you hear yourself? You do not have a cause to “bring in” that can explain what must be explained, but you apparently don’t know this because you refuse to engage the evidence. So instead, you bring in the causes that don’t work – and simply assert they do. Does this not embarrass you at all?

I think the evolutionary paradigm explains the ‘information’ in DNA nicely.

You simply do not know what you are talking about, and I think you may prefer it that way. Darwinian evolution requires the existence of recorded information. As a simple matter of fact, it is the information that does the evolving. If there is no recorded information, then there is no Darwinian evolution. And there can be no recorded information without the existence of specific material conditions. These material conditions are unique among material processes. Darwinian evolution cannot be the source of these conditions, because it (itself) is entirely dependent upon them. To say otherwise is to say that a thing that does not exist can cause something to happen.

Let me ask you a question: Do you think a thing that does not exist can cause something to happen?

The claim I am making regarding the (logically and empirically validated) material conditions required for recorded information is not impacted by the fossil record. The simple fact is that the fossil record would not even exist without those material conditions (i.e. Darwinian evolution is entirely dependent on them). Why is this so hard for you to grasp? Perhaps your lack of understanding is tied to the fact that you choose not to engage the argument, preferring to shield your views from any evidence to the contrary.

I guess I’ll have to read at least part of ‘your’ thread laying out your argument. I remember skimming parts of it, but by the time I had a look the discussion had gotten quite convoluted, and I decided not to stick my oar in without having done the work of reading stuff first. Anyway, it’s not fair of me to make any more comments without first having a look.

“chops down the whole Darwinian tree”?

Would you mind trying to apply yourself a little more to the topic? First, I have no need to chop down the Darwinian tree. Secondly, there are certain characteristics of the first replicator that are generally understood. I am arguing for those required characteristics, and you are ignoring that argument.

I will try and find some time to read ‘your’ thread before I comment further.

So please allow me to take you at face value. If I infer the act of an agent from material evidence and logical necessity, then you demand I show you evidence of an agent. But in the effort to brush aside that same material evidence, you are happy to posit things you don’t even believe, like ancient astronauts with lunch boxes. Great.

I was just pointing out that from my point of view, and for what I am arguing, there are multiple possible sources of the first basic replicator. My argument is completely separate from yours obviously.

Good grief. Do you hear yourself? You do not have a cause to “bring in” that can explain what must be explained, but you apparently don’t know this because you refuse to engage the evidence. So instead, you bring in the causes that don’t work – and simply assert they do. Does this not embarrass you at all?

I think undirected natural causes are adequate so it doesn’t embarrass me at all.

I will make an effort to look at your argument as laid out in your thread.

You simply do not know what you are talking about, and I think you may prefer it that way. Darwinian evolution requires the existence of recorded information. As a simple matter of fact, it is the information that does the evolving. If there is no recorded information, then there is no Darwinian evolution. And there can be no recorded information without the existence of specific material conditions. These material conditions are unique among material processes. Darwinian evolution cannot be the source of these conditions, because it (itself) is entirely dependent upon them. To say otherwise is to say that a thing that does not exist can cause something to happen.

Let me ask you a question: Do you think a thing that does not exist can cause something to happen?

Unless I’m misinterpreting what you’re getting at then no, I do not believe that something that does not exist can cause something to happen. I think it takes material or energy causes to affect material or energy.

Opinions mean nothing. Only science matters. And your position only has opinions and no science.

In our past discussions you have hypothesised an extra source of information in the cell/genome which accounts for adaptation and . . . lots of other stuff. What science or data have you got to bolster your opinion?

(157)

I think undirected natural causes are adequate so it doesn’t embarrass me at all.

Adequate for what, exactly? And what is the evidence that supports it?

The fossil, genetic, morphologic, geologic and breeding records are good evidence to support the contention that universal common descent with modification from a common ancestor is true.

What is your better model? Seriously. You stand on the sidelines and bitch and run but you never really stick your neck out and cough up a well thought out model which works better. I’d be really happy to consider such a model if you proposed one. But you haven’t. Science is about coming up with explanations. Okay, let’s hear yours.

Joe, Jerad likes those undirected natural causes. He can’t point to which ones they are. Or how they operate. Or a single example of them actually doing the required work of creation.

But, hey, let’s not get in the way of a good a priori commitment to materialistic causes. Wouldn’t want to shake the faith now would we?

Okay Eric, if I’m wrong then tell me your alternate hypothesis which does a better job of explaining the data, is consistent with known science and requires no special pleading. Seriously. Time to put your money where your mouth is. Give us all an alternative that does the job better.

We’ll just stop all this fussing about my opinion and cut to the chase: what have you got that works better? In all ways?

Well, I am back. I don’t know how much time I will be able to dedicate to the topic, but I will try.

I am happy that not too much discussion went on during my absence. That makes my catch up easier.

First of all, I would like to say that I agree with what some of you have said, that the purpose of the discussion is not to win or lose a challenge, but to clarify the dFSCI procedure with examples. In that sense, there is in principle no reason why we should be antagonists in that. If the procedure can be applied, why should you object to that?

So, let’s work together and constructively.

Another point that is maybe not so clear is the nature of the “challenge” I paste here:

“Give me any number of strings of which you know for certain the origin. I will assess dFSCI in my way. If I give you a false positive, I lose. I will accept strings of a predetermined length (we can decide), so that at least the search space is fixed.”

The important part is, as I have consistently said in all the previous discussion, that the test needs “strings of which you know for certain the origin”. That means that you should propose strings that were:

a) designed

or

b) not designed.

That’s why I objected to GAs: not because I would have any problems in applying the procedure to any string produced by a GA. As I have said many times, when we apply the procedure we know nothing of the origin of the string. The problem is: how would you consider a string outputted by a GA? I would obviously consider it as a string that has a design origin. Some of you would probably try to affirm that it does not have a designed origin, but on what basis? There can be no doubt that the origin of the string is from design.

You may ask: what if I use a Random String Generator? I think we can accept that as an algorithm producing random strings, if it really works only as an RSG. Well, the algorithm would still be designed, but I think we can agree to accept it as a reasonable substitute for a slower random system, such as a coin-tossing system. So, I would certainly accept the output of such software as “non designed strings”.

Now I have not much time, so we can clarify better these points later.

Well, let’s start with what is simple. I make a reasonable assumption that the above string uses the English alphabet, or a very similar one, as its basic alphabet, including the space character. That would give 27 characters, but indeed, if I am not wrong, 5 of them are not present in the string, so I would cautiously say that the alphabet used here is of at least 21 letters. The string is 180 characters long, so the search space is about 790 bits. I suppose that should not offer any problem to anyone.
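This search-space arithmetic is just length × log2(alphabet size); a quick sketch in Ruby (to match the code discussed later in the thread) confirms both figures used here:

```ruby
# Search space in bits for a string of a given length over a given alphabet:
# bits = length * log2(alphabet_size)
def search_space_bits(length, alphabet_size)
  length * Math.log2(alphabet_size)
end

puts search_space_bits(180, 21)  # ~790.6, i.e. "about 790 bits"
puts search_space_bits(150, 27)  # ~713.2, the figure used for the 150-character string
```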

The second point would be: is there a functional specification recognizable here?

Well, Petrushka has not offered any help. At first sight, I cannot see any recognizable function in the string. It has some of the formal aspects of language, and obviously some similarity to existing words, possibly from different existing languages. If it were language, the functional specification would be a meaning. At present, I can detect no meaning in the string.

I wondered if that could be some artificial language, like Esperanto, but a very quick Google search does not seem to support that hypothesis.

The single words, however, seem to have meaning: verwarten is German, mystiness is English, holones would be Spanish, and so on.

My best guess is that it is a sequence of words in different languages, not connected in a phrase. I would ask Petrushka, please, if he can confirm that. If the words form a phrase with meaning, my reasoning would be different, but frankly I don’t want to spend a lot of time trying to “translate” from a non-existent language.

So, I will go on according to my assumption: indeed, I have not considered all the words, for brevity. Some of them, like valateria, don’t seem to be words, but could be names.

However, the sequence could at most correspond to a generic specification: any sequence (that long) of existing words, in any known language.

Now, if the phrase were a phrase, with a recognizable, well expressed meaning, I really would have no problems in attributing dFSCI to it. The reasoning would be as follows:

a) The search space is extremely big. Much greater than any proposed threshold for CSI. Much greater than 500 bits.

b) Calculating an exact target space for language is not an easy task. However, I have shown elsewhere that it is possible to demonstrate, for language, and in particular for compact phrases without big redundancies, that the dFSI necessarily increases as the length of the sequence increases. That result is probably valid for all digital sequences, but is particularly obvious for meaningful language. I will not repeat the demonstration here, but if someone is interested, we can discuss it.

So, I am perfectly confident that any meaningful and compact phrase as long as the one proposed is certainly beyond 500 bits of dFSI.

Moreover, I am aware of no natural mechanism that can output meaningful language beyond a minimal complexity. I am also perfectly confident that none will ever be found, but that is not strictly necessary for the reasoning.

So, very briefly, if I were aware that the above phrase has a good compact meaning (even if expressed through words of different languages) I would definitely assess it as exhibiting dFSCI.

However, as a simple sequence of existing words, it is more difficult to give a quick answer. The evaluation of the target space is more difficult. Indeed, at present I have no idea how to approximate it. So, while intuitively I would think that, probably, if a good approximation of the target space could be obtained, the sequence could still be considered as exhibiting dFSCI, for the moment I would cautiously abstain from that conclusion, because I have developed no reasonable way to evaluate the target space of all possible sequences, of a certain length, of existing words, of any length, in any language.

You proposed a string for which I have found at least one interesting function: as soon as I try to paste it in this form, everything crashes 🙂

So, I will not post it here. You can find it in post #111.

Well, this is a 150-character string. The potential search space, with the whole English alphabet plus space, would be 713 bits.

It clearly appears to be source code. At this point, I would kindly ask Mung if he can offer the following information:

a) The language

b) If it is a complete source code, or just a piece of it

c) If it can be compiled as it is

d) What would the compiled software do, and in what environment?

That information would be useful for a more detailed analysis, and to correctly define the function.

In general, if we can correctly define a function for the source code, I would say that we are probably in a condition that allows us to assess dFSCI as present, because the string is long enough to qualify according to the principles I have suggested for language, and I believe that source code obeys the same rules as meaningful language where the length/dFSI relationship is concerned.

In the same way, I am aware of no natural way to generate working software beyond a minimal complexity.

So, with some help from Mung, we could probably classify this as a positive.

Maybe I don’t understand your point. I have tried the first set of papers, but they do not seem to have anything in common. So, what is your specification? Any sequence of numbers that can correspond, as a PMID, to any generic PubMed paper?

It clearly appears to be source code. At this point, I would kindly ask Mung if he can offer the following information:

a) The language

b) If it is a complete source code, or just a piece of it

c) If it can be compiled as it is

d) What would the compiled software do, and in what environment?

a) Ruby
b) The code defines a function (aka method). In that sense it is complete. (see d)
c) Ruby is an interpreted language, so no compilation is required.
d) The function accepts a binary string and returns an ascii string by scanning the input string and taking each sequence of 7 bits and converting the seven bits to an ascii character.
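From that description, the function is presumably something along these lines. This is my own reconstruction (the actual code from post #111 could not be pasted here, and the method name is invented):

```ruby
# A sketch of the function Mung describes: scan a binary string in groups
# of 7 bits and convert each group to its ASCII character.
def binary_to_ascii(bits)
  bits.scan(/[01]{7}/).map { |group| group.to_i(2).chr }.join
end

binary_to_ascii("10010001001001")  # => "HI" (1001000 = 72 = "H", 1001001 = 73 = "I")
```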

Conditions? What do you mean? Are you referring to my request not to use the output of GAs?

As I have explained in my post #162:

“That’s why I objected to GAs: not because I would have any problems in applying the procedure to any string produced by a GA. As I have said many times, when we apply the procedure we know nothing of the origin of the string. The problem is: how would you consider a string outputted by a GA? I would obviously consider it as a string that has a design origin. Some of you would probably try to affirm that it does not have a designed origin, but on what basis? There can be no doubt that the origin of the string is from design.

You may ask: what if I use a Random String Generator? I think we can accept that as an algorithm producing random strings, if it really works only as an RSG. Well, the algorithm would still be designed, but I think we can agree to accept it as a reasonable substitute for a slower random system, such as a coin-tossing system. So, I would certainly accept the output of such software as “non designed strings”.”

No, now I understand what you mean. You can certainly predefine a list of papers. That would be a pre-specification, because the papers have nothing in common, and cannot be defined in any other way than by listing them.

So, if you predefine a list of papers before the string is generated, then you are right: the string that is generated exhibits dFSCI. The situation would be similar to specifying a definite sequence of a deck of cards, and then having it come out. A very strange event, one that would suggest design in the form of cheating!

But if you define the deck of cards after it was obtained, you are obviously simply “post-specifying” a random event that already occurred. That is not a valid specification.

Pre-specification is a valid specification (indeed, not a functional one in the proper sense, but I can accept it as a “stretched” form of function). But it is of no practical use.

But yours is not a pre-specification. You are saying: “I give you a list of numbers that correspond to certain papers. They are specified and complex because they correspond to the papers to which they correspond.” That obviously makes no sense.

I will remind here that a true functional specification, while being certainly a post-specification (we recognize the function in the object and define it), is an objective kind of specification, and is therefore valid as a post-specification. When we define the function of an enzyme, we are objectively describing and measuring what the protein can do, but we are not, in any way, defining the protein as “a protein that has the following sequence of AAs”. IOWs, our definition is objective, and completely independent of the sequence of the string, and of the events that should generate that sequence.

So, your definition, “any sequence of IDs that corresponds to the papers to which it corresponds”, is the same as saying “any protein which has the sequence that it has”. They are valid specifications, but they are certainly not complex. They do not objectively define a small target space.

Any protein has the sequence that it has (complexity zero). And practically all the numbers under a certain value are valid PMIDs (extremely low complexity).

I eagerly await his objective method of detecting design that does not involve first calculating dFSCI. Isn’t that the part where the argument goes circular?

And you will wait forever. I have no “objective method of detecting design that does not involve first calculating dFSCI”. Where did you get that strange idea?

I just asked for strings whose origin is known. To you. IOWs, I suppose that, if you yourself wrote a string of language or a piece of software, you certainly know that its origin is from design. I accept that.

In the same way, if you generated a string by tossing a coin, you know that it was generated in a random system, without any design intervention. The same is true, as I have explained, for a string generated in a RSG.

IOWs, you who propose the string must know its origin, I have nothing to “detect”. I only assess dFSCI, and in some cases infer design.

Now, please tell us the truth: was it written by you (or somebody else), or was it generated in a random system (or by natural laws)? IOWs, was it designed or not? Is it a true positive, or a false positive?

These questions could seem trivial, but they are not. I am just showing how the testing works.

c) Some true negatives, IMO (Mark’s examples, if they were randomly generated), or some false negatives if Mark purposefully wrote the sequences (by the way, Mark, the strings were lacking a separator, which makes them useless).

Pre-specification is a very special case of specification. Dembski has dealt with it explicitly. If you pre-specify an output, and then the output comes, then you have a strange event. If you specify the output after you look at it, simply describing the output, not because it has an objective function, then you are only joking.

Let’s make it clearer. Let’s take the classical example of an arrow that hits a wall. If it hits a target that was pre-existing on the wall, that is a sign of design. If you draw the target after the arrow was shot, what does that mean? Nothing.

You are doing the same thing. You look at an arrow in the wall, and then say: “Well, I define a function for this arrow as being exactly at the point that is such and such centimeters from the floor and from the left edge. So, the position of the arrow is functional.” That is nonsense, and has nothing to do with a functional specification. The correct way to describe your specification is:

“An arrow on the wall that is exactly where it is”. The complexity of such a definition is extremely low: only arrows that are not in the wall will not comply.

But let’s say that the arrow is in the center of a target drawn on the wall, and that you know very well that the target was not drawn there because the arrow was already there. You see the arrow after it reached the target (post-specification), but the target was there independently. And it is the only target on the wall.

Or still, you may have 10 targets on the wall, a very big wall, and in 5 of them you see an arrow. That is functional specification: you define a small subset among all possible arrow positions on the wall.

I am very amazed that you are confused about these very simple aspects of design theory. As I said, Dembski has analyzed them very well in his first works.

So, to sum up:

Correct functional specifications:

a) I give you a list of papers. After that, a string is generated, and it corresponds to the list I had given before (pre-specification).

b) I give a list of papers that can be objectively defined: for instance, all the papers dealing with cystic fibrosis. That defines a very objective subset of all papers, and of all valid PMIDs. If you give me a string whose numbers, correctly separated, all correspond to that subset of papers, I will have to evaluate dFSCI for it. And, in this case, it will be especially easy, because the functional subset can be easily measured by a search (but we should also consider the probability of having numbers correctly spaced so that all of them are below the highest of the PMIDs).

I am afraid, Mark, that you are only creating unnecessary confusion. dFSCI measures the improbability of a string arising by chance, by evaluating the complexity tied to the functional definition. If you observe the string, and then look for some way to give it a function (for example, building an appropriate list of papers that corresponds to the random string), then it is not the complexity of the string that is functionally linked to the list: it is rather the complexity of the designed list (you selected the appropriate papers among all the possible ones, just with the purpose of having them correspond to the random string) that corresponds to the random string. IOWs, you designed a list of papers that has the function of corresponding to an already existing random string.

As you can see, design theory, if correctly understood and applied, can explain many different situations.

Independent of the string – the function is to list a set of papers in order. This list of papers is independent of the string.

Absolutely not. You created the list of papers from the string: you took the numbers, looked for the corresponding papers, and created the list. IOWs the list of papers was designed from the string. How can it be independent?

The other possibility is that you first created the list of papers, and then designed the string to fit it. That would be a correct pre-specification. And I could possibly infer design, if you guarantee that the list was specified before the string was generated, and if the dFSI is high enough.

I didn’t design the function. I identified it from all the many things that could be done with that string. In fact the exact process was I took parts of the string and entered them into Google to see what they might be used for. I started with the whole string and progressively broke it down into smaller parts. It only took about 20 minutes. The function of representing that list of papers was a property of that string even if I had never engaged in the search. I expect there are very many other such functions should I stumble across them.

So, the answer is very simple: let’s say that, given a numerical string, it is very easy to find some use for those numbers, whatever they are, just by using Google.

So, the only function that I see defined here is:

“A string of numbers such that we can find any use or function for it, after we see it, by using Google.”

OK, this is the only function that I can see in your string. You did not give me a specific list of papers. You did not explain how such a list was found.

So I look at your string, and I can find no complex function for it. As you say, almost any random string can be used for something, a posteriori. So, that is a function, but it is in no way complex.

It would be complex to generate a string that points to a predefined list of papers. It would be complex to generate a string that points only to papers about cystic fibrosis. It would be complex to generate a string that is the exact key to a specific safe.

It is not complex to generate a string that points to any generic list of papers. Almost all strings of a certain type, or interpreted in a certain way, will do.

It is not complex to generate a key that can be used as a key for a generic safe, by setting it as a key: any string of the correct length will do.

But it is always complex to generate a key that points only to the papers in PubMed which deal with cystic fibrosis.

That definition, and only that one, points to an objective function, one that objectively defines a subset of all PubMed papers.

Let’s say that Pubmed has about 20 million IDs. I searched “cystic fibrosis”, and it gave me 36969 results. So, let’s say that the probability for a number under 20 x 10^6 of pointing to a paper about cystic fibrosis is about 0.00184845. Let’s say that we have a list of five numbers under 20 x 10^6, all of them pointing to a paper about cystic fibrosis. Using the binomial distribution, the probability of having 5 successes in 5 events with such a p is of the order of 45 bits, if I am not wrong.

So, we are not at any high threshold here, not even the 150 bits threshold. But it is quite an unlikely result just the same.

For a simple system, like a RSG with limited resources, 45 bits could be enough to affirm dFSCI and infer design. That’s where we need to define better the system and the time span, as I have always argued.
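This binomial arithmetic can be checked directly (using, as above, roughly 20 million valid PMIDs and 36,969 cystic fibrosis results):

```ruby
# Probability that a random valid PMID points to a cystic fibrosis paper,
# and the improbability, in bits, of five independent hits in five tries.
p_hit  = 36_969.0 / 20_000_000  # ≈ 0.00184845
p_five = p_hit**5               # binomial: 5 successes in 5 trials
bits   = -Math.log2(p_five)     # ≈ 45 bits, matching the estimate above
```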

I’m disappointed that no theists have shown up here to defend their God.

As if-

1- As If God needs defending
2- As if humans could
3- As if anyone cares what keiths sez

But anyway, being brought up in a Christian family and having attended Catholic schools, it is clear to anyone with an IQ over 50 that pain and suffering are the result of the fall of man. We brought it upon ourselves, with a little help from below. Now we have to deal with it.

Individual salvation can be had, as can individual damnation- equal opportunity. The choice is yours.

So that is how Christians explain and accept the world, keiths- unless they have changed in the past thirty + years.

haha, keiths is at it again. Another OP that has nothing to do with demonstrating that ID is not compatible with the evidence for common descent. Yes, keiths, we’re still waiting for Part II.

Nice to know though that he doesn’t think the problem of evil has anything to do with ID.

keiths:

This is The Skeptical Zone, so it’s only fitting that we turn our attention to topics other than ID from time to time.

His OP has the title A specific instance of the problem of evil.

But where does he say what that specific instance of the problem of evil is? Maybe the specific instance of the problem of evil is him not being able to ascertain whether some guy means a rape or a pregnancy when he says it was intended by God. Yeah, that’s probably it.

Or maybe the specific instance of the problem of evil he is referring to is people being able to make choices. I guess that he thinks that’s somehow not compatible with the idea of God.

Maybe he should have used a different title. Intelligent Design is not compatible with the evidence for evil. After all, evil is explained trillions and trillions of times better on the theory of unguided evolution. As is rape.

As is your right, even if you do not hold yourself to the same criteria. But I do have the fossil, genetic, morphologic, geographic and breeding records to back me up.

Back up what exactly? That the micro mutations we see today are sufficient to account for the known varieties of cell types, tissue types, organs and body plans? How so?

I do not know your particular flavour of ID but does it exhibit the same level of detail and explanation you are asking of the modern evolutionary synthesis?

I assert no ID at all presently for this discussion.

CS: Show us how the known processes can generate novel cell types, tissue types, organs and body plans. Prove your concept to the scale you claim.

Jerad: I do not claim to be able to elucidate the exact molecular pathway that occurred to produce any modern life form.

You can throw out “exact” and you’d still be right. At any rate, thank you for the admission. Now, what pathways can you demonstrate?

But I’ve got a lot of consistent and coherent evidence which points in that direction.

Such as?

CS: P.S. please demonstrate that even the known types of genomic variation existed 500 million years ago.

Jerad: Without assuming uniformity you can’t really ‘do’ historical science. If you throw away that assumption then everything is unknown and nothing can be established.

Not true. Without uniformity in *physics* you cannot really do historical science. However, when talking about putative controversial processes, such as what you propose, that are not basic to physics, you must demonstrate your uniformity.

At any rate, before uniformity is even a viable lynchpin of your thesis, you have to establish that the known sources of genomic mutation that exist *today* are sufficient for such a development of the known variety of cell types, tissue types, organs and body plans. Can you do that?

Setting aside the issue of whether free will exists, this argument has always seemed bogus to me. Suppose that tomorrow I decide to blow up the entire earth. Does the mere fact that I’m incapable of carrying out my plan mean that my free will has been denied? I don’t think so. If it did, it would mean that God is constantly denying our free will, because there are always things that we want to do but can’t. If that’s permissible, then why isn’t it okay for God to prevent us from raping?

So keiths puts forth a free will response to the problem of evil, then immediately says we should set that argument aside. Yes, this is the level of intelligence we are dealing with.

So say tomorrow keiths decides to blow up the whole earth. He’s obviously deluded. He lacks the capacity to blow up the whole earth.

Then he asks if his free will has been denied. This while we’re supposed to be setting aside the question of whether free will exists. Yes, it’s true.

So then he asks, if God isn’t denying his [keiths’s] free will by preventing him from carrying out some act which he is incapable of carrying out, then why isn’t it ok for God to prevent him from raping some unspecified something.

I have to ask, who did you [keiths] decide to rape, and can you please define rape for us?

Mark Frank:

I too would love to see a response from a theist as your argument seems pretty watertight to me.

lol. REALLY? WATERTIGHT?

He can barely form an intelligible sentence, much less a watertight argument.

All I will say about my string is that it is a subset of a highly useful and lucrative set of strings. What I want to see is gpuccio’s methodology for determining whether the string can be produced by necessity mechanisms.

It cannot. If it could, it would not be lucrative.

And you need to talk to onlooker, who doesn’t understand the meaning of arbitrary.

I might say that one of the folks often cited by ID advocates –Hubert Yockey–is on record saying evolution can “compute” any string.

Your specific decision to “blow up the entire earth,” otoh, might not qualify.

If we’re not talking specifics, please, let’s get that out of the way right now. I don’t want to get way down the road just to find out that you didn’t really mean what you said in the title of your OP, like last time.

Why is gpuccio putting these restrictions on his challenge? Any protocol for measuring stuff has its operational limits; to cite the first example that comes to mind, C14-based radiometric dating doesn’t work well on specimens that are 50,000+ years old.

All I will say about my string is that it is a subset of a highly useful and lucrative set of strings. What I want to see is gpuccio’s methodology for determining whether the string can be produced by necessity mechanisms.

gpuccio hasn’t claimed to be in possession of a methodology to identify strings generated by a necessity mechanisms.

Sorry to disappoint.

The article I quoted indicates that 75 percent of bases are noise — any value is equivalent to any other value.

That’s not noise.

So the search space to be considered is not the number of possible strings of length x; it is significantly smaller than that.

But evolution can compute any string, right? Including strings of length x + 1 and strings of length x + x.

Amazingly, even Mung agrees that the search space might be less than 2**X in an analogy he gave where one bit in a string is directly related to another bit in that string, thus showing that “information” in a string is not completely “arbitrary”

What I said has nothing to do with the size of some search space.

There is nothing “amazing” about it. There was no “analogy” involved.

In my example, there was one bit that was determined by the state of two bits and a rule. Your assertion that “one bit in a string is directly related to another bit in that string” doesn’t accurately capture what I said. I could even argue that it’s false.

…thus showing that “information” in a string is not completely “arbitrary”

I don’t even know what that means. Do you?

Assume there exists a rule which states that if the first bit is 0 and the second bit is 1, then the third bit shall be 0.

010

Assume there exists a rule which states that if the first bit is 1 and the second bit is 0, then the third bit shall be 0.

100

Can we, by the fact that the third bit is 0, determine the value of the first bit? The answer should be obvious. NO.
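The point above can be sketched in a few lines of Python. This is an illustrative toy only; the rule table below is a hypothetical completion of the two rules stated in the text (the cases not covered by those rules are arbitrarily assigned 1):

```python
# Two rules from the text: (0,1) -> third bit 0, and (1,0) -> third bit 0.
# The remaining cases are assigned 1 arbitrarily, just to make the map total.

def third_bit(first: int, second: int) -> int:
    """Return the third bit dictated by the (hypothetical) rule table."""
    return 0 if (first, second) in {(0, 1), (1, 0)} else 1

# Both (0,1) and (1,0) force the third bit to 0...
assert third_bit(0, 1) == 0
assert third_bit(1, 0) == 0

# ...so observing third bit == 0 leaves both values of the first bit possible.
candidates = {first for first in (0, 1) for second in (0, 1)
              if third_bit(first, second) == 0}
print(candidates)  # {0, 1}: the first bit cannot be recovered
```

The mapping from (first, second) to the third bit is many-to-one, so it is not invertible: knowing only the output cannot pin down the inputs.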

Defining the terms of this challenge is not straightforward. If I decide on a function and then find a short algorithm to create a string that performs it – have I designed it or not? It is after all a deterministic process.

First you must decide on a function.

Then you must find a string that performs the function.

Then you must find an algorithm to create a string that performs the function.

I thought this was sufficient to decide that dFSCI is present. Are you now saying that you also need to know something about how the function was arrived at?

It is very simple. Your objectively defined function is as follows:

“A digital string that points to a list of papers that I have designed so that it is pointed to by the pre-existing string.”

That is the correct definition of the function you define.

This function is correct. And its complexity is low. Therefore, there is no evident dFSCI in the string. This is my answer. I suppose it is a true negative, if the string was generated in a random system.

You cannot just say:

“A digital string that points to the following list of papers”.

That is not a good definition of the function. Why?

Because it is ambiguous. It can point to two completely different cases:

a) “A digital string that points to the following list of papers, that was defined by me before the string came into existence”.

This is a valid case of pre-specification, and has some complexity, which has to be evaluated with more precision (probably, it is not enough to affirm dFSCI for any generic system and time span).

Or:

b) “A digital string that points to a list of papers that I have designed so that it is pointed to by the pre-existing string.”

As already said, this definition is valid, but has no special complexity. A very big subset of all possible strings would be defined by that definition, as you yourself have admitted. Therefore, the complexity is very low, and no dFSCI can be affirmed.

My problem with gpuccio’s insistence on “someone” knowing the source or history of the string is that this is precisely what we are trying to find out. The number of bits is not interesting in and of itself.

If GP’s method doesn’t tell us anything about the history of the string, what is its value?

I thought I had been clear enough, but let’s say it another time.

The computation of the specificity (and sensitivity) of any diagnostic method is done as follows:

a) We need a “gold standard” according to which we affirm if the condition is present or absent. The gold standard is considered the “truth”. In our case, the gold standard is the known, observed history of the string: whether it was generated by a design process, or not.

b) We have a “diagnostic test”: something that is trying to detect the condition. It’s exactly the efficiency of the diagnostic test that we want to evaluate, in terms of sensitivity and specificity. That’s why we apply the test, in this context, only to “patients” whose condition is independently known (by the gold standard).

c) In this way, we can classify all the results of our test in our test patients as: true positives, false positives, false negatives, true negatives. So, we build a two by two table, from which the various parameters, including specificity, are easily computed.
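The specificity and sensitivity computation described in steps a) to c) can be sketched as follows. The counts in the example are made-up illustrative numbers, not data from the discussion:

```python
# Standard diagnostic-test parameters computed from a 2x2 table
# (gold standard vs. test result). The counts below are hypothetical.

def specificity(true_neg: int, false_pos: int) -> float:
    """TN / (TN + FP): fraction of truly negative cases the test calls negative."""
    return true_neg / (true_neg + false_pos)

def sensitivity(true_pos: int, false_neg: int) -> float:
    """TP / (TP + FN): fraction of truly positive cases the test calls positive."""
    return true_pos / (true_pos + false_neg)

# Hypothetical validation set: 40 known-designed strings all flagged (TP=40, FN=0),
# 60 known-non-designed strings, none flagged (TN=60, FP=0).
print(specificity(true_neg=60, false_pos=0))  # 1.0, i.e. "100% specificity"
print(sensitivity(true_pos=40, false_neg=0))  # 1.0
```

A single false positive on a known-non-designed string would drop the specificity below 1.0, which is exactly the falsification test being discussed.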

It should be obvious, therefore (but I will probably have to repeat it again many times), that here we are just “testing” the procedure against strings whose origin (design or not) is independently known. In particular, we are verifying (you are verifying) my statement that the specificity of the procedure is 100% when tested in this way.

Applying the test to a situation where the condition is not independently known, instead, is the actual use of the test. A test would be useless if we never applied it to detect the condition.

So, after we are reasonably sure that the test is good (in this case, that it has a measured specificity of 100%), then we can confidently apply it to cases where the origin (design or not design) is not known. IOWs, we use the test now to “detect” the condition. That is the design inference.

So, to sum up: One thing is the testing of the procedure; another thing is the application of a good procedure to new cases, whose “condition” is not known.

The first thing is the measurement of dFSCI’s specificity for the design condition. The second thing is the application of dFSCI to make a design inference in unknown cases.

Defining the terms of this challenge is not straightforward. If I decide on a function and then find a short algorithm to create a string that performs it – have I designed it or not? It is after all a deterministic process.

The origin of the string is obviously a design process: you designed the algorithm to produce the string with the function.

The assessment of dFSCI would obviously depend only on the string itself.

So it appears that all functions are ambiguous unless you know how they are derived!

If you look at my definition of dFSCI, you will see that the function is “recognized and explicitly defined” by the observer (usually, the same person who measures its dFSI). I must bring your attention to the word “explicitly”. In this case, you defined the function. I, having to measure the dFSI of the string, don’t think that it is explicit in the way you defined it. So I ask you for further information about your definition.

That is perfectly correct. You defined the function, so I have all the right to ask for clarifications about the definition.

The clarification is simple: why did you choose that particular list of papers, and when?

Given that due clarification, the evaluation of dFSI is simple, as I have shown in my post #203.

In one case, (pre-specification), some dFSI could be affirmed, but it must be measured with greater detail.

In the other case (a list designed to match a string) there is no reason to affirm complexity linked to the function: any random string can be linked, a posteriori, with ad hoc functions designed exactly for the string as it is.

This is the correct procedure. There is absolutely no problem in it. The function is defined by the observer, and it must be completely explicit. I am not saying that your function in the second case is not valid. I am just saying that it is not complex, and that is the simple truth.

The rules for defining an acceptable function for dFSCI appear to be more complicated than they first appeared!

No. They are not. And again, any explicitly defined function is fine. Even yours. But it is not complex.

Do you really believe that Adam and Eve sinned, therefore earthquakes, tsunamis, and hurricanes? Can you explain the mechanism by which sin caused the tectonic plates to start moving?

Nice hissy-fit, seeing that your original “argument” was shot down by reality. Do you really think that your ignorance means something, keiths? Really?

Earthquakes haven’t killed me. Tsunamis haven’t killed me and hurricanes haven’t killed me. So what’s the beef? More people survive those than die from them. And with earthquakes we can actually study the earth. Tsunamis we can prevent, and with hurricanes we can move.

What purpose does all the suffering serve, and why doesn’t God prevent it?

Already covered that.

And the most interesting question of all: How can anyone take this idea — that the Fall is the cause of all of the suffering and evil in the world — seriously?

Because it is in scripture.

Look, moron, YOU asked an asinine question about evil- a question that has been answered many, many, many times. And just because you, a known liar and moron, do not like the answer doesn’t mean anything to me.

Geez keiths- you are so ignorant that you still think that unguided evolution predicts an objective nested hierarchy, even though there isn’t such a thing for prokaryotes and gradual evolution predicts a smooth blending of characteristics which would ruin any objective nested hierarchy.

dFSCI is, and will be, an excellent indicator of design until someone steps up and demonstrates that dFSCI (or what IDists say is dFSCI) can arise via some other mechanism than intentional, intelligent design.

Mark, please! In the way I have formulated it, with a full explanation of the role of the paper list, it is explicit. And, in the second case, it is not complex.

Your formulation is very obviously not explicit. It introduces an arbitrary list of papers, and it does not explain how that list was generated.

So, it is not possible to say how much functional information is in your string, unless you clarify what the true role of your arbitrary list is. Once you have done that, the reasoning is simple.

This all turns on the word “explicit”. Obviously you could go on asking for more and more detail about a function indefinitely. If the dFSCI procedure is to be clear then it needs to be clear about when a function has been explicitly defined. It can’t just be the tester’s opinion that they would like more information.

You are really losing yourself in the desperate attempt to show that you can create a false positive for dFSCI, or simply to create confusion about its concepts. You have not. You can’t.

The function is defined in order to compute the functional complexity linked to that definition. You simply tried to cheat (not in a bad sense, I am not saying that you were intentionally deceiving anyone: you are only deceiving yourself!), and gave a definition that is ambiguous and can refer to two completely different scenarios. Your trick was simply to introduce into the definition an arbitrary list, without explaining its logical connection with your definition.

As anyone would have done, I simply asked: “Hey, just a moment! What is this list? Why did you take these particular papers? Please, explain!”

It’s very simple. We are not stupid. You must explain what you mean.

As I have shown, if, as you have admitted, you just picked up any possible paper that could match the list according to any possible connection, well, that’s fine! It’s a correct, explicit functional definition. But the definition is as follows:

“Any string for which an observer can find any possible functional definition, using google or any other means, after he knows exactly the string’s sequence”.

That is a valid functional definition, but unfortunately, as you yourself admitted, it can be applied to almost any possible string. Therefore, the defined function is not complex.

IOWs, your initial definition would be: “Any string that can point exactly to the following papers from Pubmed: …. Hehm, wait a moment, please, let me see the string in advance, and I will tell you what papers I mean!”

I have been reading these TSZ threads for many hours. Amazing how people can prevaricate… The definition of functionally specified information is clear and simple, and yet they decide to question it even though it was used in a paper in Nature as early as 2003, by Szostak if I am not mistaken.

It is also clear that messages detected in biology necessarily require a protocol to encode/decode them, as is the case elsewhere. And there is such a protocol! Messages are artefacts because they actually carry detectable semantic cargo (as is clear from communications between animals, animals and people, between people, and people and machines). Did anyone demonstrate how a message can arise prior to its interpretation protocol being uploaded into the information processing system? No.

Also, what seems to have difficulties in getting across to people’s minds is that any algorithm needs tuning. And what tuning does is implicitly bias search towards exploring areas where the algorithm designers think/hope/believe/expect to find most solutions. Dead simple.

Also, any algorithm is a formalism. Are there any examples of spontaneous/law-like generation of a formalism? No.

Suppose I had supplied the strings and the functions and then no longer been available for further questions – maybe I died of frustration in the interim. So you don’t know if the papers were:

a) prespecified

b) all had something in common you were not aware of e.g. they were on my desk in that order

c) were post specified as I explained

Has the string got dFSCI or not?

Mark, the answer is simple. I will never affirm dFSCI for the string. I (like anybody else) cannot assess dFSCI if the function is not clearly specified and the dFSI linked to the function cannot therefore be calculated.

So, the answer is no. For me, in those circumstances, I cannot affirm that the string exhibits dFSCI.

dFSCI is, and will be, an excellent indicator of design until someone steps up and demonstrates that dFSCI (or what IDists say is dFSCI) can arise via some other mechanism than intentional, intelligent design.

Alan Fox:

If gpuccio agrees with Joe then I guess the matter is settled! We know something is designed because everything is designed! Joe and Sherlock Holmes say so!

1- gpuccio has already agreed with me

2- That does NOT mean everything is designed. Only a moron would make that leap. And here is Alan Fox…

I think I am going into “watchful waiting mode” in the doubtful expectation that, one day, something useful will eventually emerge from the ID camp.

Well Alan the entire world has been waiting for something useful to come from your camp and it ain’t happenin’. So perhaps you should focus on that…

You said that if the papers had something in common then you would not have felt the need to investigate how I had chosen the function. What is the rule you are illustrating here? Something on the lines of “the function must be expressible as a general rule”?

But it is rather obvious. If I define as function:

“A string whose components all point to papers about cystic fibrosis in the Pubmed database”

My definition objectively defines a subset of strings. I need not know any specific string. I can give the definition after I have seen a specific string, because I have noticed that its parts all pointed to that subject. Or I can give it just out of the blue. Nothing changes. If any string points to that kind of papers, it is objectively part of an objectively defined subset of strings.

But that is not the case with your procedure. Your procedure means:

“Any string that points to a set of papers that I will choose after I see the string, using the exact sequence of the string to look at the corresponding papers”.

As I have explained, this function is correct, but it is not complex. Almost any string of a certain length can be interpreted as pointing to a contingent list of papers. So, the subset defined by your definition is extremely big, and the complexity extremely low. I don’t know how I can say it more clearly.

If you define the subset on the basis of a particular string, that is fine too. Then the subset will have the maximum complexity available for that search space (1 : search space). OK, that will be a good definition to test any new string that emerges in a system. But not the original string. The original string is the template on which the definition was created, and it obviously cannot be used to prove that the function can arise in a random system. What happened is the exact contrary: the function itself was defined for a string that had already emerged in a random system.

How can you still insist on this false concept?

Let’s try another way. I create by a random coin tossing the following string:

HHTHTHTTTHHTHTTTTHHTHHHTHTT

and I define for it the function:

“Any string that has the following sequence:

HHTHTHTTTHHTHTTTTHHTHHHTHTT”

As the complexity of the string is 27 bits, I have a 1 : 2^27 probability of getting that string in a random attempt by tossing a fair coin. That is about 1 : 10^8.

If I try to get a new string like that by tossing a coin, I will need a lot of time to get it! That is the true complexity of the string, obviously. If the “function” is to have exactly that sequence, there is no doubt that the probability of getting it by random search is that low.

And yet, according to your “reasoning”, we already got the result, in a random system, at our first attempt! As you can see, your reasoning is simply wrong.
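The arithmetic in the coin-toss example above can be checked in a couple of lines:

```python
# Probability of reproducing one specific 27-flip sequence with a fair coin.

n_flips = 27
n_sequences = 2 ** n_flips        # number of equally likely sequences
p_exact_match = 0.5 ** n_flips    # probability of hitting one specific sequence

print(n_sequences)    # 134217728
print(p_exact_match)  # about 7.45e-09, i.e. roughly 1 in 10^8
```

So a 27-bit pre-specified sequence is a roughly one-in-10^8 target, which matches the "about 1 : 10^8" figure in the text.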

The fact that the list of papers is not the sequence of the string does not change anything. It is just a derived way to describe the exact sequence.

Consider instead my definition of having a string that points to papers about cystic fibrosis. The subset here is defined by an independent property. To give that definition, I need not know any specific string, any specific number. I can completely ignore what IDs correspond to papers about that argument. I can ignore how many papers correspond to that argument. And still I can give the definition in a completely explicit way, and a perfectly correct way to verify if any string has the defined property.

Let’s go to a functional protein: an enzyme that accelerates a biochemical reaction by, say, 1000 times.

The function is there. I can observe it in any lab, without knowing anything of the specific protein that is performing it: not its sequence, not its structure. I can objectively measure it. I can even define an enzymatic function for which no real protein with that function can be shown. Protein databases all list the same function for the same protein, without ambiguities.

So, can you see how the concept of “defining a function” works?

I had no problems at all in recognizing the function for Mung’s source code, with his cooperation. You cannot deny that function. Anybody can verify it. Mung could have expressed that function before writing the code, without knowing the code at all. Other codes could have the same function.

And I easily affirmed dFSCI for that string. And it was a true positive.

You guys could have offered easily thousands of designed strings that would have been easily identified by me as exhibiting dFSCI. All true positives.

Strangely, nobody among you has done that.

You say that the function has to be changed to “any set of papers”. But why stop there? Why not any function discoverable through Google? After all that is what I did. Again what is the rule?

There is no problem at all. The function:

“Any possible string that can be read as a list of valid PMIDs”

is a valid functional definition. It has very low complexity. For example, assume that Pubmed at present has 20 x 10^6 IDs, that we deal with a string of 40 decimal digits, and that we set a rule that any such string must be read as a set of 5 numbers, each of 8 decimal digits (which frees us from the necessity of a separator). Then any string will give us 5 numbers under 100 x 10^6, and each number will have a 1 : 5 probability (p = 0.2) of being a valid PMID. Therefore, the probability of a random string yielding 5 valid IDs is 0.2^5 = 0.00032. That is certainly not low enough to affirm dFSCI, at any reasonable threshold.
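The PMID estimate above is a simple independence calculation. This sketch uses the figures from the text (20 million valid IDs, 5 parsed numbers, p = 0.2 per number), under the assumption that each parsed number is uniform over a range five times larger than the set of valid IDs:

```python
# Probability that all 5 numbers parsed from a random digit string are valid PMIDs.
# Figures from the text; the uniformity assumption is the text's, too.

valid_ids = 20_000_000
number_range = 100_000_000          # each parsed number lies in [0, 10^8)
p_valid = valid_ids / number_range  # 0.2 per number

numbers_per_string = 5
p_all_valid = p_valid ** numbers_per_string
print(p_all_valid)  # ~0.00032: far too probable to qualify as dFSCI
```

Note this is the all-successes term of a Binomial(5, 0.2) distribution, which is why the text can call p^5 a binomial computation.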

If, on the other hand, we define our function as follows:

“Any string for which a function can be defined using a google search”

this function, too, is perfectly valid. And its complexity is probably zero. It is very likely that we can find some functional definition for any possible string, just using a google search.

You may think that all this only arises when the function is chosen because it is something the string can perform and so it is a bit a sideshow.

No. I just think that “all this” is not a problem, and can be easily solved if we define correctly each function. The only requisite is: the function must be objectively defined, so that anyone can agree on what it means, and anyone can objectively measure (at least in principle: obviously, there will be practical difficulties in many cases) the complexity linked to the function itself (IOWs, the target space / search space ratio).

However, the process I used is close to how evolution works. Mutation creates a gene or protein and then if there is any function it can perform that adds to the organism’s fitness it is preserved. Using your approach the choice of function for the calculation of dFSCI for a protein should be “can make any contribution to the organism’s fitness” or something like that.

Neo darwinian evolution assumes that RV creates new arrangements that increase the reproductive fitness of a replicator in a certain environment, enough so that the new arrangement is expanded in the original population.

That is certainly possible, and can be evaluated. Microevolutionary events are well known that illustrate the principle.

A computation of dFSCI for any explicit transition is rather easy (the only controversial point being the calculation of the target space, as we all know).

Let’s say that we can show a new arrangement that differs from the previous one by only one aminoacid (in the final protein). That would be a single mutation. If we reason at the protein level, that mutation, if highly specific (let’s assume for the moment that only one aminoacid will give the desired result), will have a probability of 1 : 20 for a mutation at that specific site. If we have enough information about the system and the time span (for example, the mean mutation rate per site in that replicator, the population, and the available time), then we can easily compute the probability of the event. If the event can be shown to give a true reproductive advantage to that replicator (which can easily be shown in the lab for bacteria), then it is a selectable event, and if the probability of getting the new arrangement by RV is high enough, the whole sequence (the emergence of the new arrangement, and its expansion by NS) can easily be explained by the neodarwinian algorithm.
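The kind of estimate just described can be sketched as a back-of-the-envelope expectation. Every number below is a made-up placeholder, not a measured biological rate; only the 1/20 factor comes from the text:

```python
# Expected number of times a specific single-aminoacid variant arises,
# given a mutation rate, a population size, and a time span.
# All rates and sizes here are hypothetical illustration values.

mutation_rate_per_site = 1e-9   # mutations per site per replication (hypothetical)
population = 1e9                # number of replicators (hypothetical)
generations = 1e4               # available time span in generations (hypothetical)
p_right_aminoacid = 1 / 20      # from the text: only one substitution works

expected_mutations_at_site = mutation_rate_per_site * population * generations
expected_successes = expected_mutations_at_site * p_right_aminoacid
print(expected_successes)  # ~500 under these made-up numbers
```

With these (invented) inputs the target variant is expected to arise many times, which is why a one-step selectable transition poses no probabilistic problem; the argument in the text is that multi-step transitions without selectable intermediates do.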

The problem arises when the new arrangement is very distant, at the sequence level, from everything that already exists in the replicator (as in the case of the emergence of a new protein domain), and no naturally selectable intermediate is known. In that case, the dFSI of the new state is extremely high, and NS cannot be considered as a facilitating factor, because no naturally selectable intermediate is known (and, indeed, it is perfectly possible, I would say extremely likely, that no naturally selectable intermediate exists!).

In those conditions, the neodarwinian algorithm is irrelevant: it explains nothing.

But we can agree that, in evaluating the neo darwinian algorithm, there is no ambiguity about the functional specification that each supposed state that emerges by RV must have: it must be naturally selectable.

“dFSCI is, and will be, an excellent indicator of design until someone steps up and demonstrates that dFSCI (or what IDists say is dFSCI) can arise via some other mechanism than intentional, intelligent design.”

That seems a perfectly reasonable statement. I certainly agree with you. That is exactly the same as saying, as I have said:

“dFSCI has 100% empirical specificity when used to infer a design origin. However, that simple observation can easily be falsified by showing that it can give false positives. That would be a serious falsification of the empirical utility of dFSCI as an indicator of design”.

So, we agree.

It is very sad that Alan Fox comments on that with such a stupid statement:

“If gpuccio agrees with Joe then I guess the matter is settled! We know something is designed because everything is designed!”

Yes. Szostak et al have defined functional information in a biological context. They make it very clear that this number is relative to a particular function, and they do not make any attempt to correlate it with design.

Well they do NOT correlate it with the blind watchmaker.

However, they do not go near the question which I am raising which is how you choose the function.

Function is something we OBSERVE. And designers choose the function depending on the needs of the design.

BTW I applaud your resilience, even though I sometimes have problems understanding why you are doing what you are.

There are many reasons.

a) First and foremost, I respect the ideas of our adversaries (when they have ideas 😉 ), and I feel that it is my duty to have them express their ideas, and to carefully and openmindedly consider them.

b) Second, I believe that a public discussion between us and them is the best way to illustrate our points to all who read.

c) And finally, it is really the best way for me to understand better what I believe, and to refine my personal thinking. Thanks to all, friends and enemies alike, for giving me that precious opportunity.

I have neither supported nor condemned UD’s policies of moderation. I am just not interested. Personally, I am not in favour of banning anyone, but I understand that in a blog it may sometimes be necessary.

Anyway, as a rule, I respect the decisions of those who have the responsibilities of moderating a blog where I am only a guest.

My decision to post here, and not at TSZ, has many reasons, none of which has to do with supporting or not supporting the moderation policy:

a) It is easier for me to post here, because it is the place where I like to post. UD is my place, TSZ is your place.

b) While I can appreciate the general atmosphere of TSZ as one that is not extreme, still I am not at ease in that environment, which is too different from my personal attitude in many respects. The very word “skeptical” is for me rather a word of offense.

c) I really want people at UD, who after all are my own people, to have the opportunity of following the things I debate without having to go to TSZ. That’s why I always try to comment after diligently quoting the statement I am commenting on.

d) I believe that, after all, this “parallel posting” is working. It also has some peculiar positive aspects: for instance, it is easier for me to ignore what has to be ignored, and to follow what is interesting. And yes, many of the things that come out at TSZ are better ignored (well, I am sure you think the same of UD, so you should be happy about the circumstance too!).

So, I don’t understand this emotional response from some of you because I, and others, are posting “at home” rather than at your site. It really seems not so important. We are not “shouting at each other from across a street”. We are on the Internet, I believe.

Well, I can admit that someone is shouting, occasionally, but the distance is not certainly physical, or informational…

I’m not sure they understand what an objective definition would look like.

I had no problems at all in recognizing the function for Mung’s source code, with his cooperation. You cannot deny that function. Anybody can verify it. Mung could have expressed that function before writing the code, without knowing the code at all. Other codes could have the same function.

Precisely. In fact, there is a software practice which consists of writing functional tests before writing the code.

I could have different people write a test that tests for the function performed by my code. They could write those tests without ever seeing my code.

No definition of function can be used to compute dFSI if it does not point unequivocally to a specific subset of the search space.

This is the first rule you violated: your definition could point either to a subset of one string (if it was a pre-specification), or to a subset of 20*10^6 strings (if it was a function derived from the string itself).

Rule 1:

You ask:

When confronted with a proposed function how do I decide if I need to know how the proposer came up with the function?

I will use your template:

“If a proposed function does not point unequivocally to a specific subset of the search space, for instance if it introduces contingent elements (like a paper list) whose logical relationship with the emergence of the function itself in the system is not clear, then it is necessary to find out how the proposer came up with the unexplained elements in the definition”.

Rule 2:

You ask:

Having investigated how the proposer came up with the function and determined that it was by inspecting the digital string and then finding a function it could perform (I assume this is right so far – feel free to correct), how do you decide what function to replace it with?

I will use your template:

“The replacement function has these features: any function with which an observer can perform a similar procedure and obtain similar results (for instance, a function of the same form that differs only in the list of contingent elements).”

All that is really useless. Rule 0 should be enough. Any intelligent person, in the presence of an ambiguous functional definition, will naturally ask the definer the correct questions to solve the problem as I have done with you.

Anyway, you asked for those unnecessary rules, and I tried to give them.

The problem is really with the list of papers. A list of papers that have nothing in common is not a logical category. It is just a contingent list of elements, whose only logical connection is that they were derived from the existing string.

The biochemical function of a protein has nothing contingent about it. Indeed, we can observe the function working in reality, and we did nothing for it, except observing and defining: the function was already working before our observing and our defining. The biochemical reaction had been accelerated for millions of years before human observers even existed.

Now, I am not saying that to create new rules: I just want to show you why your trick is so different from a true functional definition.

However, the real rule is rule 0. And the reason should be obvious. The whole purpose of measuring dFSI is to measure the probability of a string with the defined function arising in the system by RV. If we define the function to match an existing random string, and the function has no universal meaning except the correspondence between the existing string and some use we have defined for its specific sequence, then we are measuring nothing, except the generic property that a string can be used that way.

We could even do that for a protein sequence. We can, for instance, define a rule that transforms the sequence into a sequence of numbers, and then use that sequence as a key to an electronic safe.

And so? We can do that with any protein sequence. But we can have a specific enzymatic activity only with certain sequences, and not with others.

And please, take notice that all your “problems” have in no way created any difficulty for my assessment of dFSCI in all the strings that have been proposed. I have given a specific judgement on all of them, including yours. I have even given a specific judgement on your string in its original, ambiguous form: no dFSCI can be affirmed, because the defined function is ambiguous.

I don’t have a problem with your source code as an example of a function. Is there anyone who does? It is just that my definition “represents these papers” is equally testable and equally independent of the string – as I discussed above, it could be performed in many different ways without the string ever coming into it. Gpuccio’s example of “Any string that has the following sequence: HHTHTHTTTHHTHTTTTHHTHHHTHTT” is not even a function and not at all similar to my example.

I haven’t seen any complaining about my string. I wrote it specifically as an example of a string with objective functionality. You can even plug it in to a page on a web site and validate that it performs the stated function.

I haven’t really looked at your string(s) yet. Was there just one string, composed of sub-strings, with each sub-string being a reference to a document?

I suppose we can say it then has a function. And thus the question becomes: is that function specific and complex enough?

I’m trying to think of analogies that might be useful.

We probably have differing levels of complexity and functionality, so the question is probably not easy to answer. For example, needing to divide the string into sub-strings of a specific length.

But numbers can represent a very great many documents. So if we were to find some strings of numbers that made reference to other documents, then we’d say the string lacks specificity.

If we change the “reading frame” will we get different and yet still valid pubmed documents?

Given a set of PubMed IDs (PMIDs) you can use this converter to obtain the corresponding PMCIDs and/or NIHMS IDs if they exist. A PMCID will be available if the article is in PubMed Central (PMC). An NIHMS ID will be available if the manuscript has been deposited via the NIH Manuscript Submission (NIHMS) system.
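The “reading frame” question above can be made concrete with a short sketch (illustrative Python; the digit string is arbitrary, not a set of real PMIDs). The same digit string yields entirely different candidate IDs depending on the offset at which we start cutting fixed-width substrings.

```python
# Cut a digit string into fixed-width chunks at every possible offset,
# mimicking a change of "reading frame" over candidate PubMed-style IDs.
def reading_frames(digits, width):
    """For each starting offset, cut `digits` into `width`-long chunks."""
    frames = {}
    for offset in range(width):
        frames[offset] = [digits[i:i + width]
                          for i in range(offset, len(digits) - width + 1, width)]
    return frames

digits = "23113916221498312217437"  # an arbitrary example string
for offset, ids in reading_frames(digits, 8).items():
    print(offset, ids)
```

Whether any of the resulting chunks happen to be valid database IDs would, of course, have to be checked against the database itself, which is the empirical question raised above.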

I will not go on forever with this. I don’t agree with you. With my disagreement, strangely, I can infer design correctly. With your logical fallacies, strangely, you would affirm false positives everywhere, which seems to be your hidden dream.

You say:

It is utterly clear which strings can be used to represent that list of names whether the function be prespecified or not.

What a pity that the list of names cannot be known until the string is there. Try to specify a list of names first, and then try to get a string that specifies them from a random system.

In fact it is clearer than a rule such as “papers about CHD” as one could argue about which papers are about CHD.

That’s really silly. You can give a very simple rule for measuring that: you go to the Pubmed site, and just perform a search with “CHD” as keyword. You immediately get a specific number.

The function you want to substitute if I did not “prespecify” points to a different subset – but is clearly a different function. It is the function of representing any list of papers.

Yes. And it is the only function that makes sense if the list is not pre-specified. Because, you see, if the list is not pre-specified, you are matching a specific list to a specific string, and you can do that with any string.

You seem to just play (and not well) with logic. You seem to forget that we are dealing, here, with empirical science. We need a functional specification that points to a specific subset of strings in a search space. But why are we doing that? Because we want to know the probability of getting a string with that function by RV.

Now, just answer this simple question: if your function (and list) is not pre-specified, of which subset is it measuring the probability in a random system?

You seem to affirm that it is specifying the probability of getting by RV a string that exactly matches that list. OK, I can accept that. But then it is the probability of getting such a string in a new search, not certainly in the search that gave you a string to define the list from!

Obviously, that string is already available. Does that contradict the ID procedure? No, because that string has nothing special: it is a random string, that cannot be distinguished in any way from any other random string. It is one item in an extremely large subset: purely random strings, with no special function. It’s you who have built a function for that random string, with a procedure by which you could have built a function for any other random string.

Your argument is silly and useless. I am amazed that you still stick to it. It must really be cognitive desperation. I will not go on with this “argument” any more. Please, find other tricks, or let’s stop it here.

This would greatly decrease the complexity, as by your argument the real function should be “can represent some set of papers”. The fact that it was a CHD set was determined by the string. But you do not feel it necessary to ask how the CHD function was arrived at. I can see no logical difference in the situations except a matter of degree.

But I can. Try this. We can define a function this way:

a) Do a search on PubMed with the keyword “disease”. You will get 2836651 results. Always assuming 20×10^6 entries in the database, the probability of getting one item in the subset “disease” is 0.14183255. The probability of getting a list pointing to 5 such items (the fifth power of that value) is therefore 5.739573e-05 (a perfectly likely event in any decent random system).

b) Now, do a search with the keyword “elaprase”. You get 49 results. The probability of getting one item is now 0.00000245. The probability of getting 5 such items is now 8.827352e-29.

You see no logical difference. But as you can see, when the subset is objectively defined, there is a huge empirical difference according to the definition. Here, I see 79 bits of empirical difference. Can your “logic” explain that?
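The arithmetic in points a) and b) can be checked with a short sketch (illustrative Python; the database size and subset counts are the ones quoted above, and the function name is mine):

```python
import math

DB = 20_000_000  # assumed database size, 20 x 10^6 entries

def p_five_hits(subset_size, db=DB):
    """Probability that 5 independent random picks all land in the subset."""
    return (subset_size / db) ** 5

p_disease  = p_five_hits(2_836_651)  # keyword "disease": common
p_elaprase = p_five_hits(49)         # keyword "elaprase": rare

# The gap between the two definitions, expressed in bits.
bits = math.log2(p_disease / p_elaprase)
print(f"{p_disease:.6e}  {p_elaprase:.6e}  {bits:.1f} bits")
```

This reproduces the 5.739573e-05 and 8.827352e-29 figures and the roughly 79-bit difference cited in the text.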

You seem to be the only one still producing arguments, although wrong ones. I have no reason to ignore them.

a) You say:

As interesting as all this ‘string theory’ is, I feel it completely misses the bus, certainly in terms of proteins, which are nothing without their 3D structure.

But you seem to forget that RV acts at the sequence level, in the gene, not in the protein. It knows nothing of the protein sequence level. The search space for RV is only the sequence space of the genes. This is a very serious mistake in your reasoning.

b) You say:

Two protein domains can bear no sequence similarity yet have a high degree of structural congruence.

That’s absolutely correct.

c) You say:

And they can still derive from a common ancestor by stepwise substitution of every single part.

If the structure and function are maintained, that is certainly possible. Indeed, the great variety of primary structure in similar proteins with similar functions can easily be explained by neutral variation. Negative selection can certainly allow sequence variation that does not change structure and function. That’s the whole point of the “protein big bang” model.

What this mechanism cannot do is to create a new structure with a new function. Exactly our problem when we have to explain the emergence of new basic protein domains.

c) You say:

Each amino-acid ‘letter’ is taken to be an equal distance from all others, and this is simply not the case

Yes, it is. Because, as said, RV acts on nucleotides, not on amino acids. The variation in the genome knows nothing of the effect in the protein.

It isn’t the case in general, because amino acids cluster on properties, nor in specific instances, where the ‘distance’ between two substitutions, as determined by the 3D effect, is entirely dependent on the position in the broader matrix.

Again, you make the same mistake: you are reasoning in terms of the protein, that is, in terms of NS. But the new arrangements are created in the genome, by RV. dFSCI has to do with RV, not with NS, as explained many times.

If dFCSI takes no account of higher dimensionality, it is not likely to be a useful tool for determining protein ‘design’, even with a clear methodology for applying it to 1D structure.

Again, see previous points. dFSCI measures the probability of new arrangements by RV in the genome. The “natural selectability” of proposed intermediates must instead be verified in the lab, and then, and only then, added to the computation model.

Elements many bits apart come together in a manner vital for ‘function’.

Not certainly by unguided RV.

d) You say:

A further point relates to ‘function’. Function is frequently partitioned protein by protein, but the very modularity of protein domains means that the same domain can appear in proteins of widely different ‘function’. And the ‘function’ of the domain in each protein may itself be widely different, yet retaining the same 3D structure. So we have these sub-protein elements that display substantial phylogenetic congruence, some on sequence, some on structure, and some on both, scattered about the proteome.

That’s more or less true. That’s why I refer to the origin of basic protein domains, as you may have noticed. Let’s postpone the discussion about multi-domain proteins to when we have explained single domains.

Their integration from one protein to another is entirely within the capacity of ‘RM + NS’.

How do you know that?

Such practical, chemical considerations are, of course, part of the general “things that come out at TSZ […] better ignored”.

It is just that my definition “represents these papers” is equally testable and equally independent of the string – as I discussed above it could be performed in many different ways without the string ever coming into it.

I think there are at least three questions raised:

1.) How many other strings could perform the same function?

2.) How many different functions can be defined for your string?

3.) Is the function of the string distinct and separate from the string itself?

It’s not my fault if you use silly and useless arguments. You take the responsibility of what you do. I simply state what I think of your output.

Now, if you agree, let’s play a little game. That will show why your argument is silly and useless.

I appeal to your Bayesian heart.

Now let’s say that I give you three strings that all look alike (apparently random). And I give you three functional definitions, according to the concepts we have already discussed (please, reread also my computations in my post #235).

The three strings are as follows:

a) The string you proposed, whose function can be defined as follows:

“It points to 5 entries in the PubMed databases. We can explicitly list the entries, if and only if the string is already known.”

b) A string whose function can be defined as follows:

“It points to 5 entries in the PubMed databases, all of them indexed by the keyword: “disease” “.

c) A string whose function can be defined as follows:

“It points to 5 entries in the PubMed databases, all of them indexed by the keyword: “elaprase” “.

OK with that?

Now, let’s say that you have 1000 euros and you must bet.

The bet is as follows. We have three different statements:

1) String a) was generated in a Random String Generator, in one single attempt.

2) String b) was generated in a Random String Generator, in one single attempt.

3) String c) was generated in a Random String Generator, in one single attempt.

I tell you that only one of these three statements is true. You have to bet your 1000 euros on one of them. If you guess the one that is true, you win 2000 euros. If you bet on a false one, you lose your 1000 euros.

Bayesian, isn’t it?

Now, my question is simple:

On what statement will you bet? And why?

Please, answer that.

(By the way: it is not virtual money. Let’s say that it is real money, that you earned through hard work. And you really want to keep your money, and win more).

What you say makes no sense. You have given a long list of words. None of them obviously exhibits dFSCI. What else do you want to know from me?

You know nothing about how DNA sequences were generated or selected, and yet you speak with authority about it.

This is simply not true. I speak with “authority” (just to use your senseless word) about a specific explanation that has been proposed, that is the neo darwinian algorithm, and its only available alternative. In both explanations, it is very clear how “DNA sequences were generated or selected” (according to the explanation, I mean).

In the neodarwinian algorithm, “DNA sequences are generated by RV and selected by NS”.

In the design theory, “DNA sequences are generated and/or selected by an intelligent designer”.

As you can see, we all know those things. Those are the explanations that we test against known facts. With or without “authority” (whatever it means).

The whole purpose of all this discussion is to test dFSCI’s specificity. That’s exactly what you folks have doubts about.

I think you are really desperate now. The intellectual level of posts at TSZ has never been so low.

At this point, I feel confident in making a prediction about the results of gpuccio’s challenge, assuming he manages to disgorge any results thereto: For any string X, the answer to the question “has gpuccio determined that string X has dFCSI?”, and the answer to the question “has gpuccio been told, up front, that string X was Designed?”, will always be the same answer.

And I make a very simple statement: you are a liar. I have determined dFSCI for all the strings that have been proposed here. And for none of them was I told, up front, whether the string was designed. So, you are a liar. It’s as simple as that.

The process I have modeled is not limited to eight or ten characters. It can reasonably be extended to hundreds. It does not require resources beyond those of the universe. It scarcely requires a fast computer.

I would disagree with that statement. NS has no visibility of DNA sequences. DNA sequences are not functional and cannot be selected.

Well, I would say that, according to the proposed algorithm, they are selected for their phenotypic effects (through the proteins they encode). In the simple form of antibiotic resistance explained by the darwinian algorithm, the genetic variant is selected because of its phenotypic effects, and so in all known forms of microevolution.

Still, I don’t think you have caught the meaning of my question. Indeed, you say:

I don’t recognise function (a) and can’t make much sense of it. My function was “refers to 5 specific papers in the PubMed database” – as I have said they can easily be listed even if the string was never thought of. In fact I don’t see how you could create a list such that “We can explicitly list the entries, if and only if the string is already known”. There will always be other ways of listing them – such as the titles or the URLs.

That is the simple problem. If you don’t understand function (a), and change it, my question no longer makes sense.

Let’s see. As I defined it in my question, function a) was:

a) The string you proposed, whose function can be defined as follows:

“It points to 5 entries in the PubMed databases. We can explicitly list the entries, if and only if the string is already known.”

Why cannot you make sense of it?

You have given:

– a string

– a list of papers in the PubMed database.

Now, please, there is no difference if the list is given in the form of the numbers in the string, or of the titles.

The only important point is: the numbers in the string correspond to the papers.

But, when I asked, you admitted that you chose the papers by looking at the numbers in the string. Therefore, the correspondence of the numbers in the string to those 5 papers is a consequence of your post-specification.

Is that clear?

Now, how likely is it to have, in an RSG, a string for which we can create such a scenario? It is extremely likely. Therefore, we are observing a scenario (string + post-defined function) that is extremely likely. IOWs, there is no dFSCI, no complexity. If the original string really emerged in an RSG, no probability law was violated, no extremely unlikely event happened.

Can you agree on that?

Let’s go, instead, for instance, to string c). My definition:

c) A string whose function can be defined as follows:

“It points to 5 entries in the PubMed databases, all of them indexed by the keyword: “elaprase” “.

So, here the scenario is: we see a string, and we read the five parts of it as PMIDs. We check the 5 corresponding papers (up to now, no differences with the previous case).

Now comes the difference: we see that all 5 papers are referenced by the keyword “elaprase”, a rather rare keyword in the database.

Now, that is strange. As you can see, the problem here is not if the function is pre-specified or post-specified. It is post-specified here too, because we observe it in the string. But we are not “creating” that strange property of the string by choosing an ad hoc list of papers, as we could have done for any random string.

Here, we just observe a strange fact that requires some inquiry: the 5 papers have a remarkable property in common, and it was not us who acted to create that situation. It was the string itself that pointed to 5 papers with a common property.

So, the fact here is: whether the function is pre-specified or post-specified (because observed in an already existing string), the probability of such an event remains extremely low in a random system. That’s exactly why it is perfectly reasonable to suspect design in the case of string c).

It is very simple, and I really can’t understand why you cannot see it.

Your “argument” is, unfortunately, a new version of the old, and rather infamous, “deck of cards” argument.

So, let’s say that we observe a deck-of-cards sequence which is apparently perfectly random. We are not surprised at all. Although that particular sequence is certainly as unlikely as any other, it is perfectly natural that we observe it, because it is in no way special; it is not distinguishable from any other random sequence. We can, just the same, find some post-specification for the sequence, either just by giving the sequence itself as a specification, or by finding some connection, as you did with the PubMed papers. But if we do that as a post-specification, it remains perfectly compatible with the random origin of the sequence.

But let’s say that the sequence we observe is perfectly ordered: the four Aces of Spades, Hearts, Diamonds and Clubs, then the four 2s in the same order, and so on.

Can you deny that, even though the order is observed after we see the sequence, and is therefore post-specified, we would rightly have a lot of doubts about the random origin of the sequence? We would naturally look for some other explanation: a necessity mechanism could be considered here, and certainly we could consider design (a designer ordered the deck).

Why? Because the ordered sequence is extremely unlikely as a random result. It does not matter whether we pre-specify the sequence and then get it, or whether we just get the sequence and observe that it is strangely ordered. The result remains extremely unlikely in a random system.

On the contrary, a random sequence is perfectly likely because a lot of random sequences exist, and there is no special way to distinguish one from another.

For each of those random sequences, however, we can post-specify some “function” that can only be defined after the string is explicitly known. Such a procedure does not change the probability of having a string of that kind (a string for which an ad hoc post-specification can easily be created). Obviously, if we use that ad hoc post-specification as a pre-specification, everything changes: the probability of getting the same string a second time becomes, naturally, an almost impossible event in a random system. If we really observe it, we are perfectly justified in suspecting design.
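The deck-of-cards point can be illustrated with a short sketch (illustrative Python, not part of the original argument): every exact 52-card sequence carries the same improbability, about 226 bits, but only a sequence belonging to a tiny, independently describable class, like the fully sorted deck, ever appears in practice as something other than "just another shuffle".

```python
import math
import random

# Improbability, in bits, of any one exact 52-card ordering.
deck_bits = math.log2(math.factorial(52))   # ~225.6 bits
print(f"{deck_bits:.1f}")

sorted_deck = list(range(52))

def looks_sorted(deck):
    """The independently describable 'special' class: the fully ordered deck."""
    return deck == sorted_deck

# Shuffle many times: the sorted ordering essentially never appears,
# while every shuffle produced is, as an exact sequence, equally unlikely.
random.seed(0)
hits = sum(looks_sorted(random.sample(range(52), 52)) for _ in range(10_000))
print(hits)
```

The asymmetry is entirely in the description: "this exact random-looking sequence" can only be written down after the fact, while "the sorted deck" can be written down in advance.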

Please, consider what I have said here. And comment on that. Don’t change the cards.

And, in the light of what I have said, please answer simply my question in post #238.

I will certainly bet on statement 1) as the true statement. And you?

You say:

I don’t see the relevance of your Bayesian challenge – I thought we were trying to define a process with 100% specificity – not estimate which hypothesis is most likely.

But it is the same thing! When we evaluate dFSI in a string, we are just answering this simple question: is this string objectively unlikely as the output of a random system? How unlikely is it?

The string for which you post-specified that kind of function is not unlikely at all. That’s because the function you post-specified makes that result “unlikely” only as a second result (IOWs, only if used as a pre-specification).

On the contrary, the functions defined in b) and c) measure the probability of that kind of result, either as pre-specifications or as post-specifications.

If I say: let’s try to get a string that points to 5 papers referenced by the keyword “elaprase”. How unlikely is that event in a random system?

Or if I say: I observe a string that points to 5 papers referenced by the keyword “elaprase”. How unlikely is it to observe that string as an event in a random system?

OK, maybe we don’t really disagree at this point. My simple argument is that I asked you if your definition was prespecified or postspecified, because the evaluation of dFSI in this particular case would have been completely different in the two cases, because of the special nature of the definition (relying on a contingent list). As I have explained, dFSI would be rather high in the case of a prespecified function (target space equal to 1), almost zero in the case of a postspecified one (target space extremely big). That’s all.

With the other two functions, instead, relying only on an explicit, non contingent property, the computation of dFSI would not change in the prespecified or postspecified case. The target space and the search space remain the same in both cases.
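The computation just described can be sketched in a few lines (illustrative Python; the -log2 form and the "target space of 1" versus "target space ≈ search space" contrast are taken from the discussion, while the function name and the 5-pick search space are my assumptions):

```python
import math

def dfsi_bits(target_space, search_space):
    """dFSI as -log2 of the ratio between target space and search space."""
    return -math.log2(target_space / search_space)

# Assumed search space: 5 independent picks from a 20 x 10^6 database.
SEARCH = 20_000_000 ** 5

# Pre-specified contingent list: exactly one acceptable 5-paper list.
pre = dfsi_bits(1, SEARCH)

# Post-specified from the string: any 5-paper list would have done,
# so the target space is essentially the whole search space.
post = dfsi_bits(SEARCH, SEARCH)

print(f"pre-specified: {pre:.1f} bits, post-specified: {post:.1f} bits")
```

With functions b) and c), by contrast, the target space is fixed by the keyword and does not depend on when the definition is made, so `pre` and `post` would coincide.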

“The string identifies for each month over a period of 120 months whether the London monthly mean high temperature is above or below long-term average.”

Yes, I think the function is acceptable. But, to be complete, I would obviously ask what the period is (and in particular, if it is a future period or a past period whose values are already known), and what the long term average reference is. Just to have a completely explicit definition. That said, I am ready to follow your reasoning.

That’s false. For the other two examples if they were post-specified this would be something like taking the string, studying the papers it points to, and seeing what you can find that they had in common. As all papers have something in common (even if it is just a distinctive phrase somewhere in the text) then the probability of success is 100%. That’s why I suggest you simply amend the process to say no post-specified functions. Any function could potentially be post-specified.

No. That is wrong.

First of all, we must obviously stick to keywords, and not to any possible word in the papers; otherwise it is obvious that any paper has something in common with any other, and there would be no complexity in the definition “a string pointing to 5 papers that have at least one word in common”. That is trivial.

We could give a definition this way: “a string pointing to 5 papers that have a keyword in common”. That would be more restricted, but the target space would still be very big. As I have shown, some keywords, like “disease”, are very common. We are far from a high-complexity result.

But when I give the definition “a string pointing to 5 papers that share the keyword “elaprase” ”, the situation is much different. That keyword is very rare. The probability of having a string that points only to 5 papers indexed by that word is, as I have shown, rather low. Maybe not so low that we can in any case affirm dFSCI (here we should discuss the problem of the threshold for this particular problem), but we would anyway observe high complexity.

And you are wrong that the situation is the same as in your original definition. It is not.

As I have already said, if I observe that a string points to 5 papers indexed by the keyword “elaprase”, the fact itself is very strange.

I believe that you are confused about the real meaning of the word “post-specification” as applied to the two cases.

The definition “a string pointing to 5 papers indexed by the keyword “elaprase” ” is post-specified only in the sense that we observe the property in the string, a property that is in itself surprising, and we just define it. But we could well have defined the same property as a pre-specification without knowing any special string. The keywords for PM are publicly known; we could simply have looked for a rare keyword and defined the property (indeed, that’s exactly what I did, and I had no particular string available). Here, even if we had first observed the property in a string whose dFSCI we are assessing, and then defined the property, the complexity would be rather high. Indeed, the question we are trying to answer is: we observe here a string that has a property that defines a tiny subset of a search space. How likely is it for that string to emerge in a random system?

Instead, in your original definition, the definition itself is post-specified not only because you define it after you observe the string, but also because you define it from the string sequence. You could never have defined this particular definition, with this contingent list of papers, in preference to any other similar definition with any other contingent list of papers, if you had not known the sequence of the string in advance. I hate to say this, but I am afraid that your definition smells a little bit of circularity 🙂 .

Without knowing the exact sequence of an already existing string (the one whose complexity we should assess) you could have done only two things:

a) Give a general definition, as I have suggested, of “any string that can point to 5 papers, which can easily be listed after we observe the string”. That definition, as I have repeatedly pointed out, is correct, but it is not complex.

b) Give a huge set of different definitions, each with one of all the combinatorial lists of papers you can extract from a database of 20×10^6 papers. Not a satisfying alternative!

So, you are wrong. The situation is not the same. There is a definite logical difference between the two cases, and I am surprised that you, who are so well acquainted with logic, still can’t see it.

I was thinking of the last 10 years – 2002 to 2012 – I could do a longer period but it would be tedious. I was going to use http://www.holiday-weather.com/london/averages/ for the averages. Although the values for 2002 to 2012 are known I was not going to use them to generate the string. That’s why I said “identify” rather than “predict”. I will not even look at the actual temperatures until after I have generated the string – although I won’t be able to resist checking it has worked when I have finished. The string will simply be a string of 120 bits with 1 for above average and 0 for below average. I realise you want 500 bits but that would be really tedious to look up all the data, so I hope 120 will be sufficient to prove the case.
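The arithmetic behind this 120-month proposal can be sketched briefly (illustrative Python; the assumption that each month is an independent fair coin flip is mine, made only to show why 120 bits is the relevant figure):

```python
import math

# If each of 120 months is an independent above/below coin flip, the chance
# of a random 120-bit string exactly matching the real record is 2^-120.
months = 120
p_match = 0.5 ** months
bits = -math.log2(p_match)
print(f"p = {p_match:.3e}, {bits:.0f} bits")
```

This is why 120 bits falls short of the 500-bit threshold mentioned above, even though the string would be far too improbable to match by a single random guess.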

You haven’t thought this through. An omniscient and omnipotent God could prevent rapes from happening, and he could even prevent the desire to rape from happening, all without controlling anyone’s thoughts and desires.

You have not:

1.) provided a definition of rape.

2.) provided an argument for why rape is evil.

It’s clear to me that you have no argument.

So here’s what you have to do. Explain how you can make an argument about rape being a specific instance of the problem of evil without either defining rape or explaining why rape is evil.

Then try to make your argument without begging the question of OUGHT and FREE WILL. You can’t. That’s why your “argument” is so obviously amateurish.

Your latest god is not the Christian God that your OP is intended to mock. He/she/it is an ad hoc god you invented to support your flailing attempts at reason, so I couldn’t care less about your special pleading. I could with as much force of reason argue that this latest ad hoc god you’ve described is not compatible with the god in your OP.

You need to meet your obligations with regard to your original claim. You haven’t. Until you do, you have no argument.

So here are the three strings I would like to apply your procedure to:

gpuccio, mark,

Continuing the analysis of your strings. You were kind enough to add blanks to separate out the individual PMIDs, but I’m not sure that was necessary.

I decided to ignore your scheme of separation and devise my own, just to see what would happen. I used your first string, but took each sequence of six numbers. They also identified specific PubMed papers:

Thank you for the interesting contribution. I paste here your post, before commenting on it:

Gpuccio

I am going to have to abandon my attempt to produce a binary string which identifies when London temperatures were above average. I can’t get the data I need consistently and accurately enough.

It might be interesting to explain what I was trying to do.

I was looking for two events A and B which satisfy these properties:

A happens if and only if B happens
A (and therefore B) happen on an unpredictable schedule
No living thing is involved with either A or B
The schedule for A and B is publicly available
Under those conditions the string of when A happens (if long enough) would appear to have the function of identifying B, be complex, incompressible, digital and prespecified.

I thought being above average temperature in London and being above average temperature somewhere else very close would satisfy these conditions, but it is vital to have temperature records to a high degree of accuracy, with averages taken over the same periods. I can’t seem to find that data.

Nevertheless I wonder if you agree that the conditions I set out would be a case of dFSCI which is not designed?

Well, this is really interesting because it allows me to clarify some aspects of the dFSCI reasoning. It requires, IMO, no “refinement” of the procedure, but certainly a good understanding of the concepts.

I must say that I had in some way anticipated your example, and already given it some thought.

I am not really sure that I understand correctly in detail what you wanted to do, so I will make some assumptions and some general reasoning to clarify my views.

First of all, I believe we are dealing here with data that are derived from natural phenomena, and that can be read in some digital string.

Now, let’s say that we have a measuring system that registers the highest temperature in London each day. After a long enough time, we will have a sufficiently complex string of values.

Now, it is rather obvious that the whole system that produces the string is designed, but that is not the point here. We could in principle imagine that some natural object can store some record of the highest daily temperature for us. The interesting point is that the specific sequence of values is obviously not designed. And it is complex.

It has not, in principle, a specific function, but you could say that it is functional because it can give us information about past temperatures. I can agree on that.

So, has it dFSCI? No. Why? Because it is perfectly explained by necessity mechanisms.

Given the temperature, and the measuring system, be it some analog natural system or a designed digital measurement, the string is determined by the necessary measurement of the temperature.

So, in general, a complex string of data about some natural phenomena is certainly complex and functional, but does not exhibit dFSCI, because it has a complete necessity explanation given the natural phenomena, which, I believe, are themselves supposed to be explained by necessity or random mechanisms.

I believe that, in your general formulation, the original data string would be “B”. So, my first point is that B is complex, in a sense functional, but does not exhibit dFSCI.

What about “A”? If I understand well, A would be a string derived from B, through some form of simple computation. It could be some mathematical derivation of the data in B, or, as I believe was your initial proposal, a comparison of two sets of data.

Now, the important point here is: some complexity is implied by the procedure of derivation, whatever it is. But most complexity would be still derived from the original complexity of the data in B. So, again, the new string would probably not exhibit dFSCI. Obviously, if the derivation procedure is complex enough for the system and the time span, dFSCI could be affirmed.

Three important points:

a) In a data string, or in a string derived from a data string, the origin of the data complexity is already known, because it is implied by the definition. We have to infer nothing. For example, if you give me a complex random string, and you tell me that it is the record of the highest daily temperatures in London in a certain period, I already know how the complexity of the string arose: by measuring the temperature in London. So, I already know that the complexity in the data string is explained by a necessity mechanism. I do not have to infer that information.

b) Any process of derivation from a data string by a necessity mechanism still retains a complexity that can be explained by necessity mechanisms. In a way, the situation is not very different from a copy of the original information, like in DNA duplication, only here the necessity mechanism does not imply simple copying, but some form of computation.

c) The original data string is unpredictable because the original natural events can be described as a mixed system: necessity laws, and random variation. In the case of meteorology, we know that the original natural system can have the properties of a chaotic system, and therefore be especially unpredictable. However, there is no doubt that we all agree that those events are anyway explained as the result of random configurations plus necessity laws. So, the unpredictability of the original events is perfectly natural and explained. The derivation of B from the events, and of A from B, instead, is usually perfectly explained by strict necessity mechanisms.

Well, that’s all for the moment. I hope I have interpreted your points correctly. If not, please clarify better what you think.

First of all, I believe we are dealing here with data that are derived from natural phenomena, and that can be read in some digital string.

Or analog data represented with a digital string. Which raises the interesting question of how and where did the representation arise.

And then the representation needs to be stored so that it can be recalled/transmitted. Which raises the question of how information can be stored/transmitted in a material system.

So even if Mark did come up with a string it would still beg the questions that Darwinists are completely unable or unwilling to address.

gpuccio:

We could in principle imagine that some natural object can store some record of the highest daily temperature for us.

In a digital string? We know of only two such systems, those created by humans and living organisms themselves. And these are the two things Mark wanted to exclude.

It looks to me like Mark is attempting to incorporate two aspects, a random aspect and a necessity aspect. By analogy, if the string contains dFSCI, evolution can generate dFSCI.

There are many problems with this approach, imo. It might be an interesting topic to explore on its own. The first problem would be defining an objective function for the string. Do we find function apart from human artifacts and living organisms?

I agree with all that you say. Indeed, if you read carefully my post, I had already anticipated many of your observations. And I agree that the digital form smells of design in any case.

But I was really interested in evaluating correctly the complexity tied to the “recording” of natural events. That is an interesting form of complexity: it is complex, it is in some way functional (because it gives information about real events), but still it is not dFSCI, because it is explained by necessity (and if it is not digital it cannot be dFSCI, though it could still be CSI if it were not explained by necessity).

So, I am very happy that we have some good example here of how important is the “necessity clause” in the evaluation of dFSCI (or simply CSI).

In a sense, that reminds me of the debate we had some time ago about the information in shadows, tracks, and so on. Those are all examples of what we could call “data information”. I agree, however, that “natural” data are usually in analog form.

Thanks for your response. I must say I am surprised by what you wrote. You seem to be saying that the reason the “above average temperature record” is not dFSCI is that you know its origin (natural variation + necessity mechanism). This leads us straight into the circularity argument again, because the whole point of dFSCI was to determine the origin. Imagine I were to present you the string without telling you the origin. That is the scenario we are talking about. You would then need to determine whether there is dFSCI and, if there is, conclude it was designed. If you cannot tell whether something has dFSCI without first knowing the origin, it’s not much use for determining the origin!

I don’t think the “above average temperature record” string has dFSCI for a completely different reason. It needs a prespecified function and I haven’t found one yet. As you say you can always find a postspecified function (that is why you need to rule them out). “B” – the second string of somewhere physically close was intended to provide that prespecified function – the one string could be used to predict the other. This is an empirical relationship based on our empirical knowledge that temperatures in locations that are physically close are very similar. However, as I say, I can’t get good enough temperature records.

No. I don’t agree with you. What I said is that the definition of the function itself (IOWs, the simple fact that it points to natural data) tells us that the origin is a necessity mechanism, and therefore allows us not to affirm dFSCI. There is no circularity here. You have some strange obsession with circularities that do not exist!

If you had simply given me the string, without saying what it was, I would simply have recognized no function, and still I would not have affirmed dFSCI, for a different reason. IOWs, either I know the only function recognizable in the string, and therefore I know that it is a consequence of necessity, or I just don’t know the function.

In both cases, I cannot affirm dFSCI.

You are also wrong when you say that:

“because the whole point of dFSCI was to determine the origin.”

Again, you still don’t understand dFSCI. The whole point of dFSCI is to infer a design origin in positives. As it is a tool with many false negatives, it is of no utility for inferring the origin in other cases. Please, reflect on that.

But I was really interested in evaluating correctly the complexity tied to the “recording” of natural events.

gpuccio,

But to be totally honest with you, I think you need to consider what this may mean for your overall argument.

Darwinists will assert that the linear digital sequences in DNA are simply a recording of random variation plus environmental necessity. Therefore, there is no dFSCI in living organisms, according to your criteria.

They may be right or wrong, but you need to exercise care that you don’t cut the legs out from under your own argument. You’ll need to explain why they are wrong and/or why living organisms are different.

The string identifies for each month over a period of 120 months whether the London monthly mean high temperature is above or below long-term average…The string will simply be a string of 120 bits with 1 for above average and 0 for below average.

In what sense would that string have or perform a function?

Basically you propose to take some information and encode it into a string. Do you think it then follows that the string performs a function?

What function does this string perform that is not already present in the information being encoded into the string?
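For concreteness, Mark’s proposed encoding can be sketched in a few lines. The temperature data below is synthetic (the real records are exactly what Mark reports he could not obtain reliably); only the encoding scheme itself is from his description.

```python
# Mark's proposed 120-bit string: 1 if a month's mean high temperature is
# above the long-term average, 0 if below. (Synthetic data, illustration only.)
import random

random.seed(0)  # make the synthetic data reproducible
monthly_means = [10.0 + random.gauss(0, 2) for _ in range(120)]  # 120 months
long_term_average = sum(monthly_means) / len(monthly_means)

bit_string = "".join("1" if t > long_term_average else "0" for t in monthly_means)
print(len(bit_string))  # 120
```

Note that the bit string carries strictly less information than the temperature series it is derived from, which is what the question above is probing.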

A happens if and only if B happens
A (and therefore B) happen on an unpredictable schedule

Is B the cause of A or merely some event or condition that must be satisfied before it is possible for A to occur?

I don’t see how it follows that if A is unpredictable (happens on an unpredictable schedule) that B is also unpredictable (happens on an unpredictable schedule).

I don’t see how your temperature readings meet your criteria.

There must always be a mean high temperature for a month.

There must always be a long term average temperature for a month.

The mean high temperature for a month must always be above or below or equal to the long term average.

Set aside for now whether the data you need is reliably available, in your temperature reading example what is A and what is B?

What about rainfall measurements? Rainfall meets or exceeds a certain level if and only if it actually rains. Is that too predictable?

So, has it dFSCI? No. Why? because it is perfectly explained by necessity mechanisms.

Wait, dFSCI is based on some criteria, independent of cause.

Given the temperature, and the measuring system, be it some analog natural system or a designed digital measurement, the string is determined by the necessary measurement of the temperature.

So, in general, a complex string of data about some natural phenomena is certainly complex and functional, but does not exhibit dFSCI, because it has a complete necessity explanation given the natural phenomena, which, I believe, are themselves supposed to be explained by necessity or random mechanisms.

1- Data is only functional if there is some agency around to gather and interpret it.

2- Data is only information if there is someone around to interpret and add meaning to it

keiths: The following asymmetry explains why: the discovery of an objective nested hierarchy implies common descent, but the converse is not true; common descent does not imply that we will be able to discover an objective nested hierarchy.

Umm, if the discovery of an objective nested hierarchy implies common descent, then that would be because common descent implies that we would be able to discover an objective nested hierarchy.

There is just no way out of that. However I would LOVE to see you explain yourself- but I am sure that you won’t…

gpuccio’s argument is that a string with all the attributes of “dFSCI” loses that designation strictly because of its origin, in other words, if nature can generate DNA with a necessity mechanism, then DNA has no “dFSCI”.

That makes no sense since that is the whole reason for this debate, to determine the *origin* of the “information” that results in life.

You can’t dismiss the “origin” of “dFSCI” simply because you don’t like the source.

If DNA is the result of a necessity mechanism, then a necessity mechanism is its cause, period.

Do you have problems following along? I addressed your concerns in comment 269- I say gpuccio is wrong or misspoke, as we had already agreed that dFSCI exists independent of cause.

That said, no one has demonstrated that DNA is the result of any necessity mechanism.

Darwinists will assert that the linear digital sequences in DNA are simply a recording of random variation plus environmental necessity. Therefore, there is no dFSCI in living organisms, according to your criteria.

And so? I have always said that, if darwinists succeed in showing that RV + NS can really generate the functional complexity in living beings, then the ID argument fails. I am perfectly aware of that. And I am in no way worried. They can’t.

They may be right or wrong, but you need to exercise care that you don’t cut the legs out from under your own argument. You’ll need to explain why they are wrong and/or why living organisms are different.

No. I believe in my argument. The only “legs” on which my argument must stay are the legs of truth. If my argument is wrong, darwinists are absolutely welcome to falsify it. I have nothing to be worried about, except truth.

If darwinists are right, they are right. And I will happily admit it.

On the other hand, if darwinists are wrong, they are simply wrong. And I am perfectly confident that they are wrong.

ID (and dFSCI), like any scientific theory, is falsifiable. Let them falsify it, if they can.

If you read again my definition of dFSCI, you will see that an integral part of it is what Mark calls the “necessity clause”: IOWs, we must know no necessity mechanism that can explain what we observe, before we can affirm dFSCI.

Now, that has nothing to do with “cause”. The simple existence of a necessity explanation rules out dFSCI. It does not necessarily rule out design, or any kind of “origin”. As I have said, when the assessment of dFSCI is negative, we cannot say anything about the true origin of the object. It could have been designed, or it could have been produced by a necessity mechanism, or even by RV.

What we know, for a data string, is that the complexity in it can be perfectly explained by a necessity mechanism (such as the measuring/storage of natural events). Again, please remember that here I am not speaking of the complexity of the measuring/storing mechanism. That is a separate problem, which needs evaluation.

I am speaking of the complexity in the string sequence, its “correspondence” to natural events. That is certainly a form of useful information. It can certainly be complex (if long enough). But it is perfectly explained by a necessity mechanism (the measuring and storing of natural events). Therefore, it is not dFSCI.

1- Data is only functional if there is some agency around to gather and interpret it.

2- Data is only information if there is someone is around to interpret and add meaning to it

OK, I agree with that. But that is true for any function. A sequence in a gene is only functional because we recognize its function. In my definition, the observer is completely free to recognize and define any function. The important point is, the function, as defined, becomes the object of our dFSCI evaluation.

Any function defined for data is connected to the events they represent, or from which they are derived. Therefore, the function itself is connected to a possible (indeed, extremely likely) explanation of the data string by a necessity mechanism that can relate the string itself to natural events.

This is very interesting, because it shows that the function of data is very different from the function of a machine. Data can only represent natural events, and they are derived from them.

A machine does something, and needs specific information to do that something. That specific information is not the recording of natural events, but rather an intelligent arrangement of matter to implement a purpose.

The definition of dFSCI is complete, and can easily recognize those two different situations, thanks to the “necessity clause”.

Which is to say: phenomena that arise by natural means do not exhibit dFSCI by definition.

Wrong. The correct form is:

“Phenomena for which a good explanation based on necessity is known do not exhibit dFSCI.”

It follows that, regardless of other properties an object may have, it cannot be concluded that it exhibits dFSCI until its causal history is known.

Wrong. It is enough that no necessity explanation is available when we evaluate dFSCI. dFSCI is a diagnostic tool, and it works with what is already known.

No procedure or calculation performed upon the object can alone warrant the conclusion that the object exhibits dFSCI absent knowledge of that causal history.

Wrong. See before.

It further follows that to claim that dFSCI present in an object is evidence for a particular kind of causal history (it was designed) is patently circular, as you cannot assert that dFSCI is present until that causal history is known.

Wrong. dFSCI is not “evidence” of anything. It is an empirical basis for a design inference. Its connection with a design origin (in positive cases) is only empirical.

And, lastly, it follows that no enumeration of supposed objects displaying dFSCI, defined in this way – and the absence of counterexamples in this collection – has any empirical bearing upon the question of whether the exclusion of natural objects by definition is in fact appropriate.

I think there is some confusion (my fault – I have explained it badly). My plan was to point to the list of London temperatures not as the string but to help define the function. The function was to predict whether those temperatures were above average without looking at them. Another string, about which I would tell you nothing, would be the one that did the predicting. Your challenge would then be to tell me whether that string was designed.

In fact that string would be based on the temperature record of an adjacent location – but I wouldn’t tell you that. So it would be:

Complex

Digital

Functional using a prespecified function

I am not sure I understand. If the second string is similar, or identical, to the first, I would never affirm dFSCI for it, because it could be simply copied from the first.

If I knew that the second string is from an adjacent location, and still it is similar or identical to the first, there is a necessity law that explains the similarity: nearby locations share similar conditions most of the time.

I don’t understand your point. Let’s say that you show me a string whose function is to be identical to the true temperatures measured in London. Why should I be surprised? I will just say that a very simple necessity mechanism can generate the second string, deriving it from the first. I may not know exactly how the string was really generated: I would probably not imagine that it is the measure of temperature in a nearby location. And so? The only important point is: the function of the string is only to give information about true natural events. So, a necessity mechanism (of measuring and storing natural events) can easily explain it.

OK. I will be more precise. So the point of dFSCI is to detect design origin when there is a positive case. This is not much use if you have to know the origin to determine if there is dFSCI in the first place. Right?

Wrong. I don’t have to know the origin. But I have to know the defined function. If the defined function in itself implies a possible necessity origin, I will obviously acknowledge that fact.

In the case of a functional protein, I have no need at all to “know the origin”. And the functional definition tells me nothing about a possible necessity explanation. And I can pretty well assess dFSCI.

You always have to watch for circularity with dFSCI because it is used to determine whether something has a design origin but in order to decide whether something has dFSCI you have to assess whether it has other origins.

No. This is always the same error.

Let’s sum up your arguments in the last days.

First you tried to give a post-specified functional definition (without saying it) that in your opinion would have prompted me to affirm dFSCI. As soon as I requested more explanation about the definition (not about the origin of the string), I could easily affirm that no dFSCI was present in that particular string.

Then you tried to define a string that is certainly complex and functional. But the definition itself tells us that it is a data string, or at least a data-derived string, and therefore there is a perfect necessity explanation available.

As you can see, the origin of the string is never the problem. And circularity is never the problem. Here, the problem was only with the definition of the function.

Once the function is defined well, the assessment of dFSCI (negative or positive) follows naturally.

We only agreed a set of conditions when it was not circular by being extremely precise about the definition and recognising it was relative to a particular observer at a particular time with that observer’s knowledge.

We only agreed that the function must be well defined. And that we have to follow the rules given in the definition and procedure for dFSCI.

As a matter of interest, can you point to a real example of someone using dFSCI to detect design when they didn’t already know the answer because they knew the origin was designed for other reasons?

Yes, sure. Mung’s string.

You see, you lot could have given hundreds of strings exhibiting dFSCI: meaningful strings of text, functional source code, even functional engineered molecules, and so on. Nobody in your field has done anything like that. Why? Because you know that in all those cases I would have probably affirmed dFSCI, inferred design, and been correct.

Your only purpose has been to find something that could give a false positive. And you have failed.

The list of your errors and misunderstandings is so long that it must certainly be complex.

I cannot correct them all.

Just a couple of examples:

A string of “information” can be “digital”, “functional” and “complex” but the key attribute is “specified” which has to do with its origin.

Not at all, obviously. The specification is in the observed and defined function. Nothing to do with the origin.

gpuccio’s argument is that a string with all the attributes of “dFSCI” loses that designation strictly because of its origin, in other words, if nature can generate DNA with a necessity mechanism, then DNA has no “dFSCI”.

Not at all, obviously. My argument is that I use dFSCI to infer a design origin (let’s say it is a diagnosis). If my diagnosis is proven wrong, IOWs if the origin is then assessed as a non-design origin, my evaluation is a false positive. And that would be a serious blow to the specificity of the dFSCI procedure.

If you read again my definition of dFSCI, you will see that an integral part of it is what Mark calls the “necessity clause”: IOWs, we must know no necessity mechanism that can explain what we observe, before we can affirm dFSCI.

I strongly disagree. dFSCI exists regardless of its origins.

That said, every time we have observed dFSCI and knew the origins it has always been via agency involvement AND we have never observed blind and undirected causes producing dFSCI. THAT is why it is a design indicator.

If a necessity mechanism can produce dFSCI then it is no longer a design indicator.

I think we may just have a little communication issue.

As for function, as I said before that is something we observe and then try to figure out what caused it.

Just a little more clarification about data strings, in case it is not yet clear to all.

Let’s say that we measure the highest daily temperature in London each day, and record the results in a digital string.

Now, there is no doubt that the system that makes the measures and the recordings has some complexity. I would also say that it is designed, because I am not aware of a natural system that can do all that. Beware, I am not using dFSCI here, I am just giving a common sense judgment. First of all, the measuring system could be mainly analog, although some digital procedure is expected to create the data storage string. And second, I am not completely sure that some natural system could not keep some track in time of the highest daily temperature in London for some time. I don’t really see how, but it could be possible. And anyway, I am sure that some natural objects can keep a lot of information about some natural events, if correctly read.

So, let’s say that for the moment we cannot quantify the complexity of the mechanism that measures and stores the temperature in a string. Let’s say it has complexity “X”, and obviously a well defined function: measuring the temperatures in London and storing them.

So, X is the functional complexity of the mechanism, whatever it is.

But what about the string?

Let’s say that we use the mechanism for two days only. Then the string of data is simple enough: a few bits. The necessity mechanism would certainly be more complex than its output.

Now, let’s measure the temperature for 1000 days. Now the string is very complex, and functional in the sense we have defined for data strings (it gives us information about the temperature in London over time). But where does that complexity come from? And is it dFSCI?

The answer, as already said, is: NO.

The mechanism has not changed. No new complexity has been added to it. It just goes on working repetitively.

The simple answer is: the complexity derives directly from the complexity of the events that are measured: in this case, the temperature in London. The necessity mechanism only “translates” the information in the events to the string.

Now, the way the temperature changes in London is certainly a complex issue: it is the result of complex natural laws, and of random components. A whole science tries to understand and describe those kinds of systems.

But there is no doubt that the temperature in London can be explained by natural explanations, be they necessity laws, random configurations, or a mix of the two.

We agree on that, don’t we?

So, the conclusion is simple: the functional complexity in a data string is simply the complexity in the events the data describe. That complexity has perfectly understandable necessity/random causes. Therefore, the data strings too have perfectly understandable necessity/random causes.
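That conclusion can be made concrete with a toy sketch. The encoder and the daily temperatures below are made up for illustration; the point is that a deterministic recording mechanism is a pure function of the events, so it adds no complexity of its own and always reproduces the same string from the same events.

```python
# A recording mechanism as a pure "necessity" function: the output string is
# fully determined by the measured events. (Hypothetical encoder and made-up
# daily temperatures, purely illustrative.)
def record(temperatures, average):
    """Encode each reading as 1 (above average) or 0 (not above)."""
    return "".join("1" if t > average else "0" for t in temperatures)

events = [11.2, 9.8, 10.5, 8.9, 12.1]  # made-up daily highs
s1 = record(events, average=10.0)
s2 = record(events, average=10.0)
assert s1 == s2  # the mechanism contributes no new complexity of its own
print(s1)  # 10101
```

All the variation in the output traces back to variation in `events`; changing the events changes the string, while re-running the mechanism does not.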

I agree that some special property is really in the object. But dFSCI is a way to catch that property. It is a concept defined by us, and as I have said many times, to be empirically useful we have to consider it as a diagnostic tool, an empirical property objectively definable, and whose empirical sensitivity and specificity can be measured.

So, for me, dFSCI is evaluated in the object. It is a judgment made by us according to a precise definition and procedure.

I strongly disagree. dFSCI exists regardless of its origins.

But what are you disagreeing with? I have always said that we need not know anything about the origins to assess dFSCI. The only difference is that I would not use the word “exists”, because my definition and procedure are empirical, and have no pretence to deal with the problem of “substance”. I would only say that:

“dFSCI can be evaluated regardless of its origins”.

Now, please read again my statement:

“If you read again my definition of dFSCI, you will see that an integral part of it is what Mark calls the “necessity clause”: IOWs, we must know no necessity mechanism that can explain what we observe, before we can affirm dFSCI.”

Where am I talking of “origins”? I am only saying that, if we know a credible necessity explanation for the string, we do not affirm dFSCI. That has nothing to do with the historical origin, that we don’t know.

I remind you that, if we say that we cannot affirm dFSCI, the string can still be designed.

IOWs, I will not affirm dFSCI if I know a possible necessity mechanism that can explain the information in the string. But I am saying nothing about the origin of the string.

Only if I positively affirm dFSCI will I make a design inference. That does not mean that I know the origin. It just means that I infer design. If the origin can be independently known, now or in the future, my inference will be either confirmed or falsified.

You say:

If a necessity mechanism can produce dFSCI then it is no longer a design indicator.

That is true. If dFSCI fails in the design inference, and gives false positives, its utility will be falsified.
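For readers unfamiliar with the diagnostic-test vocabulary used in this exchange, sensitivity and specificity are the standard ratios computed from labeled outcomes. The counts below are invented, purely to show the arithmetic behind "many false negatives, no false positives":

```python
# Standard diagnostic-test ratios (invented counts, not real dFSCI data):
# sensitivity = true positives / all truly positive cases
# specificity = true negatives / all truly negative cases
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# A tool tuned to tolerate many false negatives but zero false positives:
print(sensitivity(tp=40, fn=60))   # 0.4  (many positives missed)
print(specificity(tn=100, fp=0))   # 1.0  (no false positive observed)
```

A single confirmed false positive would drop the measured specificity below 1.0, which is the falsification scenario both sides describe.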

dFSCI is not a dogma. It is not a religious faith. It is an empirical tool, part of a greater empirical theory, that is ID. I am fully confident that ID and dFSCI are very useful scientific tools. I believe ID is the best explanation for biological information.

But ID is not a Bible, nor a religion. It is science. I treat it as pure science, and I believe that is the greatest tribute I can give it.

I am really struggling to understand your comments (I do begin to wonder if there is a language problem after all). So I will try to be very precise and limited in what I say.

Suppose I

(1) Define a function: “predicts whether monthly London temperature anomalies will be positive or negative.”

(2) Present you with a string of 500 bits. I tell you nothing about its origin except to reassure you it was not in any way derived from the record of London temperature anomalies.

That is the total of all the information I give you.

On investigation you find that the string does indeed correctly predict London temperature anomalies.

Has the string got dFSCI? If not, why not?

I have some problems with those statements. Please, help me understand:

a) Your definition of function: “a string that predicts whether monthly London temperature anomalies will be positive or negative.”

What does “predict” mean? Are you giving me a string that will predict that result for the future? IOWs, is the string a pre-specification? (It’s you who used the word “predict”: I must necessarily understand what you mean).

Or do you mean: “a string that tells us whether monthly London temperature anomalies have been positive or negative.”?

Then you say:

“I tell you nothing about its origin except to reassure you it was not in anyway derived from the record of London temperature anomalies.”

First of all, you should simply “tell me nothing about its origin”. Why the exception?

And the unrequested exception is not correct, either. If I understand well what you say about the origin of the string, that it is a measurement of the temperature in a nearby location, it is, if not “derived”, certainly connected to the string of the temperature anomalies in London by the simple necessity rule that temperatures in very near locations usually have a very similar trend. IOWs there are precise laws of meteorology that can explain the similarity (or identity) between the two strings.

You don’t specify if the string you give me is identical (if we make the correct comparison) to the string of the temperature anomalies in London, or if it corresponds only in part.

If you specify all that, my answers will be simple (I believe you can already anticipate them).

I am sorry – I have not been able to read every comment on this thread. What was Mung’s string?

It’s at #111 here. I asked about the function at #167. Mung answered at #170. I affirmed dFSCI at #177. Mung confirmed a design origin at #179.

OK. Let’s change the challenge a bit.

Why? Do we agree that the challenge up to now confirms that dFSCI has 100% specificity when applied to strings whose origin is known?

All your examples are of things that are known to be man-made for other reasons.

Not exactly. The designed ones (like Mung’s string) are examples of that. The random strings are not. I agree with you that the only examples of design origin of which we are historically certain are human artifacts. But I suppose we already knew that.

We all know that text and code are man-made.

OK.

Life is not.

That’s true.

Give me an example where somebody used dFSCI to deduce design and the string was not known to be man-made (remember we are talking digital).

Well, first of all let’s replace “deduce” with “infer”. Then the answer is simple. Me. I have affirmed dFSCI for all the protein families in Durston’s paper whose functional complexity exceeded 150 bits according to Durston’s results. And I have inferred design for all of them.

Incidentally, I did give what I thought was a false positive and you said it didn’t count because you didn’t like the function.

No. Because the function, correctly stated, told me it was a negative.

Now I am trying to pin down what the rules are with a hypothetical example! (I am not saying that your procedure does not work – but I think if you define it so it does work and it is not circular you will find it does not apply to life.)

The answer is simple. It does not exhibit dFSCI, because it could be copied from the string for London, or be connected to it in some other way.

You kindly offer an assurance that the string was not copied. As I said, that is not correct. I should know nothing about the historical origin. The string is identical to an existing string of data. It could have been copied. That’s enough. It is not dFSCI.

Even if I accepted that the string was not copied, which is not in itself part of the correct procedure, I should obviously consider all possible necessity origins. For instance, you could have measured the temperatures in London yourself, without copying an existing string. Or the string could be derived from other meteorological data (pressure, wind) by some simple meteorological algorithm.

I don’t understand what you want to demonstrate by using measures made in a “nearby location”. Obviously, if the location were far enough, the mere temperatures would somewhat differ. But you astutely did not use the raw temperatures, but rather a comparison of them to the averages. That is probably more likely to be identical in nearby locations. Moreover, what do you mean by “nearby”? I could just measure the temperatures a mile away from where they are usually measured (is that Greenwich?), and I would have the same results, I believe.

You may say: but you know nothing about the origin. OK, I am fine with that. Then I really must know nothing. The only thing I know is: I have a string that is easily obtained either from existing data, or from some direct measurement of the same data, or of very similar ones.

So, no dFSCI.

Data and all that is related to them are an interesting kind of information, but they are not examples of dFSCI. They are not examples that warrant a design inference.

Mung: Every case of a text in English of 143 characters is a case of dFSCI. In every case of separately known origin — billions — it is reliably the product of intelligent design. Random text generation exercises so far have hit 24 or so characters, or a factor of 1 in 10^100 of the FSCI threshold of 1,000 bits. So, those who would pretend otherwise, know or should know better. KF
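As a hedged aside on the arithmetic in KF’s comment: assuming the common convention of 7 bits per ASCII character (my assumption, not stated in the comment), 143 characters of English text land just above the 1,000-bit threshold he cites:

```python
# A rough check of the arithmetic behind the 1,000-bit figure (my own
# sketch, assuming 7 bits per ASCII character):
chars = 143
bits_per_char = 7  # assumption: 7-bit ASCII encoding
total_bits = chars * bits_per_char
print(total_bits)  # 1001 bits, just past the 1,000-bit FSCI threshold
```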

Please, refer to the Durston paper. He analyzes 35 proteins. For most of them, the functional complexity is more than 150 bits. For none of them am I aware of a necessity explanation.

And if you specifically exclude copied strings from your method, how have you determined that none of these strings is linked to any other by a copy chain?

The problem here is not if a particular string is copied. All biological strings are obviously copied.

What we are trying to explain in the case of biological strings is how that particular sequence first emerged. It is not important how many times it was copied.

Mark’s case was completely different. There the function consisted in pointing to natural events. In that case, the information is already in the events, the string is a set of measures and a recording of those measures. The problem is not if the string we observe has been copied, but that the original string is a set of measures, easily explained by the events themselves and by the process of measuring and recording. So, to be clear, we have:

a) A set of natural events, that can be fully explained by natural laws and random effects.

b) A set of measures of those events, recorded in a digital string, whose sequence is fully explained by a necessity mechanism (the measure) applied to that set of events.

c) A set of any other string derived from the first one, or independently measured from the same, or similar, events.

I’ve published the complete history of four lineages diverging from a common ancestor — on this thread. It was probably what killed the site.

I knew you were dangerous! 🙂

All the lineages are responding to the same “oracle,” which never varies. I spent a couple hundred hours building this specifically to respond to gpuccio’s claims.

I appreciate that. But I don’t understand what claim of mine you are responding to.

Do you believe that your oracle is a good model of NS? Please, prove that. My claims are about proteins and the NS mechanism, I believe. Please, be more specific, otherwise your many hours of work will be rather useless…

I am beginning to think that a key issue is the phrase “know no necessity mechanism that can explain …”. Many of the people who have argued that gpuccio’s argument that dFCSI implies design is circular have interpreted “know no necessity mechanism” as ruling out the possibility that natural selection acted.

And that is definitely wrong.

But if (as I think) gpuccio means by “know the mechanism” that we have a detailed and explicit explanation of exactly how and why natural selection acted in the particular case, then things are not so simple.

You think well. That’s what I mean, what I have always meant. But why are things “not so simple”?

In general, when I see a phenotype, such as the shape of the legs of some species of mite, I don’t know exactly what genotypes are available or what the fitnesses of the genotypes are.

That’s why I never discuss phenotypes if the molecular basis of the phenotype is not well known.

Simply because I have never studied that species, and probably no one else has either.

Well, why not stick to species that you, or someone else, have studied? We certainly cannot discuss what is not known.

But I do know that for typical species there are genetic variations, and phenotypes that are that easily visible have noticeable fitness differences.

And the molecular basis for that, when known, is always very simple.

So in that sense I “know” that there are evolutionary forces that can in principle explain the phenotypes.

In principle? Please, show something that is really known. Principles are very common in this world. Real explanations are a rare thing.

If gpuccio sees a case where we don’t have a detailed explanation for the phenotype being a result of natural selection and random genetic variation, does he consider that we “know no necessity mechanism”?

I am not sure of what you are saying here. First of all, we need to know the genotype, and its connection with the phenotype. We have to know the genotype transition we are trying to explain, its phenotypic effects, and the natural selectability of those effects.

And if we know no necessity mechanism, then we know no necessity mechanism. (I know, that is circular 🙂 ).

If so, then he would be at risk of inferring design in such cases, when later they might be shown to be explainable by natural selection and random variation.

As I have said many times, I fully accept that risk. It’s the risk of doing science. Anyone doing science is at risk of seeing his theories falsified in the future.

Or does he mean by that phrase that we know that no necessity mechanism based on natural selection and random variation is in principle possible?

No. Absolutely not. I hope that is clear enough, for all. I am not interested in “in principle” discussions about these things.

Joe I agree that this phrase is key. Clearly it is a phrase that is open to interpretation.

I hope I have clarified it enough (see my previous post to Joe Felsenstein).

How plausible does a necessity mechanism have to be, and how much detail has to be conceived, before you can say you know of a necessity mechanism?

Let’s say it must be detailed enough, and consistent enough, to explain how things happened. It need not be completely detailed, but the important details, those that transform the “principle” into an empirical explanation, must be there. For instance, in the case of hypothesized NS, you must be able to show the intermediates, and that they are really naturally selectable.

The mechanism need not be proven true. It must only be proven reasonable and credible. That is an important point. I don’t ask for any demonstration that the mechanism really did it, but only that it could really have done it.

In his most recent example Gpuccio has indicated that he knew of a necessity mechanism:

“because it could be copied from the string for London, or be connected to it in some other way”.

without any further detail.

Please, refer to my answer to Alan Miller (#289) for that.

The point is simple: in a data string, or a data derived string, the information that we can observe consists in the connection to real events. That connection is a necessity connection. There would be no utility in a data string that is connected to real events randomly. It would give no information about the events, and would not be functional.

Therefore, for data the connection itself is of the necessity type, and that makes them data, and therefore functional. Data are measurements of events.

This is so vague it applies to life as well and indeed many of the proposed examples of things that have dFSCI. The string of text could have been copied from the Lord’s prayer!

Again the same error. If we are evaluating a copy of the Lord’s prayer, we are not interested in how the specific sheet of paper we are observing was generated, but in how the information in the Lord’s prayer was generated. Data, however copied, refer to natural events that can be explained. But how do you explain the Lord’s prayer by natural events? It is not the measure of any natural event. It is not a string of data. It is meaningful language.

In the same way, the sequence of a functional protein is a special sequence that generates a working molecular machine. It is not the copy of natural events, or the measure of anything like that. That sequence did not exist before its first emergence, and we have no good explanation of how it emerged.

It seems that the “density” of arguments at TSZ is getting ever lower.

A few brief answers to scarcely relevant comments:

Joe Felsenstein:

So there we have a question for gpuccio:

In deciding whether we “know no necessity mechanism” for an adaptation, must we rule out the possibility of RV+NS? Or must the details of the RV+NS be established before we can decide that a “necessity mechanism” is “known”?

At the moment we evaluate dFSCI, either the mechanism is known, or it is not. IOWs, if you have a path for RV + NS, sufficiently detailed and tested, I will accept it gladly. If nobody has such a thing, I will consider that no necessity mechanism is known. I am not interested in imaginary “in principles”, or in faith-driven hopes of some future vindication.

2. In the latter case the inference to design is vulnerable to having it later be shown that RV+NS is possible, so the inference to design is not sensible.

It is vulnerable, like any other scientific theory. Science is not a place for invulnerable things. You should try superheroes for that.

I think GP probably feels secure in the knowledge that an historic path through a serial stochastic, chaotic process will not be repeated this side of the heat death of the universe. We are attempting to follow the paths of molecules – admittedly ones which leave a trace, but only amongst survivors. An empirical demonstration of the capacity of RV + NS to generate a specific biological string is practically impossible – there will never be that ‘later’. That does not make dFSCI any more respectable, but it seals it off for its main purpose, which I think is to convince GP.

Your idea seals neo darwinism off for what it really is: an imagination based theory, without any support from facts.

But your idea is indeed wrong. Homologies are well used to track paths that exist. Strangely, we cannot find any for paths that don’t exist, and never existed.

Moreover, even if you cannot track those “abundant” darwinian paths from existing molecules, it is always possible to find them in the lab, if they exist.

The simple truth is that I “feel secure in the knowledge that an historic path through a serial stochastic, chaotic process” has never existed, and that’s why it will never be found.

OMTWO:

dFSCI (or any variant) is never mentioned in Durston’s paper at all. I’ve asked KF why that is and he made up some blah about it being the same thing anyway. If it’s the same thing why invent another name KF?

It is clearly mentioned at the very beginning of the paper. Durston calls it “Functional Sequence Complexity (FSC)”. It is the same concept as dFSI: the part of complexity that is bound to the function. So, KF is perfectly right, as usual.

Durston obviously can use the terms he likes. As can I. If you just understood the concepts, you would see that it is the same thing.

gpuccio: “The answer is simple. It does not exhibit dFSCI, because it could be copied from the string for London, or be connected to it in some other way. “

gpuccio, you are clearly stating here that “dFSCI” is NOT intrinsic to the “functional specified complexity” of a string.

If string A has “dFSCI”, then string B, an identical copy of string A, MUST contain “dFSCI”, if “dFSCI” is a characteristic of a string.

Only if “dFSCI” is dependent on its origin/source/generator, can you claim that B does not contain “dFSCI” even though string A does.

The simple point is that string A, here, has no dFSCI. Therefore, not even B has. The problem is not the copying. The problem is that neither the original events, nor the strings that store the measures, nor any string derived from the events or from the measures, exhibit dFSCI. All of them are explained by necessity laws, or random effects, or a mix of them.

First, I am not sure what the difference is between a data string and any other kind of string. My function was simply “predict whether the London temperature was above average or below”. Would it still be data if I were predicting whether oxygen would be carried in the blood stream or not?

First of all, I still object to the use of your word “predict”. Your string just gives information that corresponds to some natural events. Therefore, it can easily be derived from the natural events themselves, or from any already existing measure of them. The important point is the necessity connection with the natural events.

The concept of a data string is very simple: it is a string whose “function” (utility) is only its necessary connection with events. It stores information about those events, and does nothing more.

Second the necessity clause does not apply to the London temperatures. It applies to a string which I have told you nothing about.

Wrong. You have told me that its function is to give us information about the London temperatures. You have told me that its function is there only if the string has a connection with the events of London temperature.

It might be a string of amino acids coded in binary for all you know.

In that case, it would have other functions. Or, if you mean that the information about London temperatures can be written using amino acids, I agree, but then nothing changes. The important point is not how the string is written, but its informational utility.

So how can any special conditions about data (whatever that means) possibly apply to it?

It’s simple. dFSI is the part of information that is necessary to express the function, excluding the part of information that is explained by necessity. In a data string, all the useful information is generated by necessity mechanisms, and is not dFSI. It’s as simple as that.

In a functional protein, or a sonnet, or software, the functional information can in no way be explained by a necessity mechanism, or by mere natural events. Those are examples of dFSI.

Suppose you are confronted with a string of digits that happens to be the numbers of the most recent winners of the UK lottery in order. Would you say that string was designed?

It is always a data string. The only difference is that the events, here, are not “natural”, but related to human activities. But that makes no difference.

The answer is the same: the string itself, its sequence, does not exhibit dFSCI. The mechanism to get and store the data could exhibit dFSCI, but that is another problem. IOWs, once a definite mechanism to know who the winners are, and to store that information, is acting, the information in the string is simply a recording of data. It can be short or long, it is certainly more or less complex, but it does not exhibit dFSI any more than the string that records London temperatures.

Well, I made a half-hearted effort yesterday to decode your string. Various 2-8 position size increments, converting to decimal, ASCII, shifting the frame by one character at a time, and so on. So far nothing . . .

I hope you didn’t spend too much time on it, lol. You’re not supposed to have to figure out what it does. I’m hoping, with no great expectation, that Mark Frank or someone else at TSZ will write a function that reads that string and lets us know whether the coin that was used to generate it is a fair coin or not, and post the string that defines that function.

That might clear up the confusion about the different types of string.
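A minimal sketch of the kind of function being requested (the normal-approximation test and the name `looks_fair` are my own choices for illustration, not anything proposed in the thread):

```python
import math

def looks_fair(bits: str, alpha: float = 0.05) -> bool:
    """Crude two-sided test: is the proportion of '1's consistent
    with a fair coin? Uses a normal approximation to the binomial."""
    n = len(bits)
    ones = bits.count('1')
    # Under the fair-coin hypothesis, the count of ones has
    # mean n/2 and standard deviation sqrt(n)/2.
    z = (ones - n / 2) / (math.sqrt(n) / 2)
    # Two-sided p-value from the standard normal distribution.
    p = math.erfc(abs(z) / math.sqrt(2))
    return p >= alpha

print(looks_fair('01' * 250))            # True: exactly half ones
print(looks_fair('1' * 450 + '0' * 50))  # False: heavily biased
```

Such a function reads only the output string; it knows nothing about how the string was produced, which is the point of the exercise.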

I tossed a coin 1000 times, recording each head or tail as a ‘1’ (representing a heads) or a ‘0’ (representing a tails). (Call this string s0.)

Then I counted the number of 1’s and the number of 0’s in that sequence (s0). If the number of 1’s was greater than the number of 0’s I recorded a ‘1’, otherwise I recorded a ‘0’ in my string (s1).

I repeated the above steps 500 times.

Of course, I didn’t actually toss a coin 500,000 times!

Instead, I took the above text, which describes an algorithm, and encoded it in a programming language and had a computer do the actual work for me.

Now, what truly amazes me is that people cannot seem to comprehend the difference between the algorithm, the string created to encode the algorithm so that it could be run on a computer, and the string that is produced by the algorithm.

But hey, no one says humans have to be rational. But for some strange reason we all seem to think that they OUGHT to be rational.
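The algorithm Mung describes can be sketched as follows (a minimal illustration; the function name, the seed, and the use of Python’s `random` module are my assumptions, not Mung’s actual code):

```python
import random

def make_s1(trials: int = 500, tosses: int = 1000, seed: int = 0) -> str:
    """Each character of s1 records whether one simulated run of coin
    tosses (s0) had a strict majority of heads ('1') or not ('0')."""
    rng = random.Random(seed)
    s1 = []
    for _ in range(trials):
        # s0: one run of simulated tosses, heads='1', tails='0'.
        s0 = ''.join(rng.choice('01') for _ in range(tosses))
        s1.append('1' if s0.count('1') > s0.count('0') else '0')
    return ''.join(s1)

s1 = make_s1()
print(len(s1))  # 500
```

Note the three distinct strings involved: the source code above (which encodes the algorithm), each intermediate s0, and the final s1 that the algorithm produces.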

The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its “meaning” through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

Mung: So, you haven’t been paying attention. Why am I not surprised. My first string was not generated by a computer program.

That doesn’t save you from “gpuccio’s law”.

So? It saves us from your obvious ignorance, and that’s good enough for me.

What is “gpuccio’s law”?

Any string that is a result of a “necessity mechanism”, such as a computer program, does NOT contain “dFSCI”.

Is that what you’re calling “gpuccio’s law”?

What about strings that specify a computer program?

That means you are going to have to manually put together any string that you wish to assert “dFSCI” for.

So? I manually put together the first string I posted and I manually put together the strings that generated my second string.

It also means the “intelligent designer” of life is also forbidden from using computers as the resulting life-forms would NOT have “dFSCI” and therefore would not be considered as “designed” by gpuccio.

OK, that clarifies things. In cases where RV+NS is not ruled out, but still possible, but where we have not investigated enough to say much further about whether RV+NS accounts for the adaptation, gpuccio considers “that no necessity mechanism is known”.

What a pity that such a mechanism has never been shown to be existing for any molecular macroevolutionary transition, such as the emergence of a new protein domain!

Alan Miller simply believes that it is not possible to find evidence for any such transition. You say it’s just a question that “we have not investigated enough”.

The simple truth is that the neodarwinian algorithm has been considered for decades one of the most important triumphs of modern science, has been declared a fact more certain than the theory of gravity, and still cannot explain one single macroevolutionary event. What a shame!

1) Would gpuccio then concede that dFCSI is not a good indicator of Design?

From previous discussions at UD and at Mark’s blog, I think gpuccio believes the designer twiddles the bits in a way that is indistinguishable from naturalism.

That is simply wrong, indeed the opposite of what I believe. I believe that the act of design need not violate physical laws (although it could). In no way does that mean that it is “indistinguishable from naturalism”. Otherwise, why would ID theory exist?

In other words the dice are occasionally, but not always, loaded. The history we infer is the correct history, but occasionally key mutations are forced by an immaterial designer.

That is correct, but it does not mean that the result is “indistinguishable from naturalism”. The results of a design process are completely different from natural results, because they exhibit abundant dFSCI. On the contrary, if RV + NS were true, the natural history should be very different from the history of a design process. In design, intelligent information can be inputted in abundance, without violating physical laws, but certainly violating all probabilistic laws. A natural process cannot do that.

1) Would gpuccio then concede that dFCSI is not a good indicator of Design?

gpuccio, Behe, Dembski, Meyer, myself, Mung, kairosfocus, PaV, vjtorley, bornagain77, Eric- well every IDist I know of would say that dFSCI is not a design indicator if and only if blind and undirected processes can be demonstrated to be able to produce it.

2) At time t1 the gene no longer has dFSCI relative to Gpuccio’s knowledge because string was not designed.

No, at time t1 we have no need to “reassess” dFSCI: we have already made a mistake in that case. It is a false positive, and a serious falsification of the utility of dFSCI as a design indicator.

What it does mean is that if inspecting strings for which the origin is already known then by definition any string with a known natural origin has not got dFSCI at the time of inspection.

No, you are confusing very different things. In the process of testing the specificity of dFSCI for design inference, we use strings whose origin is known, but we still assess dFSCI blindly, without using our knowledge of the true origin of the string. As explained, that is the standard process for testing a diagnostic tool.
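The testing procedure described here is how specificity is measured for any diagnostic tool: compare blind assessments against the independently known truth. A minimal sketch (the data layout and function name are mine, purely illustrative):

```python
def specificity(assessments):
    """Specificity of a binary classifier tested on cases of known origin:
    the fraction of truly non-designed cases that were NOT flagged.
    `assessments` is a list of (inferred_design, truly_designed) pairs."""
    negatives = [inferred for inferred, truth in assessments if not truth]
    if not negatives:
        return None  # no non-designed cases to measure against
    true_negatives = sum(1 for inferred in negatives if not inferred)
    return true_negatives / len(negatives)

# Hypothetical test set: random strings (not designed) never flagged;
# one designed string missed (a false negative, which is tolerated).
cases = [(False, False), (False, False), (True, True), (False, True)]
print(specificity(cases))  # 1.0: no false positives
```

A false positive on a truly non-designed string would drop the specificity below 1.0; false negatives do not affect it at all.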

In the (very hypothetical) case that RV + NS were shown capable of explaining what it has never explained, that would show that our application of the concept of dFSCI to biological information gave false results (false positives). dFSCI would no longer be a reliable diagnostic tool, and the ID theory for biological information would be definitely weakened.

Although it would probably still be possible to make an argument for OOL, I would consider such a result a very strong argument in favour of the neo-darwinian theory anyway. I have always been very clear on that: I believe that the same mechanism should explain both OOL and the successive evolution of biological complexity. I believe that only design can explain those things. If you show that RV + NS can really explain the evolution of biological complexity, I will recognize your success, and I don’t think I would shift the argument to OOL alone.

So any attempt to correlate design with dFSCI would have to somehow imagine whether the string had dFSCI before the origin was known. But this is an almost meaningless exercise.

This makes no sense. dFSCI is clearly correlated to design in human artifacts, and my process of testing demonstrates that, as long as human artifacts and natural non biological strings are compared, dFSCI has 100% specificity.

The controversial point is its application to biological information.

We in ID believe that dFSCI remains a perfectly valid diagnostic tool for design even in the biological field, and we base this conviction on known facts.

You, neo-darwinists, believe on the contrary that biological information behaves in a completely different way from all other things, and you pin your hopes on an explanatory mechanism that has never explained anything.

I don’t think there is a “necessity clause” in Durston’s definition, which means, unlike dFSCI, his definition is not relative to an observer’s knowledge at a given time.

Durston gives a method to approximate the functional information in proteins, IOWs to get an approximation of the target/search ratio with an indirect biological method. He says nothing in the paper about a design inference, so he does not need a “necessity clause” at that level. However, the whole theoretical framework of his paper is based on Abel’s concepts, so it is rather obvious that he does not think that any necessity mechanism can explain the kind of information that is found in proteins.

I use Durston’s data to assess the quantitative values of dFSI in real proteins. The design inference is mine, and only mine: Durston has no responsibility for it (although I believe he would agree).
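Durston-style functional sequence complexity can be roughly sketched as a per-site entropy deficit, summed over an alignment of sequences that share the function. This is a simplified illustration only: it omits the sample-size corrections of the actual paper, and the function name and toy alignment are mine:

```python
import math
from collections import Counter

def fits(alignment, alphabet_size=20):
    """Rough sketch of functional sequence complexity in fits:
    (ground-state entropy - functional-state entropy) per site,
    summed over the columns of an alignment."""
    n = len(alignment)
    length = len(alignment[0])
    h_null = math.log2(alphabet_size)  # maximum entropy per site
    total = 0.0
    for i in range(length):
        counts = Counter(seq[i] for seq in alignment)
        h_site = -sum((c / n) * math.log2(c / n) for c in counts.values())
        total += h_null - h_site
    return total

# Toy alignment: a fully conserved 3-residue "protein family".
print(round(fits(["ACD", "ACD", "ACD"]), 2))  # 12.97, i.e. 3 * log2(20)
```

Fully conserved columns contribute the maximum log2(20) ≈ 4.32 fits each; highly variable columns contribute little, which is why functional complexity is much lower than raw sequence length would suggest.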

Perhaps it would be clearer if it were called dIC. Digital Irreducible Complexity. I think what gpuccio is arguing is there are sequences for which there can be no incremental history. Perhaps he avoids making this expllcit because it removes the argument from the realm of mathematics and into the realm of chemistry.

No, we could discuss that aspect, and I believe you are essentially right. Obviously, I would say “there are sequences for which there is no known, or reasonably expectable, incremental history”. Otherwise, you would be demanding that I demonstrate a logical impossibility, which is in no way my intention.

The simple fact is: there is no logical reason why complex sequences should, as a rule, be universally deconstructable into an incremental history with very specific requirements, such as that each increment should give a reproductive advantage. Indeed, there are many logical reasons, especially pertaining to protein sequences, why that should not be the case.

So, if such a result is in no way expected logically, or biochemically, I really need to see real demonstrations of its existence before believing in it.

He disregards inner ear evolution because we don’t have the molecular history. He disregards detailed molecular histories because they aren’t sufficiently complex. He disregards simulations because they undermine the mathematical basis of his belief system.

I disregard “evolution” without molecular history because all my arguments are based on the complexity of molecular history.

I disregard detailed molecular histories of microevolution because they are not complex, and all my arguments are based on complexity.

I am still wondering what gpuccio would have us do when a sequence is designated as having dFCSI because there is not a sufficiently detailed explanation of it by RV+NS, but later such a detailed explanation is found.

Strange. I believe I have answered that at least 10 times in the last few days.

(1) Would gpuccio then concede that dFCSI is not a good indicator of Design?

Yes.

It will be interesting to see which of those gpuccio would choose.

I hope you find my choice interesting.

In the meantime gpuccio’s dFCSI appears to be an attempt to formalize Michael Behe’s irreducible complexity argument in terms derived from William Dembski’s CSI argument.

I have no objections to that. In a moment of huge narcissism, I would call it “the modern ID synthesis” (just kidding! 🙂 ).

The attempt does not seem to me to be successful.

You cannot convince everybody…

There are important unanswered questions — so far no one here on TSZ can claim that they know how to apply the dFCSI concept.

That is probably not good, either for my clarity, or for TSZ’s understanding ability :).

I don’t think that dFCSI adds anything to Behe’s argument, and I don’t think we’ll be seeing Behe switch to describing his argument in terms of dFCSI.

Evolutionary theory predicts that the two trees will be highly congruent, if not identical. In other words, evolution predicts that given a morphological tree, the molecular tree will come from the tiny sliver of possible trees that are highly congruent to the morphological tree.

1- There isn’t any “tree” amongst prokaryotes- more of a web and evolutionary theory is OK with that.

2- Different trees result from different molecules from the same organisms and evolutionary theory is OK with that

3- Evolutionary theory would be perfectly OK with a prokaryote-only, ie non-tree, world.

4- Evolutionary theory would be OK with a non-branching lineage formed by descent with modification

5- Evolutionary theory would be perfectly OK with any of the alleged possible 10^38 nested hierarchies

And finally, keiths still has not demonstrated any understanding of either nested hierarchies or evidence.

Finally, given these statements by gpuccio, gpuccio’s inference to Design is not circular. gpuccio intends not to exclude cases that later turn out to have a natural mechanism for the CSI. Instead of being circular it is based on a faith that cases of dFCSI that have not yet been investigated will always turn out to be cases of Design.

Correct. Thank you for the non-circularity admission. I had noticed that the circularity argument was no longer very popular, but you were very kind to state that explicitly.

I have no problems with your review of my position, even if, obviously, I would describe some points with different words.

For example, you say:

“I think that this makes it almost inevitable that the argument will fail, since sooner or later someone will study one of these unstudied cases and find a plausible nondesign pathway.”

I am happy for your faith and hope (which are anyway good qualities of the soul). I humbly remind you, however, that the problem is not that we have hundreds of cases where macroevolutionary complex molecular events have already been explained by RV + NS, and a few cases that still need to be explained for lack of time and resources to study them. Indeed, it is exactly the other way round: no macroevolutionary complex molecular event has ever been explained by RV + NS. Maybe for you it’s the same thing. For me, it is not.

For GAs, I have explained that we could model RV and NS with a GA (I have also suggested how Lizzie’s algorithm could be changed to approach such a result). The simple fact is that existing GAs do everything except model NS.

Finally, you say that my inference “is based on a faith that cases of dFCSI that have not yet been investigated will always turn out to be cases of Design”.

That is true, but IMO it is not faith, here, but a very reasonable scientific conviction. I prefer to consider faith the hope that something that has never been shown, and that has logical reason to exist, will one day be shown to exist. Again, it’s always a problem of choices. In this case, specific cognitive choices and cognitive styles.

You write on a blog that is, admittedly, for “skeptics”. As I have said, I hate the word. But you could probably say that I am completely skeptical about your conviction that those things will be found.

While the “engineers of life”, i.e. IDists, point to human engineering examples when explaining biology, they refuse to accept that the concept of “inheritance”, as used by software, is also applicable to biology.

It certainly is. Why should I refuse to accept that? Biological design is, at least in good part, “object oriented”.

Every test of evolution ends up with the “improbability” argument that insists all “functionality” appears at once.

Object oriented design is still design. You have to design the objects that will be inherited. and the inheritance itself is designed and controlled. I can’t see your “point”.

So we know it but are not aware of it! I am glad to have things simplified.

Mark, are you serious here? Sometimes I am amazed at your comments. This is not complicated, it is simply nonsense. We have done that! How can you not understand?

It is simple and obvious. We are testing dFSCI with strings whose origin is known. So you, or Mung, or anybody else, collect strings whose origin you know. Then another observer (that would be me, in the thread that has been dedicated by you to the testing itself!) assesses dFSCI “without knowing the origin” of the string (which, however, is known to you). As I have written many times, the testing is done “blind”. Then, after I have assessed dFSCI, my assessment is evaluated against the true origin of the string, and classified as a true positive, false positive, true negative, or false negative. As discussed with you, many times, in the last few days.

So, why do you come out with nonsense like that comment?

Actually I think you are saying the same as me – although I guess you find your wording simpler than mine. To assess the dFSCI procedure you have to imagine you do not know what in fact you do know, i.e. the origin.

Are you kidding? I am not saying the same as you at all. And what I am saying is very simple. Again:

“To assess the dFSCI procedure I have to “imagine” absolutely nothing. I have to assess dFSCI without knowing the origin, and then checking my assessment with the known origin.”

Is it so difficult to understand that the origin can be known, but the observer who assesses dFSCI can be perfectly unaware of it?

So did it or did it not have dFSCI at time t1? (Sorry to be so complicated).

If we assess dFSCI again at time t1, we will not say it has dFSCI. But that would be from a vantage point. Our previous judgement, correctly given at the time, would, however, have been falsified, and the procedure would have been falsified, in this particular case.

GP: The executing machine works by mechanical necessity, per the loaded strings of instructions and data. However the code itself, as an information-based entity, is highly contingent, functionally specific and as a rule well beyond the 500 – 1,000 bit threshold where the only empirically backed explanation is design. And those who are playing rhetorical games to dance around such inconvenient facts know it or full well should know it. This speaks volumes. KF

While the “engineers of life”, i.e. IDists, point to human engineering examples when explaining biology, they refuse to accept that the concept of “inheritance”, as used by software, is also applicable to biology.

A computer program is a machine. It works by necessity mechanisms, but it is not in itself explained by necessity. It is very simple, as KF has very correctly pointed out in post #351.

There is no ambiguity at all in these simple concepts. Take for example the case, many times debated here in the last days, of a data string generated by a necessity mechanism. The string of highest temperatures in London is a good example.

We must consider separately:

a) The complexity of the natural events that are described by the measures in the string: the variations of temperature in London are certainly complex. But, in general, most people would agree that they are not designed. Their complexity is explained by necessity-random mechanisms, such as those studied by meteorology.

b) The complexity of the necessity mechanism that measures and stores the temperatures (in digital form). This is a designed mechanism, whose complexity can be assessed. The mechanism works by necessity, but is not explained by necessity.

c) The complexity of the data string. This is the result of the repeated measures of the events (a) by the mechanism (b). Now, the important point is that (b) connects (a) and (c) by a necessity mechanism: the measurement of the temperature. Therefore, the complexity of the string (c), however long it may be, is only “derived” from the complexity in (a), and as the complexity in (a) can be explained by necessity-random mechanisms, also the complexity in (c) can be explained in the same way, plus the contribution of the complexity in (b).

But the complexity in (b) (the designed complexity) remains constant, however long the string (c). That means that the main complexity in the string (c), especially if it is long, derives from the complexity in (a), and not from the complexity of the measuring mechanism.
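The fixed-versus-growing complexity point in (b) and (c) can be sketched in a few lines of Ruby. All names here are illustrative, and the pseudo-random source is only a stand-in for the real necessity-random mechanism in (a):

```ruby
# The measuring mechanism (b): a small, fixed program whose complexity
# never changes, however many measurements it records.
def record_temperatures(days, source)
  (1..days).map { |day| source.call(day) }.join(",")
end

# A stand-in for the natural events (a): a simple pseudo-random
# temperature source (the real mechanism is not the point here).
rng = Random.new(42)
source = ->(_day) { 15 + rng.rand(10) }

# The derived data string (c) can be made arbitrarily long, but the
# extra length comes from (a), not from the recorder's own complexity.
short_string = record_temperatures(7, source)
long_string  = record_temperatures(3650, source)
```

The program text stays the same size whether it records a week or a decade of temperatures; only the derived string grows.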

You here have clearly said that if the origin of a string, (i.e. the generator of the information in a string), is a necessity mechanism, then the string does NOT have “dFSCI”, solely because of where the string came from.

I believe here you are really confused, in good faith, about a point that is in itself very simple.

I will try to clarify it. I have not said that:

“if the origin of a string, (i.e. the generator of the information in a string), is a necessity mechanism, then the string does NOT have “dFSCI””

Those are your words.

My words are, as you yourself quote them:

“IOWs, I will not affirm dFSCI if I know a possible necessity mechanism that can explain the information in the string. But I am saying nothing about the origin of the string.”

You see, you would avoid much confusion if you just used my words when commenting on my words, instead of rephrasing them.

Can you see the difference? I am speaking of “a possible necessity mechanism that can explain the information in the string”. Not of “a necessity mechanism that is the origin of the string”.

IOWs, I am still evaluating dFSCI without knowing anything of the origin of the string, as explicitly stated in the words you quote. I can know a necessity mechanism that can well explain the string. That is a cognitive judgement about a possible explanation, and not an inference about an origin. You should always keep explanations and inferences separate. It’s always epistemology that creates the greatest problems for you darwinists, in my experience.

So, the point is: if I judge that the information in the string can be explained by a necessity mechanism, I will not affirm that dFSCI is there, and I will not infer design. Still, I know nothing about the true origin of the string.

As I have said many times, many things are designed, and yet they are simple. Some of them could be explained by a necessity mechanism, even if in reality they were designed: think of a very simple drawing in the sand, which could be the result of waves or wind, but could really be designed.

So, the origin is one thing. Explanations are another thing. The plausibility of available explanations, IOWs what we believe to be the best explanation for what we observe, is the basis for our inferences about true origins. Inferences are inferences, and are never certain. Explanations just have to be reasonable, credible, consistent, as detailed as possible, and better than their competitors.

Another point deserves to be clarified, because you have spread a lot of confusion about it.

I have never said that a designer cannot use any necessity mechanism in designing, or that the use of a necessity mechanism, such as a computer, excludes dFSCI in an object.

If you read my words, and not your rephrasings, you will see that my concept is very simple: it is the functional information in the object that must not be explained by necessity mechanisms, if we want to affirm dFSCI.

I have shown, in my post #353, that the data string of London temperatures lacks dFSCI, but not because it is registered by a necessity mechanism (the measuring machine). Indeed, the measuring machine is the only part of the system that is designed, and could exhibit dFSCI, if it is complex enough.

The data string does not exhibit dFSCI because the information in the string can be explained by necessity-random mechanisms: those mechanisms that generate the differences in London temperatures. And the designed mechanism of measurement derives the data string from the natural events according to a necessity mechanism (the measurement).

Is that clear?

So, Shakespeare writes his sonnet. That is an act of design, and the complexity in the sonnet cannot be explained by any necessity-random mechanism. Then, Shakespeare writes his words on paper by necessity mechanisms, his publishers generated millions of copies of the sonnet by necessity mechanisms, including our computers. That is of no importance. The information in the sonnet remains the information created by Shakespeare, by an act of design.

While complete “objects” might be complex, their building blocks, i.e. their “methods”, can be quite simple and way below the UPB.

Seen in this way, “objects” can be viewed as configurations of “methods”, not bits, and life can also be seen as configurations of “functionalities”, not pure bitmaps.

True, but there are three important points that you are not considering:

1) Those methods are usually still complex enough not to emerge by chance. As far as I know, “methods” are designed.

2) The reuse of methods in different contexts is designed too, and often it requires not only intelligence, but great intuitive creativity.

3) As you know, I always speak of proteins. In proteins, basic domains are the functional units (and often they are not functional in themselves, needing much irreducibly complex organization in biological machines to be really useful). It is true that proteins are made of simpler structural modules, but those modules certainly cannot be considered functional, when the only function that can be considered is giving a reproductive advantage, as in your RV + NS algorithm.

An enzyme can be useful, and in the end give a reproductive advantage, only if it does what it does. Its simpler units do not do what it does. It’s as simple as that.

Moreover, most enzymes are of some utility only when they are inserted in a protein cascade, or in a multi protein machine, and when they are correctly expressed, at the right time and in the right quantity, in the transcriptome.

When I stick only to the complexity in basic protein domains, I am really doing you darwinists a great favour.

I am happy we have reached some agreement on that point. You make two interesting comments. Here are my counter-comments:

The first point:

In any real situation the observer is going to have some knowledge in both dimensions 1 and 2. A digital string is always in some context which tells you something about it. It is going to be extremely rare that a string with a natural origin gives no inkling to the observer that it could arise through a necessity mechanism (and thus automatically does not have dFSCI). Hence the danger of circularity.

It is not really so. There are various aspects in this statement, and I will consider them separately.

a) First of all, I really agree with your considerations about how we can know the origin. It is really so. Obviously, when I say that we use strings whose origin we know, I mean that someone can give us sufficient information to convince us of the origin. In the case of Mung, for instance, he has declared that he personally wrote the software string. Unless we think he is lying, that would be good information.

b) I have always said that, to really assess dFSCI, we should define well the system where the string emerges, and the time span we are considering for its emergence. That, however, is not because we must try to “guess” the true origin, but only because in many cases we need that information to decide what threshold we should use for dFSCI. IOWs, we need to approximate the probabilistic resources of the system. We also need to be able to reason about possible necessity explanations, and that is much easier if we know something of the system.

c) We also need to know the object where the string can be read. We usually underemphasize that point in our discussions here, but it is important. A string of nucleotides in DNA is not the same thing as a string with the same information written in a blog.

d) However, after making those distinctions, I still believe that in most cases the most important thing remains the information in the string itself. Take our “challenge” here, for instance. You have given me the strings in the most abstract form. I have not, in general, asked anything about the system, or the time span, or the object where you had read the strings themselves. All the clarifications I have asked for were about the function, and about how it was defined.

And yet, I have given a judgement in all cases. Many have been negative judgements, but I have motivated each of them.

Take the only positive: Mung’s software string. I suspected immediately that it was code, as you would have done yourself. It was very clear; no special “guess” was necessary. I needed confirmation of the language and the function, to motivate my assessment correctly.

The same is easy for language, for most software, for most machines.

That brings us to your second point:

Second. The test you describe will be a situation where an observer with limited knowledge did an estimate of dFSCI and then someone else (or that observer later on) discovered what the origin really was. You talk of all the thousands of proofs of the dFSCI correlation. How many instances can you think of where this process has been followed other than specially designed tests by bloggers over the Internet!

As KF always points out, the test can be done for a myriad of objects in daily life. We do it daily for all language. For software, and, in analogic contexts, for art objects, houses, machines of every type, and so on.

You darwinists were only trying to build some false positive, or some example that could demonstrate some ambiguity in the definition. But you could have offered thousands of simple examples of designed strings. I would have inferred design for most of them. Mung could have given 1000 strings, instead of one.

In the same way, you could have easily offered thousands of randomly generated strings: we had agreed in advance that using a computer RSG was perfectly OK. I would have inferred design for none.

So, the test is always there, under your eyes. You don’t even need a blog to do it.
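The blind protocol described here (designed strings mixed with RSG output, with origins withheld from the assessor) can be sketched in Ruby. The strings, labels, and method names below are purely illustrative:

```ruby
# Alphabet for a simple random string generator (RSG).
CHARS = ("a".."z").to_a << " "

def random_string(length)
  Array.new(length) { CHARS.sample }.join
end

# One party collects strings of known origin...
designed = ["def add(a, b); a + b; end", "to be or not to be"]
random   = Array.new(2) { random_string(25) }

# ...keeps the labels hidden, and hands the assessor only the strings.
labeled = designed.map { |s| [s, :designed] } +
          random.map   { |s| [s, :random] }
blinded = labeled.shuffle
strings_for_assessor = blinded.map(&:first)

# After the assessor returns dFSCI verdicts, comparing them with the
# hidden labels classifies each call as a true/false positive/negative.
```

Scaling `designed` and `random` to 500 entries each gives exactly the 500 + 500 test proposed above.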

And keiths continues to prove that he does NOT understand nested hierarchies:

On the other hand, a process (e.g. evolution) that involves descent with (gradual) modification and primarily vertical inheritance will produce an objective nested hierarchy.

OK keiths, based on what criteria, exactly? What criteria are you using for this alleged “objective nested hierarchy”?

And how does the fact that traits can be lost affect this?

Also do you realize that Linnean taxonomy, an objective nested hierarchy, is based on a common design and has nothing to do with universal common descent?

And if gradual modification demands a smooth BLENDING of traits, which it does, how does that square with objective nested hierarchies which require SEPARATE and DISTINCT, objectively defined sets, something that gradual modification cannot produce?

Anyone who thinks that unguided evolution predicts an objective nested hierarchy needs to read “Evolution: A Theory in Crisis”. keiths won’t, because he is blissfully ignorant and obviously proud of it.

For instance, the existence of the nested hierarchy largely precludes separate creation.

Unfortunately for Zachriel the EXISTING nested hierarchy was produced by a CREATIONIST attempting to classify the CREATED KINDS, ie a special and separate creation.

For example, the following reads like a DESIGN spec-

In the nested hierarchy of living organisms we have the animal kingdom.

To be placed in the animal kingdom an organism must have all of the criteria of an animal.
:

All members of the Animalia are multicellular (eukaryotes), and all are heterotrophs (that is, they rely directly or indirectly on other organisms for their nourishment). Most ingest food and digest it in an internal cavity.

Animal cells lack the rigid cell walls that characterize plant cells. The bodies of most animals (all except sponges) are made up of cells organized into tissues, each tissue specialized to some degree to perform specific functions.

The next level (after kingdom) contains the phyla. Phyla have all the characteristics of the kingdom PLUS other criteria.

Chordates have all the characteristics of the Kingdom PLUS the following:

Chordates are defined as organisms that possess a structure called a notochord, at least during some part of their development. The notochord is a rod that extends most of the length of the body when it is fully developed. Lying dorsal to the gut but ventral to the central nervous system, it stiffens the body and acts as support during locomotion. Other characteristics shared by chordates include the following (from Hickman and Roberts, 1994):

bilateral symmetry
segmented body, including segmented muscles
three germ layers and a well-developed coelom.
single, dorsal, hollow nerve cord, usually with an enlarged anterior end (brain)
tail projecting beyond (posterior to) the anus at some stage of development
pharyngeal pouches present at some stage of development
ventral heart, with dorsal and ventral blood vessels and a closed blood system
complete digestive system
bony or cartilaginous endoskeleton usually present.

The next level is the class. All classes have the criteria of the kingdom, plus all the criteria of its phylum PLUS the criteria of its class.

This is important because it shows there is a direction- one of additive characteristics.

Yet evolution does NOT have a direction. Characteristics can be lost as well as gained. And characteristics can remain stable.

And as expected the design specifications get more intense as one narrows in on the organism.

I mean actual examples – not ones that could be done – actual cases, other than examples dreamt up to illustrate the point in completely unrealistic contexts.

What on earth are you talking about?

My string wasn’t made up to illustrate the point and it most certainly was not from some unrealistic context. I took it from an actual working program that I had previously developed completely independent of this whole exercise and merely tweaked it to have exactly 150 characters.

I could put forth any number of other strings likewise created completely unrelated to this whole thread that are neither trivial nor unrealistic. Real code from real working programs. You could too, if you wanted to.

Do you still think there’s no difference between a string that defines a function and one that is merely a repository for data?

Toronto, please show actual examples of software inheritance and how it works and compare it to biological inheritance. It’s not the same and it sure as heck doesn’t involve replication of the code defining the classes and objects. If it did you’d be constantly getting out of memory errors.

And again, show us how to create an object in your favorite object-oriented programming language by building it up from small methods of less than 500 bits.
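On the replication point, a minimal Ruby sketch (illustrative class names only) shows that a subclass does not copy the parent's method code; method lookup walks the ancestor chain at call time:

```ruby
class Parent
  def greet
    "hello"
  end
end

# Child inherits greet without duplicating its code.
class Child < Parent
end

# The method body is stored once: its owner is still Parent.
owner = Child.instance_method(:greet).owner
greeting = Child.new.greet
```

However many subclasses or instances exist, `greet` lives in one place, which is why inheritance does not exhaust memory the way literal replication of class code would.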

But there is a clear difference between functional strings and non-functional strings. Try a data string:

eval "abcdefghijklmnop"

=> NameError: undefined local variable or method `abcdefghijklmnop'

oops! Not a functional string!

Why don’t you give us some similar examples that you think make your case?

You would not replicate “definitions” of anything since those are really compile-time responsibilities, not run-time.

You make no sense. Again, I’m using Ruby. It’s not a compiled language, it’s an interpreted language.

Do you really think that the code of a class is duplicated every time it’s inherited, regardless of whether it’s at run time or compile time?

If it’s not, then what is the similarity to biological organisms? Seriously. Would you just shut up and think for a moment?

The replication of actual runnable code and data is up to the running process and is only limited by computer resources just like any real biological process would be limited by its resources.

A living organism may indeed not replicate due to lack of resources. I think there are studies that confirm this.

But computers don’t refuse to implement inheritance due to lack of resources. Sheesh.

Remember, we are dealing with object code at run-time, not source code.

You didn’t read anything I wrote previously, did you. Or linked to.

1. I am using Ruby, an interpreted language. I don’t have to compile it to run it.

2. There is an additional step called linking that occurs with compiled languages such as C. Object code is not what gets run at run time.

What language do you program in? Or have you never programmed, and that’s why you are having so much difficulty with concepts? Do you know what a make file is?

I would not do a simulation at the programming language, i.e. source code level, it would be done with real functional object code which would be significantly below 500 bits for simple “methods”.

Do it. Show us. How are you going to get object code without source code? Use pseudo-code for all I care. Just show us something real. You keep making assertions without any demonstrations to back them up.

I posted a link to Ruby’s BasicObject. Even that is incredibly complex and well beyond 500 bits. And it’s been stripped of much of what is normally present for a standard object in Ruby.

What will your object do, if anything?

Well, let me give you my opinion.

1. An object must be capable of having attributes.

2. An object must be capable of having behavior.

So, all you need, as far as I am concerned, is two methods. Now, your task, should you choose to accept it, is to define a method that allows a programmer to define methods (behavior) for the object.
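The two requirements above can be sketched in Ruby; the class and method names are hypothetical, chosen only to illustrate an object with attributes plus a way to define behavior for it at runtime:

```ruby
class MinimalObject
  def initialize
    @attributes = {}
  end

  # Requirement 1: the object can hold named attributes.
  def set(name, value)
    @attributes[name] = value
  end

  def get(name)
    @attributes[name]
  end

  # Requirement 2: a method that lets the programmer define new
  # behavior (methods) on this particular object at runtime.
  def define_behavior(name, &block)
    define_singleton_method(name, &block)
  end
end

obj = MinimalObject.new
obj.set(:color, "red")
obj.define_behavior(:describe) { "a #{get(:color)} object" }
obj.describe # => "a red object"
```

`define_singleton_method` adds the behavior to this one object only; using `define_method` on the class instead would share the new behavior with every instance.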

First of all, I certainly share your (and Petrushka’s) enthusiasm for OOP.

But let’s go back to work. You say:

I think you should come up with a different scenario that explains why the “generated output of a computer algorithm”, does not have “dFSCI” if all other requirements, i.e. d F S C and I, are met.

Remember, the computer program, the “necessity mechanism” is not passing data from input to output when I use the analogy of a computer program.

One of the few things Joe and I agree on is that the attribute “dFSCI” as applied to a string, is not dependent on how it was produced.

Well, let’s say that if you have a computer program that generates a string that exhibits dFSCI, and I observe the string, and know nothing of its origin, I will affirm dFSCI for the string, and infer design. Which will be correct, because the string was generated by a designed computer program, and therefore its origin is from a design process. So, that would be a true positive.

A computer program that can generate a string with dFSCI must be, as far as I know, a designed program exhibiting dFSCI. It is, however, a compression of the output string, therefore the Kolmogorov complexity of the output string is the complexity of the program, if that is lower.
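The compression point can be made concrete with a toy Ruby case (illustrative only, not a real dFSCI computation): a long but highly regular string whose shortest known description is the short program that emits it, so the string's Kolmogorov complexity is bounded by the program's length rather than the output's.

```ruby
# A short generating program, held here as text...
program = 'print "ab" * 1_000'

# ...and the long, regular string it would print.
output = "ab" * 1_000

# The description (program) is far shorter than the string it yields,
# so the output's Kolmogorov complexity is at most roughly the
# program's length, not the output's length.
ratio = output.length / program.length
```

For a genuinely incompressible string no such short generator exists, which is why compressibility matters to the false-positive scenario discussed below.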

As I have already discussed, the only way to generate a false positive this way would be the following:

a) To have a computer program that was not designed (for instance, that arose in the system without any intelligent intervention, by pure RV).

b) And that is capable of generating a string that exhibits true dFSCI, and has no objective sign (like order, or computability) of being compressible by a very simple program that could arise by chance.

In that case, I would make the design inference, and I would be wrong. That would be a false positive.

Can we find suitable terms such as “presenter” and “generator” to show the different meanings?

Why not use mine, now that you have understood them?

For convenience, I repeat here some explicit definitions:

a) Origin: the way a string, or object, comes into existence, as observed, directly or indirectly, in a way that gives reasonable certainty. IOWs, we treat the origin as a fact, known or unknown. If the fact is unknown, we can try to make inferences about it. Those inferences can be right or wrong, but we can know that with certainty only if the origin is independently known as a fact.

b) A necessity mechanism is any mechanism that links an output to specific causes by necessity (probability = 1). It is usually part of some proposed explanation (theory).

c) A probabilistic explanation is a proposed explanation for some output where the output itself is described as the result of a random system, and its probability in that system can be evaluated.

d) Explanations are theories that try to explain facts. They are based on necessity mechanisms and/or probabilistic explanations. Explanations are proposed to explain facts, and the explanation, if accepted as “the best available explanation”, can imply some inference about unknown facts, like the origin of what we observe.

e) Proposed explanations can be good or bad, consistent or not, simple or complex, convincing or not, supported by known facts or not. Inferences can be right or wrong. However, no scientific knowledge is ever absolute, or definitive.

What is designed are the human designed “methods” as used in software, which is one half of our analogy.

The other half, what we are making the analogy about, is biology and it is precisely the origins of its “methods” that are the subject of debate.

That is true. Whoever said anything different?

I just said that what corresponds to “methods” in your analogy, that is, the simple subunits of a protein (alpha helices, beta sheets, and similar), is not naturally selectable and has in itself no useful biochemical function. If you were talking instead of multi-domain proteins, then single domains can certainly be functional, or potentially selectable.

I mean actual examples – not ones that could be done – actual cases, other than examples dreamt up to illustrate the point in completely unrealistic contexts. There are meant to be thousands of them which have demonstrated the utility of the dFSCI process. Surely you can show me one? The alternative is that the empirical correlation is based only on hypothetical cases and noddy examples.

Remember the conditions. An observer did not have any idea of the origin. Assessed that the string had dFSCI and then another observer revealed that the string was designed.

This was the challenge here. Why don’t you give me thousands of strings to judge? I am ready, as far as my time allows.

You choose 500 designed strings and 500 generated by an RSG. This is the idea. We can do it. Anyone can do it.

I already know what the result will be. But I am available to spend the time to convince you, if you are not yet convinced.

But any readable “string” that is output by a computer is ultimately a repository of data, regardless of whether it “defines” values or “defines” functions.

If instead of writing a “string” to a screen however, you instead output data to a D/A converter, you now truly have function.

There is a difference. We have discussed data strings here as those strings whose only detectable function is as repositories of information about facts (like measures).

Instead, a string that has an independent function, such as the information for functional software, or a functional protein, is different (although you can still consider it as some form of “data”).

The difference is: in the second case, the string conveys data about a function. In the first case, the string only conveys data about natural events.

For our purpose, this is very important: natural events can be explained by necessity-random mechanisms. Instead, functions are usually designed by intelligent agents (unless they are very simple).

That’s why we use functional information as specification, but we don’t use simple data (factual information) in the same way. It’s all implied in the “necessity clause”.

What you did was effectively ask Gpuccio to guess the function. That’s a completely different game.

No! Absolutely not! Why do you say things that are not true? There is already enough confusion here.

Mung has proposed the string. Then I asked him its function. He gave it. Then I affirmed dFSCI, explaining why, and made a design inference. Then I asked Mung confirmation about the origin, and he confirmed that he had designed the string.

This is the simple truth. Please, stick to the truth.

Just look at my post #284 to you. The whole story is there:

Mark:

“I am sorry – I have not been able to read every comment on this thread. What was Mung’s string?”

It’s at #111 here. I asked about the function at #167. Mung answered at #170. I affirmed dFSCI at #177. Mung confirmed a design origin at #179.

If I design a computer program that dumps memory from a random memory location, that string might contain English text or a string of binary digits in sequential order or even valid machine code.

Since I didn’t “specify” what the actual output should be, the string does not have “dFSCI”, as the ‘S’ attribute is FALSE.

When you read it however you might see, “Please enter your name….”, which seems to be specific, and therefore the ‘S’ attribute of “dFSCI” would be TRUE.

You would have a false positive if you claimed designed “dFSCI”.

Even though the program itself is designed, no output was “specified” since the “search space” and “target” were random.

Well, this is interesting. And it is even more interesting to witness how you guys are trying the impossible to show that dFSCI can be “circumvented”.

But the impossible remains impossible.

You see, if you just tried to simply understand the meaning of dFSCI, you could probably easily answer your questions yourself.

In your example, the only important point is: what can explain the functional information we observe?

So, let’s say that your random program dumps a Shakespeare sonnet and some random memory without any meaning.

We could obtain something like that:

bbbtsrwi doenwlo ohv nspiwe llllasèap
bxhsoidy nffo elsnau ls spdppòd
istciaed lflcpm aaiauusuusalnxòpxè
Why is my verse so barren of new pride,
So far from variation or quick change?
Why with the time do I not glance aside
To new-found methods, and to compounds strange?
Why write I still all one, ever the same,
And keep invention in a noted weed,
That every word doth almost tell my name,
Showing their birth, and where they did proceed?
O! know sweet love I always write of you,
And you and love are still my argument;
So all my best is dressing old words new,
Spending again what is already spent:
For as the sun is daily new and old,
So is my love still telling what is told.
xtrsuds floe qalkpapò pòclpsp ,mlp,kmpòa
mnxozgtausb c cpdnvcik sakuuz òlòpcòcò

Now, answer your own questions:

a) Does the string exhibit dFSCI?

Simple answer: part of it does. Can you guess which part?

b) Shall I affirm dFSCI for the string?

Simple answer: yes, but only for the meaningful part. Remember, dFSCI is about the minimal information that can express the function. The meaningful part of the string completely expresses the meaning. All the rest is irrelevant.

c) Shall I infer design?

Simple answer: certainly yes, for the functional part.

d) Would that be a false positive?

Simple answer: certainly not. It would be a full true positive. The meaningful part of the information in the string, indeed, was designed by Shakespeare, as we all know.

Fire is a very important function but it was not designed by an intelligent designer, rather it is a chemical process that has both “necessity and random mechanisms”.

I don’t follow you. Fire is a function? That’s news to me. I thought it was a natural phenomenon.

If a random lightning bolt hits a dry branch in a forest you will have a fire that will keep burning until it runs out of fuel or something stops it.

And so? If it rains, it rains until it stops. Can you see? I can build brilliant arguments too, if I want.

The process of fire is functional but not designed and neither are some specific instances of fire though they also sometimes serve a purpose for example, in promoting re-growth in a forest and thus changing the local environment.

You are mentally confused. All events have effects. That does not mean that they are “functional”. Function is a condition of consciousness. But certainly, we can recognize some function in anything, even the falling of a stone or the flow of a river.

That’s why I leave complete freedom of function definition in the dFSCI procedure. I am not interested in the function itself, but in how complex the functional information is.

If the functional information (the information necessary to express the function, the information that is not merely the effect of necessary laws) is highly complex, then the object exhibits dFSCI and I infer design (this is probably the nth time I say it, with a very big “n”).

Fire does not exhibit dFSCI. Any “function” you can define for it is completely explained by necessity and/or randomness. But if someone applies fire selectively, according to a complex pattern that can be linked to a definable function, then the scenario can be very different. The pattern can no longer be explained by laws or randomness, if it is complex enough.
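The two-step test just described (first exclude a necessity explanation, then require high functional complexity) can be sketched in a few lines. This is only an illustrative sketch; the function name, arguments, and the 500-bit threshold (the figure used elsewhere in the thread for our solar system) are my own framing, not a formal statement of the procedure.

```python
def exhibits_dfsci(functional_bits, necessity_explains, threshold=500):
    """Sketch of the dFSCI test as described in the thread:
    affirm dFSCI only if no law/necessity mechanism explains
    the function AND the functional complexity exceeds the
    chosen threshold (here 500 bits, an assumed figure)."""
    if necessity_explains:
        return False  # e.g. fire: fully explained by law plus chance
    return functional_bits > threshold

print(exhibits_dfsci(0, True))     # fire -> False
print(exhibits_dfsci(700, False))  # complex functional pattern -> True
```

On this sketch, fire never reaches the complexity test at all, because it fails at the necessity node first.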

I made a challenge. You accepted it. I complied, and I made all my assessments and motivated them. There was no false positive, as I expected. For me, the challenge is done, unless you want to present more strings.

The procedure with Mung’s string was perfectly correct. I recognized that it was code because I have some experience with programming, but I had to ask Mung for the function because my experience was not enough to detect it myself. I had already declared that I would ask the function, and only the function. And you had agreed to that.

My design inference for Mung’s string has nothing to do with him being an ID supporter. I would have made the same inference if you had presented the string, and given the same functional clarifications.

I can admit that I would have been more careful in your case, and thought two seconds more probably, just to be sure that you were not playing tricks. That’s only human, and if you want to interpret it as Bayesian thinking, instead of simple human caution, be my guest.

I would ask you why you never offered a designed string, or simply a random string without meaning: that’s playing games, indeed.

In your language, all of a sudden serious discussions in a blog like this, where most of us sincerely spend their time, their resources, and their passions to defend what they believe is true, become:

“the unreal context of Noddy challenges on blogs put together or selected by people whose motivation you can easily guess”

I strongly disagree. This is a deep place, where deep issues are discussed, much more seriously, IMHO, than in many scientific papers or academic circles.

I love this place, and I love this activity of the mind and of the heart. If you suddenly despise all that, if you suddenly think that it is out of the “real world”, please keep your opinion, but I have no sympathy for it.

But more to the point – you repeatedly said before we embarked on this that there were many, many cases of dFSCI which have all proved to be designed. When challenged you seem to try any other response than describing a single one.

You must be kidding.

Let’s reverse the challenge. I give you a few strings, and you decide if you want to affirm dFSCI and infer design:

1) “Feedback is a process in which information about the past or the present influences the same phenomenon in the present or future. As part of a chain of cause-and-effect that forms a circuit or loop, the event is said to “feed back” into itself. Ramaprasad (1983) defines feedback generally as “information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap in some way”, emphasising that the information by itself is not feedback unless translated into action”

If you were teaching something to a child and they said they didn’t understand you, would you just keep repeating yourself instead of adapting yourself to your listener?

I have never refused to clarify. I have clarified a lot of times. I refuse to adopt your terms instead of mine, for the simple reason that mine are better.

If instead you just answered questions as asked, you would actually be typing less and putting the pressure on our side to answer your clarified position.

You are a liar, and believe me, this is really a compliment. I have always answered questions, if they had a minimum of sense. Your side, including you, very often does not answer my answers, or rephrases them with malice or arrogance or both. Your side (with few exceptions, and you are not one of them) would never admit any true thing about ID or dFSCI, whatever the pressure. You are fanatics, and many of you are liars.

OK, you admitted a very obvious thing once. And I recognized that (as a rare event).

All the other times, your behaviour has been obstinately reticent, and many times, like the last one that earned you the liar thing, explicitly unfair, arrogant and insulting.

You know, people are often mixed realities. I can live with that.

People are trying to understand your points, because we need a good understanding of exactly what you are trying to say, in order to generate a proper response.

I don’t agree. If you guys are trying to understand my points, either you are not making any serious effort, or you are seriously handicapped, or, more likely, you are only trying to find any possible weak aspect in my points.

In a Bayesian mood, I would definitely bet on option number three.

Therefore, your attention and memory are strangely selective and biased, and even your reading of simple terms, however many times I have explicitly defined them, seems targeted only at finding fault with them, certainly not at understanding what I mean by them.

The general attitude of all of you is much more than biased, many times intentionally offending and cheating.

As I like to give thanks for good things, I have always recognized the few good things that have come from some of you, including the one from you: they can be counted on the fingers of one hand.

What have you all done, on the other hand? Let’s see.

You have attacked me with a false accusation of circularity for days or weeks, explicitly ignoring, rephrasing, or simply lying about my detailed explanations. Well, now the argument of circularity seems less popular, and a couple of you have in some way admitted, through their teeth, that my argument is not circular. And yet, my argument has never changed in all this time.

You have challenged me to apply dFSCI to strings provided by you. I have done it. All the strings provided by you have been tricky attempts to demonstrate that dFSCI is false, or ambiguous. All have failed. Nobody among you has even tried to do what the challenge was about: provide a series of strings, designed or not designed, to verify if dFSCI really can infer design. All your strings were negatives, more or less smart attempts to elicit a false positive. All were easily recognizable for what they were: negatives, strings where no dFSCI was recognizable.

Only Mung has provided a designed string, and I have promptly recognized it as exhibiting dFSCI. A true positive, the only one.

And yet, what happens? Mark, who is certainly the most sincere among you all, after having debated that, after having had a full response from me about the only true positive in our “challenge”, suddenly forgets everything about it, and writes a series of wrong considerations! Not intentional, certainly, but a clear demonstration of the strong cognitive bias, and scarce fairness, that all of you show toward my arguments.

And you say that you don’t understand what I say because of my terms, or because I don’t answer your legitimate questions? Liar, liar, liar.

You understand only what is convenient for you. You are so selectively biased that usually I must explain the same obvious thing at least ten times before you are forced to step back a little. You should be ashamed.

Why would you bother to formulate your “dFSCI” argument if your only intention was for it to be accepted by those who already share your convictions? Clearly they don’t need convincing, we do.

I have no intention to convince any of you. I would be a stupid optimist if I thought that.

What I do want to do is to show how my arguments are perfectly sound and convincing, even in the face of unfair and unreasonable criticism from biased adversaries. I want to show that to all sincere readers of this blog, and that’s why I discuss with you, whether you deserve it or not. You know, the world is not made only of people who have already embraced ID and of people who will never consider it, not even in the face of overwhelming evidence. There are many people out there who still have an open mind, and are interested in deciding what is true and what is not.

If your terms are vague to the very audience you are trying to reach, why would you continue with them?

You might have a false negative if you decide the part you cannot read from a human point of view is not designed. As an example, it might be the header for the communications packet that ensured the data payload was properly received by a slave terminal.

And so? I will have a false negative for that part. What’s the problem? What is vague in my definition of dFSCI, where I say that there can be many false negatives?

Suspiciously, the bottom unreadable portion could be a trailer for this packet and/or the header for the next one.

Anything could be. And so?

Even worse is the fact that you have a false positive here for any data string you handle in this manner, since the content of the string was not “specified” by me for you to receive, it was “specific” only to you, which means you painted a target around the arrow.

Utter nonsense, to say the least. I have no interest in you specifying anything, especially “for me”. Your imagination runs wild.

I recognize the meaningful part as exhibiting dFSCI. It does. It is a true positive. The information in that part is designed. Are you denying that?

Since I had no intention of conveying that “specific” message to you, it fails the ‘S’ portion of “dFSCI”.

Strange; in my definition of dFSCI, you and I were never mentioned. The “S” portion of the name was never meant in the way you seem to take it. Indeed, my concept is simply about “functional specification”, the specification given by a function.

Here, the function is the conveying of the meaning in the sonnet.

What have you or I to do with that?

That would be like the “intelligent designer” assembling portions of DNA out of order and yet expecting that “string” to be functional.

The string could still be considered functional, but the function should be defined in a wider way, such as “a string made of single phrases that are correct in English and meaningful, but that do not have a general meaning as they are assembled”. With that definition, the functional complexity would be lower, and should be computed differently.

At this point it may probably not be “functional” as well as being not “specifically” intended.

We don’t assess function as a specific intention. We are not mind readers. We just recognize a possible function, define it appropriately, and compute the complexity linked to the function we have defined.

We are really discussing ID as it applies to biology and the sonnets are analogies.

No. We were discussing my challenge. My challenge was about strings whose origin we know, such as sonnets, software, or random strings. Not biological strings. We do not know the origin of biological strings. That’s why I have used the sonnets as an example.

I have read your new summary. I find it honest enough. I disagree on many points, but I believe I have already explained why, so I will not repeat myself. Let’s say it is an honest summary of your views about my arguments.

But I fully agree on this conclusion:

The problem now is that carbon chains are the only digital strings with any kind of complexity and these are just the ones we are trying to evaluate. There are no digital strings at the molecular level with dFSCI except for those involved in life.

That’s exactly what makes me so certain about the design inference for biological strings. Again, different choices.

The problem now is that carbon chains are the only digital strings with any kind of complexity and these are just the ones we are trying to evaluate. There are no digital strings at the molecular level with dFSCI except for those involved in life.

The announcement at TSZ that your definition of dFSCI was circular was met with great fanfare and many cheers from the peanut gallery. Strangely, when that line of attack failed, many of them just fell silent.

You passed the logical test. Then came the empirical test.

As you point out, the main thrust was to have you give a false positive. What does that tell us?

They understood, at whatever level, that giving you strings with objectively defined functions that were in fact designed was not going to help their cause. It means that, in the main, they know full well that you are correct.

How can I believe that you are not intentionally generating confusion, just for the hell of it?

But then you have failed a design connection! The whole point of “designing” something is to get the result you intend.

Please, this is ridiculous. What I said was:

” We don’t assess function as a specific intention. We are not mind readers. We just recognize a possible function, define it appropriately, and compute the complexity linked to the function we have defined.”

It is very clear what I am speaking of: the procedure to assess dFSCI in a string. In that procedure, there is no discussion about “intention”, least of all “specific intention”.

If we affirm dFSCI, we infer design. Design is obviously an intentional process, but what has that to do with the assessment of dFSCI as described?

I affirm dFSCI for the sonnet. Then I infer design for it. Correctly. The sonnet is obviously intentional: Shakespeare certainly intended it to express a definite meaning (not always easy to understand, but that’s another story). But you and I have nothing to do with Shakespeare’s intention, least of all with the evaluation of dFSCI in his sonnet.

Is this vague, again?

If you don’t get the result you intend, your design has failed to meet its specified functionality.

And so? what has that to do with the assessment of dFSCI?

If you’re saying a designer would accept a result that exists in a “set of results”, then any improbability assigned to a “target” in a “search space”, drops drastically.

I don’t know if this is vague, but certainly I don’t understand what it means! Please, explain better.

If the “intelligent designer” of life is willing to accept one of a number of “configurations” for his “designs”, then Behe, kairosfocus and Dembski can no longer use the improbability argument.

Again, not very clear, but I will try to interpret it in some way.

A designer can choose any configuration he likes, but obviously, if he is a good designer, he will choose a configuration that expresses well the desired function. That is very trivial.

But it is certainly possible that many different configurations can express the function well enough. That is certainly the case with proteins. We know that many configurations can still express the function. That’s why, in evaluating dFSCI, we try to compute the target space/search space ratio to calculate the dFSI. That’s why, in Durston’s paper, the functional complexity of a protein is much lower than the maximum complexity derived from the length of the sequence.

But we can still use the improbability argument, obviously in reference to the functional complexity, and not to the complexity of the search space.

I don’t know if these vague comments can answer your “point”, because I am not sure I have understood your point.

No, what you have done is make a “functional description”, not a “functional specification”.

As an example, if someone asks kairosfocus to look at an electronic circuit and find out what it does, he could perform tests and then render his description of how it operates.

If the builder of the board then disagrees and quotes his “functional specification” as stating a completely different functionality, KF gets to say, your “specification” has not been met.

What KF cannot do, is proclaim that what he has observed is the “functional specification”, as clearly he had no way of knowing what was actually intended.

A design that has an unintended result, has not been “specified”.

I am in complete amazement. What you say is beyond any reasonable interpretation.

Please, go back to my definition of dFSCI.

d is for “digital”

FS is for “functionally specified”

C is for “complex”

I is for “information”.

I don’t know what you mean by “specification”, but I certainly know what I mean by “functional specification”. It happens that I have defined that vague term many times here. Maybe you were distracted.

“Functional specification” means that an observer recognizes a function in the string, whatever function he likes, defines it explicitly, and gives some objective method to recognize that function and measure it in any possible string.

That is a specification, because it defines a subset of the search space, the target space: the set of all strings (usually, for simplicity, of the same length) that express the function as defined. The ratio of the target space to the search space will give the functional complexity for that functional specification.
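Expressed in bits, the target/search ratio gives the functional complexity via a negative log2, in the style of the Durston-style dFSI mentioned earlier in the thread. A minimal sketch; the function name and the protein sizes in the example are invented for illustration, not taken from any actual measurement:

```python
import math

def functional_complexity_bits(target_space, search_space):
    """dFSI in bits: -log2 of the ratio of the target space
    (strings expressing the defined function) to the search
    space (all strings of that length)."""
    return -math.log2(target_space / search_space)

# Hypothetical 100-residue protein: search space of 20**100 sequences,
# of which 20**60 are assumed to still express the function.
print(round(functional_complexity_bits(20**60, 20**100), 1))  # 172.9
```

The smaller the functional target relative to the search space, the higher the resulting complexity, which is why tolerant functions (many working configurations) yield fewer bits than the raw sequence length would suggest.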

This is all in the definition and procedure. You will certainly find it vague, but I cannot help you for that. The obvious thing is that my definition and procedure have nothing to do with the strange, rambling comments in your post.

Our problems with gpuccio are mainly in trying to understand what he means.

At one point it seemed that gpuccio, Joe and I agreed that “dFSCI” was not dependent on its origin. Soon after that, Joe and I both seemed to think that gpuccio had made statements to the effect that in certain cases, “dFSCI” was dependent on its origin.

Further on I found out that gpuccio didn’t mean origin of the string as much as he meant the “content” of the string, but he doesn’t use terms as I understand them.

It’s the same sort of confusion with Upright BiPed’s use of “arbitrary”.

Using the term “materially arbitrary” however, is as clear as using the term “aggressively passive”.

Please don’t anyone ask me to clarify what the word “passive” means as you could easily Google it and get a clear definition.

Ehm… Just to be clear:

a) When we assess dFSCI in a string, we have no idea of its origin (said as clearly as possible at least 20 times in the last week).

b) The origin of a string is the simple observable fact of how it is generated: I see you writing a poem to express your feelings, I know its origin is from you, a conscious intelligent being: the string is designed. I see you tossing a coin and just writing down the results, I know that the origin of the string is from a random system (the coin tossing). Obviously, we are referring here to the origin of the information in the string, of the arrangement of values: it is not important whether the string is written by you, or by the rain, or by a computer.

c) The content of a string is the content, the information, the arrangement, you name it. What is not clear in the word?

All that has been clarified a lot of times. You have problems? Maybe, but I can’t understand why.

Half of the strings are proper English sentences and the other half are random nonsense.

The odds of my randomly sending you a valid text string is 1 in 2 but the odds of my sending you any “specific” message from that pool is one 1 in a 1000.

Now let’s apply it to biological “design”.

If I as the “intelligent designer”, have a pool of “genetic code” that contains the “information” to make corn bug resistant and also the code to make it 12′ tall, and my intention is to make the corn bug resistant, I must send you that “specific” code.

An IDist cannot in this case, look at a field of 12′ tall corn and say, “There is no way corn can so quickly change from being 8′ tall to being 12′ tall without being ‘designed’ that way.”

As the “intelligent designer”, I would have failed in this case as the “design” I intended, clearly did not get into the organism.

That is the whole point of these discussions, that ID is a “theory of intent”, and should not accept “random variation” as acceptable evidence of “design”, which is what would have happened in this case.

Despite the facts that the strings themselves qualify as “dFSCI”, you cannot claim “design” unless you know what the designer’s intentions were.

Well, now it is more clear.

What can I say? You are apparently elaborating your own theory of Intelligent Design, and you are entitled to that.

But your theory has nothing in common with mine, or with anyone else’s here.

So, if we want to discuss my theory, we should discuss it, and not confound it with yours.

In my theory, there is nothing like what you describe.

In my theory, an object is designed if, and only if, the arrangement of the information in the object is given by a conscious intelligent being, who purposefully outputs his conscious representations to the object.

In my theory, the only intention of the designer is to design the object according to his conscious representations.

In my theory, there is a property that can be assessed in some digital strings, and not in others, that I call dFSCI. It has nothing to do with the origin of the information in the string.

In my theory, there is a procedure to assess if that property is present or not.

In my theory, a connection is empirically observed between the presence of dFSCI and a design origin of the information. The connection is: if dFSCI is present, the information in the object has a design origin. If dFSCI is not affirmed in the string, the origin can be either design or a random system or a necessity mechanism. That is the same as saying that dFSCI is an indicator of a design origin with 100% specificity and low sensitivity. Obviously, the measurement of dFSCI’s specificity is made with strings whose true origin can be independently known.
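The “100% specificity and low sensitivity” claim uses the standard diagnostic-test definitions, which can be stated in a few lines of code. The counts below are purely illustrative, not data from the thread:

```python
def specificity(tn, fp):
    # true-negative rate: fraction of truly non-designed strings
    # that the indicator correctly fails to flag
    return tn / (tn + fp)

def sensitivity(tp, fn):
    # true-positive rate: fraction of truly designed strings
    # that the indicator correctly flags
    return tp / (tp + fn)

# Illustrative counts: zero false positives (the claimed 100%
# specificity), many false negatives (the conceded low sensitivity).
print(specificity(tn=100, fp=0))  # 1.0
print(sensitivity(tp=20, fn=80))  # 0.2
```

On these definitions, tolerating false negatives costs sensitivity but leaves the specificity claim untouched, which is exactly the trade-off being asserted.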

In my theory, dFSCI can be used to infer design for strings whose true origin is not known, like biological strings. That is the essence of ID theory for biological information.

This is my theory, and I believe it is essentially what the ID theory is about.

In my theory, both “the ‘information’ to make corn bug resistant and also the code to make it 12′ tall” would qualify as dFSCI if they have the requirements. That has nothing to do with the intentions of the designer, which we obviously do not know.

In my theory, a separate functional complexity would be computed for each of those two codes (and functions), and a separate judgement about dFSCI would be given for each of them.

In my theory, intent is certainly part of the design process, but information or inference about the intents of the designer is not part of the assessment of dFSCI, and therefore of the design inference for the observed object.

I see you talk a lot of honesty. Do you accept that you don’t know of any instances where the dFSCI design relationship has been demonstrated outside of examples put forward in debates like this? I have asked you several times to provide an example and on every occasion you have failed to do this. Remember that you yourself said that demonstrating this relationship requires the observer not to know the origin.

I don’t know what you want from me.

The connection between CSI and design has been repeatedly observed by all those who have developed the ID theory.

As I have formalized a special definition and procedure here in this blog, which IMO allows better clarity of the concept and an easier application to digital strings, I have challenged you to verify my statement that dFSCI can easily be observed to have 100% specificity for design in all possible settings where the origin is independently known. We have tried together to verify that affirmation of mine, with results that you know as well as I do.

The connection between CSI and design is universally affirmed in all ID literature. That’s very natural, because the definition of CSI and its role in design detection are a product of ID theory. Obviously, much less interest in the subject can be found in the official literature.

dFSCI is my personal form of CSI, with some personal aspects, although it is essentially a subset of CSI. Everybody here is well aware that it is extremely easy to distinguish designed strings from randomly generated strings by the use of dFSCI. Those who understand well the concept are also aware that it is easy also to exclude strings that can have a necessity origin.

You at TSZ have expressed many doubts about dFSCI, its definition, its procedure, its results, its specificity for design. I have challenged you to test all that here, and we have done it.

Now, you want “instances where the dFSCI design relationship has been demonstrated outside of examples put forward in debates like this”. Do you want papers in Nature about dFSCI specificity? I cannot give them. But you certainly know that.

The specificity of dFSCI for design can be tested everywhere, in every setting, just by respecting a few simple rules. I have shown you how to do that.

As KF always says, anyone who recognizes language as designed, every day, is using the same principle, although certainly less formally.

You will say that Bayesian inference can give the same results. I have nothing against that. If Bayesian inference can show that a string is designed, so be it. My non-Bayesian method certainly can.

The problem of the final inference for biological strings remains the same. I think we have clarified the terms of the problem well enough, and why, IMO, you choose one way, and why, in your opinion, I choose another way. The simple truth is, we have different views, different cognitive approaches, and certainly different commitments. All that can influence our choices, but it is our responsibility to make our choice as faithful to truth as we feel possible.

I hope that answers your request. Your need to read an assessment of dFSCI in Nature is only your personal need. I am happy that I do not need that, because I am afraid I would have to wait some time.

If, on the other hand, your strange request is only a new form of appeal to authority and to conformist thought from a “skeptic”, I have already commented on that.

Truth is truth, wherever we observe it, or debate it, or try to catch it. As I have said, I love this place, because this is a place where we are seeking truth together.

Finally, it is obvious that the testing requires the observer not to know the origin. That’s exactly what we have done here.

If you are worried that I could guess the origin from the person who proposes the string, we can repeat the experiment. You can get strings from people here or from people at TSZ (you decide how; at some e-mail address, for instance), and then give them to me anonymously, here, in a single document. I will assess dFSCI for them by my procedure, and then you will verify the results. We can do that whenever you like.

I see you talk a lot of honesty. Do you accept that you don’t know of any instances where the dFSCI design relationship has been demonstrated outside of examples put forward in debates like this? I have asked you several times to provide an example and on every occasion you have failed to do this. Remember that you yourself said that demonstrating this relationship requires the observer not to know the origin.

I also find this remark very strange. gpuccio has never shied away from the fact that we don’t have complete knowledge about the origin. That’s why his argument is based upon inference, and he’s never claimed otherwise.

Now your position is really very weak mark, if you’re reduced to, at this time we don’t know the origin, therefore we never will. How anti-science is that? 😉

It’s my understanding that we have the technology to encode English text into DNA. Now all gpuccio has to do is find a case where this has been done where the sequence is of sufficient complexity, and then where will your argument stand?

I predict that we will be able to find encoded in the DNA of a living organism information for which we can demonstrate the origin has an intelligent cause. If we can’t already do that, it’s only a matter of time.

How can we be sure that the M. mycoides is synthetic? When recreating it, the team added a number of non-functional “watermarks” to the genome, making it distinct from the wild version.

Our problems with gpuccio are mainly in trying to understand what he means.

Well maybe you should find out before launching unfounded accusations and then cheering your success. He’s shown every willingness to clarify.

Like I said. He withstood the logical challenge. Then he withstood the empirical challenge. I predict that it won’t be too long before you all will be crowing about his defeat over there at TSZ and accusing him of intellectual cowardice.

Everybody here is well aware that it is extremely easy to distinguish designed strings from randomly generated strings by the use of dFSCI.

The specificity of dFSCI for design can be tested everywhere, in every setting, just by respecting a few simple rules. I have shown you how to do that.

Your need to read an assessment of dFSCI on Nature is only your personal need. I am happy that I do not need that, because I am afraid I would have to wait some time.

Gpuccio seems to be saying he can distinguish between a sequence of symbols that are not random from a sequence of randomly generated symbols. Leaving aside the impression that this is yet to be convincingly demonstrated, gpuccio seems to say that he does not need to apply his method to Nature (a biological example, I assume he means) presumably because all life is designed anyway and he doesn’t need to check. I am left wondering what it is that CSI and dFSCI (is that a typo; should it be dFCSI?) can achieve as concepts. Accepting for the sake of argument that there is a universal way (gpuccio’s method) to tell if a string of numbers were either random or non-random, how does this advance ID? What am I missing?
_____F/N: Mr Fox, it is you who have the variant. dFSCI stands for digitally coded, functionally specific complex information, and means a digital string of sufficient length that in a given context (e.g. 500 bits for our solar system), blind chance and/or mechanical necessity would be maximally unlikely to hit on it or the functional equivalent. Empirically, every case of dFSCI of known origin — e.g. the contents of libraries, files on computers, posts in this thread — is the product of intelligent design. On both inductive generalisation and the needle-in-the-haystack analysis, we have good reason to infer that such dFSCI is an empirically reliable and analytically plausible sign of design as best causal explanation. Why this is controversial is not because it lacks such a warrant as just outlined, but because it assigns DNA and related entities to design, which is a challenge to an established school of thought. Never mind that the members of that school, on being pressed, have to admit that they do not have a cogent, empirically well-warranted explanation of OOL, the first biologically relevant case. KF

Alan, The simplest living organisms exhibit dFSCI. And what they achieve as concepts is that they allow for objective design detection. And as anyone who has actually conducted an investigation knows, the mere determination of design changes the investigation and changes the way we look at it.

You do realize that we investigate a pile of rocks differently than we do a pile of artifacts (made from rocks), and that we investigate a crime differently than we investigate natural phenomena?

gpuccio seems to say that he does not need to apply his method to Nature (a biological example, I assume he means) presumably because all life is designed anyway and he doesn’t need to check.

Do you have any actual quotes from gpuccio saying he doesn’t need to apply dFSCI to biological examples, or are you just in the mood to make things up?

In fact, that seems to be the exact opposite of what he does say. Did you miss his cites of the Durston paper? How about his continual refrain about protein domains, to which no one over at TSZ can ever bring themselves to bear upon and actually debate?

You keep misunderstanding that computer programming languages as seen by the programmer do not exist in text format at execution time.

lol. Listen, you are very sloppy with your language. No wonder gpuccio has a problem understanding you. I have problems understanding you, and English is my first language. Is English your first language?

I have programmed in interpreted languages, such as Ruby and Perl, in compiled languages, such as C and Java, and in assembly language (in ASA, among others).

I am not misunderstanding what goes on when computers execute instructions.

I can write code in Ruby and save it to a file. I can launch the Ruby interpreter and it will read and process the file. The file and the ASCII text it contains do not disappear just because it is being executed. The file continues to exist along with the instructions in it, in text format, at execution time.

So maybe you meant to say something else, because that was a really dumb statement.

Toronto:

…so claiming 80 bits of complexity for 10 characters is misleading as the tokenized executable code might be smaller or larger.

ok, you got me here. What is ‘tokenized executable code’?

My point was, and this seems to be a point lost on you, that the text has to be turned into ‘bits’. Why must it be turned into bits? Maybe I don’t misunderstand after all.
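As a minimal illustration of the point above, the 80-bits-for-10-characters figure comes out directly, assuming plain 8-bit ASCII encoding (the string here is made up for the example):

```python
# A minimal sketch, assuming plain 8-bit ASCII encoding.
# The string itself is made up for the example.
text = "0123456789"                    # any 10 ASCII characters
bits = len(text.encode("ascii")) * 8   # 8 bits per character
print(bits)  # 80
```

Whether the tokenized or compiled form is smaller or larger is a separate question; the 80-bit figure simply describes the source text itself.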

Toronto:

For instance, the “return’ command in “C” does not necessarily generate any more code than typing “ret” in an assembler language.

And? My examples were def and end, two more three-letter words. Even assembly language typically uses three characters or more. At some point the instructions have to be converted to bits that are meaningful. I challenge you to show a 16-bit ‘function.’

Toronto:

Like this: Memory[x] = 0xF2; Memory[x+1] = 0xAE;

Those 16 bits are the machine code for “REPNE scansb” for an x86 CPU which will search through sequential memory, i.e. a “string”, until a match is found.

That’s not machine code. How did you arrive at a figure of 16 bits?

Toronto:

As an example, the function of ‘XOR’ing two registers takes one byte (8 bits), in most micros.

That’s a function that already exists. How many bits did it take to define it?

As I said over in the Upright BiPed thread, I’m willing to take you seriously if you’re willing to carry on a serious discussion. We probably all are. (Except Joe.)

Because Joe knows, and history confirms, there ain’t any such thing as carrying on a serious discussion with that ilk. OTOH Mung and the rest are hopeful that the most dense material known to exist- the skull of a materialist- can be breached and then reasoned with.

It would seem that the only suggested use for dFSCI would be to distinguish biological sequences that could be the result of incremental evolution from those that could not.

As usual, we are left to wonder what on earth you mean. How could dFSCI possibly distinguish between the two?

As gpuccio has explained many times, a biological sequence either exhibits dFSCI or it doesn’t.

dFSCI cannot, even in principle, distinguish between two biological sequences that exhibit dFSCI.

So what that means is, you need to identify a biological sequence that exhibits dFSCI, get gpuccio to agree that it exhibits dFSCI, then show how it had its origin in something other than design.

That would, to say the least, weaken the design inference for biological strings that exhibit dFSCI, wouldn’t you agree?

But you won’t. You can’t. Neither can any of the other nay-sayers over there at TSZ. At least keiths knows this, which is why he clings, against all evidence to the contrary, to the assertion that dFSCI is circular.

Consider a lengthy gene with a known function. The known function means that it’s specified, and the length means that the dFSI is greater than the threshold, so it meets those criteria for the presence of dFSCI.

The circularity is in the remaining criterion:

This from the person who stated that the criteria were irrelevant to the definition.

Are you retracting your earlier claim?

keiths:

The circularity is still there. I just got tired of pointing it out again and again.

But you didn’t get so tired that you could not change your argument to depend on the criteria, when your earlier argument was that the criteria didn’t matter?

Are you retracting your earlier claim?

Why is it that now, all of a sudden, the criteria matter?

Did you think we had forgotten?

keiths:

The addition of qualifiers doesn’t (and can’t) save these concepts from circularity. It’s a simple matter of logic, and it doesn’t depend on what the qualifiers are.

Carbon is the only element with the right electron shell configuration to form long chains (Silicon has it to some extent but not sufficient to form chains like this).

Long chains of what? Chains like what?

Polymers capable of long term information storage?

Did you not learn this at school?

No.

I don’t see the point of this comment.

You need to learn to change your framework of thinking.

Carbon. Life. Fine Tuning.

There is no way that any natural process could have known that carbon was the best choice. The best that you have to offer is pure dumb luck. Coincidence. Chance. Indistinguishable from a miracle. Not science. See?

No, what I actually said was that the addition of qualifiers can’t save dFSCI from circularity:

And the difference between qualifiers and criteria is? You don’t tell us. Why not?

As I pointed out earlier (did you ever respond?), it is precisely those “qualifiers” that define the meaning of dFSCI. They are not something “added to” the definition. They are elements of the definition.

You can’t just choose to ignore them and then claim the definition is circular:

…it doesn’t depend on what the qualifiers are.

Yes, it does. Even Mark Frank acknowledged it does.

Exhibit A (keiths):

The known function means that it’s specified, and the length means that the dFSI is greater than the threshold, so it meets those criteria for the presence of dFSCI.

The circularity is in the remaining criterion:

So it does matter what the qualifiers are. Of course it matters. They constitute the definition under discussion. Even you admit this, finally.

The context is there for everyone to see. Your comment was made in response to the following.

Exhibit B (mark frank):

If dFSCI was simply a synonym for “no good natural explanation” then the case for circularity would be obviously true. But it [dFSCI] incorporates other features (as do its cousins CSI and FSCI). So for example dFSCI incorporates attributes such as digital, functional and not compressible – while CSI (in its most recent definition) includes the attribute compressible.

Your response:

The addition of qualifiers doesn’t (and can’t) save these concepts from circularity.

That’s simply false. And it was exposed as false. So you ought, by now, to know better. Are you going to continue the charade?

Gpuccio seems to be saying he can distinguish between a sequence of symbols that are not random from a sequence of randomly generated symbols. Leaving aside the impression that this is yet to be convincingly demonstrated, gpuccio seems to say that he does not need to apply his method to Nature (a biological example, I assume he means) presumably because all life is designed anyway and he doesn’t need to check. I am left wondering what it is that CSI and dFSCI (is that a typo; should it be dFCSI?) can achieve as concepts. Accepting for the sake of argument that there is a universal way (gpuccio’s method) to tell if a string of numbers were either random or non-random, how does this advance ID? What am I missing?

This is just a misunderstanding. My fault, probably.

I was answering Mark about his request for “tests” of dFSCI outside blogs. So I wrote:

“Now, you want “instances where the dFSCI design relationship has been demonstrated outside of examples put forward in debates like this”. Do you want papers on Nature about dFSCI specificity? I cannot give them. But you certainly know that.”

And:

“I hope that answers your request. Your need to read an assessment of dFSCI on Nature is only your personal need. I am happy that I do not need that, because I am afraid I would have to wait some time.”

I meant “Nature”, the scientific journal, which I quote just as an example of something that a “skeptic” like Mark would probably consider as “authority”.

Having clarified, I hope, the misunderstanding, here are some other clarifications about more substantial issues:

Gpuccio seems to be saying he can distinguish between a sequence of symbols that are not random from a sequence of randomly generated symbols.

Not exactly. I have been saying that I can distinguish between a designed sequence of symbols (provided I can recognize and define a function for it, and its functional information is complex enough) and a non-designed sequence of symbols.

Leaving aside the impression that this is yet to be convincingly demonstrated,

We have demonstrated it here. We can do it again. We can do it anywhere. Anyone can do it.

gpuccio seems to say that he does not need to apply his method to Nature (a biological example, I assume he means) presumably because all life is designed anyway and he doesn’t need to check.

That would be a very strange concept indeed. I hope it is clear now that I never meant such a thing.

I am left wondering what it is that CSI and dFSCI (is that a typo; should it be dFCSI?) can achieve as concepts. Accepting for the sake of argument that there is a universal way (gpuccio’s method) to tell if a string of numbers were either random or non-random, how does this advance ID? What am I missing?

It seems you have missed practically all my other posts here in the last week.

b) The concept is: dFSCI is empirically a reliable marker of design (100% specificity, but low sensitivity) in all cases where it can be tested, because the true origin of a string can be independently known.

c) That is enough to convince us not-too-smart IDers that it is a good marker of design also in cases where the true origin cannot be independently known, and that it can therefore be used to infer design in those cases.

Just to pretend that you still have an argument for circularity, you have started again inventing things I have never said. That speaks volumes about your honesty.

The circularity is in the remaining criterion:
1. Gpuccio thinks it couldn’t have evolved because it exhibits dFSCI.

A simple, explicit, stupid lie. Whoever said such a thing?

I never said that “I think it couldn’t have evolved because it exhibits dfSCI”. I have always said (hundreds of times, I suppose) that:

a) I affirm dFSCI if a function can be defined for the string that is complex enough

AND

b) No explicit explanation based on necessity mechanisms is available.

I have said many times that I am not interested in mere “possibilities”, or in statements such as “It could have evolved” or “It couldn’t have evolved”. I feel no need to affirm or deny such statements: they are simply not science.

2. Why does it exhibit dFSCI? Because gpuccio thinks it couldn’t have evolved (or been produced by any other ‘necessity mechanism’).

Another lie (or rather, the same lie again). As even a small child would understand, at this point:

It exhibits dFSCI because:

a) A function has been defined for the string, its functional complexity measured, and the complexity is higher than a threshold appropriate for the system we are considering.

b) No necessity mechanism is known that can explain the emergence of the information for that function.

Quite different, isn’t it?

If you excise the circularity from dFSCI, all you’re left with is “has a specified function” and “couldn’t have been produced by pure random variation.” Well, everyone agrees that the gene for hemoglobin, for example, has a specified function and couldn’t have been produced by pure RV. The question isn’t being asked by either side.

Lies. What we “are left” with is:

a) Has a specified function

b) Complex enough to exclude RV as an explanation

c) No other explanation based on necessity mechanisms, or necessity + RV, has been shown.

So, please show how the gene for hemoglobin can emerge by RV + NS. Or let me infer design for it, and keep your convictions. But don’t say it is useless, because it is a lie. It is very useful, because it provides me (and all reasonable people) with a credible explanation for a very important question: “How can the emergence of biological information, such as the gene for hemoglobin, be scientifically explained?” You know, a lot of people are asking that question.

This is definitely a question I am asking.

Consider a lengthy gene X with a known function:
1. Gpuccio is unaware of a ‘necessity mechanism’ that could have produced X.
2. Therefore X is designed.

Lie. Why have you cut dFSCI from my argument? Just to lie? My argument is about dFSCI. And the main part of the dFSCI definition is:

“A function can be defined for the string, and the functional information linked to that function is complex”

Obviously, if you cut that part from my argument, the part that is necessary to empirically connect the definition to a design origin, what remains is simply an argument from ignorance: your ignorance (spontaneous or more likely intentional) of my argument.

Irreducible complexity isn’t circular, but it’s wrong. dFSCI is circular, and its circularity renders it useless.

So now, answer these points, and apologize for the lies. Or just go on lying. Your choice.

“dFSCI” is not a tool that is useful for differentiating “all at once design” from “a bit at a time” evolution.

You are always the king of misunderstanding, and of unwarranted enthusiasm for the consequences of that misunderstanding.

As many times discussed, it is of absolutely no relevance if the transition happens “all at once” or “a bit at a time”. The probabilistic barriers are the same, if NS or IS do not act.

So, the only question would be: “Does it happen one bit at a time, and is each state generated by one bit transition capable of conferring reproductive advantage? Is each step selected and expanded?”

So, please, show that that is the case. And avoid false statements.

Design need not be “all at once”. And unguided evolution is an explanation not just if it happens “a bit at a time”, but if, and only if, all the transitions and possible selections are explicitly defined, the RV component probabilistically credible, and the NS component causally verified.

When we take the trouble (of which there is a lot) to test every possible one step variation on a functional sequence, we find that most point mutations have no effect at all. Or at least that is the finding of recent research. What does that say about needles and haystacks?

I suppose it says that you can look in the haystack to find the needle, provided the mutation is neutral. And that your probabilities to find the needle are exactly the probabilities of finding a needle in a haystack.

A test for specificity is made in controlled situations. Often, for example in medicine, it is made by retrospectively applying some diagnostic procedures only to cases where the gold standard too has already been applied. You get perfectly good two-by-two tables that way, to assess sensitivity and specificity.

That’s exactly what we have done here. That is perfectly scientific. This is our lab.

I have never said, as you seem to imply for reasons known only to you, that the “true origin” of the string must become known “after”. I have always talked about strings “whose origin we already know”. IOWs, the gold standard is known.

The fact that the person who assesses dFSCI is kept blind is simply a methodological tool to ensure that his judgement is not biased.

This is how science is currently done. I believe you have serious epistemological problems about that.

By the way, have you given an assessment about my three strings? With two different threads active at TSZ, it is easy to miss things. I would encourage everyone there to post in the last thread, now.

AF: I have annotated your remark at 400, on clarifying what dFSCI means. And yes, the main usage is dFSCI etc [such as FSCO/I which brings in organisation that is reducible to info, digital info being WLOG as anything informational can be reduced to digits by A/D, but where EXPLICIT code is significant because it can be directly seen to be a code], so you have been following a variant mainly used by objectors. KF

The two arguments are logically identical. The only difference is that the first one unnecessarily mentions dFSCI.

Tons of lies:

a) You have not answered any of my points about your stupid arguments.

b) The two arguments are logically identical only in your false version. You have completely deleted the part where dFSCI is tested empirically as a reliable indicator of design.

As I don’t think you are stupid, and you must certainly be aware that we have been debating exactly that point, in the same thread where you posted, what remains after we apply the “Keiths explanatory filter”?

Only lies.

But maybe that is circular. After all, I already knew you were a liar, and Mark would not be happy…

It is cheeky to challenge what is presumably your area of expertise but I am sure you are wrong. You must know that sensitivity and specificity should be measured throughout the process – from creating a test in the laboratory through to its use in the field – because the values may change significantly when you get to the field.

That was not exactly my idea. Take a hospital lab, for example. It is a field, isn’t it? It deals with true patients. We deal with true strings. A lab can be part of the field.

And yet, you can work in a controlled context, such as:

The lab diagnoses a disease for one year in the traditional way, with the traditional set of tools that is considered “the gold standard”. The results are stored, and all clinical decisions are taken according to the gold standard.

At the same time, the lab runs a new test on all those patients, let’s say one that is less expensive or less invasive. The results are stored. If the research is serious, the assessment of the results of the new test will be done blind, without any awareness of the results of the gold standard.

Then, the two sets of results are combined in a two-by-two table, and specificity and sensitivity (and possibly PPV and NPV) are computed for the new test.

That is a perfectly valid scientific methodology. That’s exactly what we have done here, or what we can do anytime, with all the necessary safeguards.

We are dealing with strings, information, design. We are not dealing with a rare disease. There is no need for us to go testing dFSCI in Africa. What we are doing here is perfectly correct.
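The two-by-two comparison described above can be sketched as follows. The counts are hypothetical, chosen only to illustrate how sensitivity, specificity, PPV and NPV are computed from such a table:

```python
# Hypothetical counts from comparing a new test against the gold standard.
# TP/FP/FN/TN are made-up numbers for illustration.
TP, FP, FN, TN = 40, 0, 10, 50

sensitivity = TP / (TP + FN)   # fraction of true positives detected
specificity = TN / (TN + FP)   # fraction of true negatives detected
ppv = TP / (TP + FP)           # positive predictive value
npv = TN / (TN + FN)           # negative predictive value

print(sensitivity, specificity, ppv, npv)  # 0.8 1.0 1.0 0.833...
```

Note that with zero false positives the table yields 100% specificity and PPV despite imperfect sensitivity, which is the profile being claimed for dFSCI in this thread.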

To put it another way I am asking for an example of someone assessing/demonstrating the dFSCI procedure as described by you outside the “laboratory” context of debates. Why do you fight so hard? All you have to do is give me one of your thousands of examples.

I am not fighting. I was trying to show you why your objections make no sense.

Now, let’s say I have demonstrated the same thing that I have demonstrated here, at my home, with my brother. Would that be “out of the lab”?

You must be desperate. Your argument of circularity has failed. You have failed in the challenge about dFSCI. Your arguments now are completely bogus. I don’t think I can follow you any more on that path.

I left you free to decide. If you could not see any function, you could simply ask. Here is my answer:

– The first one is functional if it is a meaningful English phrase, conveying a meaningful concept.

– I have no function for the second one, unless you can find one.

– The third one is highly compressible.

2) I know the origin of the first one – so that is no good. Remember you said it would not be necessary to imagine I didn’t know the origin.

I don’t know how you know it. But OK, let’s try again:

“Raffaele Attilio Amedeo Schipa is born in Lecce, fourth in a modest family (his father Luigi is a custom officer) in the working-class neighbourhood called Le Scalze, last days of 1888, though recorded January 2, 1889 for conscription reasons.

His supernatural vocal gift is immediately noticed by his primary school teacher Giovanni Albani, then by all Lecce, wich actually always considered him “propheta in patria”.

The arrival from Neaples (1902) of bishop Gennaro Trama, real talent scout of those times, offers the young talent – whose nickname is now “Titu” (tiny) – the chance to enter the local Seminary, where he will study singing and composition.”

The function: a meaningful English passage, describing correctly events in the life of Tito Schipa.

3) The second could have a natural or a designed origin and may or may not be complex as it could be an encrypted version of a naturally generated string.

I will take that as a negative assessment of dFSCI. If you cannot explicitly define a function, and measure its complexity, you are not assessing dFSCI as present. As the origin of the string was random, as explained to Mung, this is a true negative.

4) The third can easily be created by a natural mechanism.

OK, then it does not exhibit dFSCI. Which is correct. And, as I had designed the string, that is a false negative.

So, to sum up: you did good work. In cases 2 and 3, you essentially agree with Mung, who is a very smart person 🙂 .

In the first case, you just avoided the challenge with a stupid excuse.

Now, for a moment, be really sincere: you really could not assess dFSCI as present in that case because “I had failed to define the function for it”? You really could see no function in it by your own intelligence?

Or, you really could not assess dFSCI as present in that case because “you knew its origin”? So, if you had never read that phrase in advance, would your judgement really be different?

You did not distinguish between them, despite you saying it’s “extremely easy” to do so.

You seem to need things repeated hundreds of times.

What have I said here hundreds of times?

No, don’t use your supreme intelligence to answer, I will help you:

That dFSCI is a diagnostic tool for designed objects with 100% specificity and low sensitivity.

Now, do your homework. Go to Wikipedia, and study well what specificity and sensitivity mean. You will spend your time well, better than writing foolish posts here. And you will acquire an important concept, a Bayesian one, so Mark will be happy too.

Gpuccio’s problem is that the length of the sequence is irrelevant if it accumulates.

And your problem is that it does not accumulate.

But having admitted that one or two step accumulations are naturalistically possible, the only claim he is making is that longer accumulations do not happen.

Yes.

He bases this on the length rather than any evidence of an alternative history.

Now, let’s think correctly. It is you who base your explanation on the accumulation of functional changes. It is you who have the burden to show that functional changes accumulate. That’s science. The reverse is only cheating.

Moreover, as said many times, I base my point not on the length, but on two very strong arguments:

a) There is no logical reason why complex functions should be the accumulation of very simple functions.

b) There is no example of that “natural history” that you always invoke, where those simple functional changes accumulate to create a complex function.

gpuccio does not allow us to use sequences evolved in a genetic algorithm as test cases, and sequences made up by discussants here don’t seem to be working as test cases. gpuccio wants to know exactly how they were made as part of the dFCSI assessment.

But that is simply not true. I don’t mean the first part, which is perfectly true. I mean the part about “sequences made up by discussants here don’t seem to be working as test cases”.

Why do you say that? I hope you were simply distracted. I cannot doubt your honesty.

Sequences made up by discussants here are perfectly OK for me. I have assessed dFSCI for each of them, and I have provided three myself, for which Mung, and in part Mark, have correctly assessed dFSCI.

And why do you say that:

“gpuccio wants to know exactly how they were made as part of the dFCSI assessment”?

That is completely false too. Please, read again all the previous exchanges. The only supplementary information I have asked for, from both Mark and Mung, was about how they were defining the function. Never about “how the string was made”.

F/N: Wiki on tests and reliability (as signs), Positive Predictive Value article here. dFSCI and related criteria have high specificity and positive predictive value, as intended, with the gold standard being tests on cases of known origin, e.g. 500-bit to 1,000 bit or more strings of discrete symbols as may be found on library shelf contents, threads such as this one, the net as a whole, as well as in digital computers and similar technology. It is prone to false negatives, i.e. to assigning to chance and/or mechanical necessity, but that is not a problem for what it is intended to achieve, quite high confidence in cases where it does rule positively. KF

That’s patently false. A, B, and C are symbols that represent the qualifiers, they are not themselves the qualifiers. A, B, and C don’t qualify anything. And unless and until you can tell us what they represent, they are meaningless symbols.

I asked you what they were, specifically. What are “the qualifiers” that A, B and C represent?

Does the ‘A’ represent the F in dFSCI, or something else? Why not just use an ‘F’?

And no, I haven’t figured out what A, B, or C is supposed to mean. That’s why I asked. Until you tell us what they represent they are meaningless, as is your argument.

And did you ever clarify your use of ‘qualifier’ in the earlier thread and now your use of ‘criteria’? Are the terms synonymous in the way you are using them?

We’re talking about digital strings. A virus is only one example of a digital string. I expect you to understand that from the context. Do I really have to spell out every little detail of the argument every time?

The digital string may or may not be executable. It may execute on one platform but not another. There is no necessity that a given digital string be executable and there is no necessity that it execute on a given platform and there is no necessity that it exhibits the behavior of a computer virus or worm.

Excuse me, but as I have said, I cannot follow you in what appears to me as mere paranoia. So, I will respect your strange “last level” opinions about this problem, but I don’t agree with anything that you say.

* It is well defined how to do the complexity calculation. The space of all possible strings is clear and a uniform probability distribution of each possible string is assumed.

And so? That is exactly the theory of dFSCI. Obviously, if in some situations there were reasons not to assume a uniform distribution, I would take them into account. In almost all cases, there is no such reason.
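Under the uniform-distribution assumption just mentioned, the complexity calculation reduces to a one-liner. The numbers below (a 150-symbol string over a 20-symbol alphabet) are illustrative only, not taken from any specific case in the thread:

```python
import math

# Under a uniform distribution, each string of length L over an alphabet
# of size A has probability A**-L, so its complexity is L * log2(A) bits.
L = 150   # illustrative string length
A = 20    # illustrative alphabet size (e.g. amino acids)

bits = L * math.log2(A)
print(round(bits, 1))  # 648.3
```

If a non-uniform distribution were warranted in a particular case, the probability of each string would change and the bit value would have to be computed from that distribution instead.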

The participants are selecting examples with an agenda.

My only agenda was to show how dFSCI works, and that it has 100% specificity for design. That was my explicit agenda from the beginning, that is what I have done. Your agenda, I suppose, was to show through my challenge that my concept and procedure were wrong. You failed, so now the challenge is not enough for you, and you want me to go to Africa. No, thank you.

So you and Mung are presenting examples which are clearly designed and clearly meet your criteria for dFSCI.

I have presented examples of designed strings with dFSCI, of random strings, and of designed strings without dFSCI. Practically all the possible range. You have presented examples that met my criteria for dFSCI (in the negative). Any possible string meets my criteria for dFSCI, because for any possible string I can assess it as present or not present.

We try to present more difficult ones, and indeed you can’t tell whether they are designed, but then you dispute whether they have dFSCI on grounds such as unacceptable natural processes (no GA algorithms), or unacceptable functions (no post-specified lists, no “data” strings).

You presented strings that did not exhibit dFSCI. According to the definition and the procedure. I never for a moment thought that they could exhibit dFSCI, and certainly not because you were proposing them, but because it was obvious that they did not have it.

It is possible to arrange things so the observer has an unrealistic lack of knowledge about context and origins (although it is quite hard to do this).

I don’t understand what you mean. Your insistence on possible bias from indirect knowledge of the origin is ridiculous, as it was ridiculous on your part to deny dFSCI to the feedback definition because “you knew the origin”. The feedback definition obviously exhibited dFSCI. You have lost any connection with reality.

So really if the case is to be convincing then there should be examples of the dFSCI/design relationship outside the debate context. This can still be controlled, just as assessing diagnostic tests in a hospital can be controlled. In fact I am asking that the examples conform to the control standards you yourself set i.e. an observer who had no knowledge of the origin deduced dFSCI and then later found the string was designed.

This is your paranoia, and nothing else. Keep it, if you want. For me, the case of dFSCI specificity is absolutely convincing. You are not convinced? Your choice. Why am I not surprised?

2) This sounds a bit pompous but I want to be reassured about your ability to be self-critical, as I suggest Joe and I have been over the issue of circularity. The ability to look at your own ideas critically and accept you are wrong is rare. I do not expect any of your colleagues to do it. But I hoped you might be an exception. If the participants are not prepared to be self-critical then debates are bound to become sterile as no one can change their opinion.

This is not only pompous, but false, unfair, and ridiculous. I am satisfied with my self-criticism. You, think what you like. I could say a lot of things about your self-criticism, or complete lack of it, but I am not used to criticizing people’s faculties, only their arguments or their behaviour.

It seems to me obvious now that you have never observed an example of the dFSCI relationship being confirmed outside the debate context. I have asked for a single example many, many times and even given you the form of words such an example would take. You are honest so you haven’t made one up, but you also haven’t provided the example. Instead you have raised almost every objection I can imagine. I think this must be because it would be embarrassing for you to confess that despite your claim that the principle has been verified thousands and thousands of times you actually have never observed it being verified outside the debate context.

I don’t understand what you want. I think you only want to evade reality, and your defeat in the previous discussion.

My concept of dFSCI was created, refined, tested and discussed in what you call “the debate”. For me, there is nothing “outside the debate context”. The debate is all, and it is perfectly fine for me. Again, you think and believe what you like. But don’t tell me what I should do.

And I am not embarrassed, and I have nothing to confess. I proposed a definition and a procedure, and I entered a challenge to show you and others that both worked. I have done it. I can do it again. I am very happy with that.

Obviously you can now go and make up an example with your brother but this would share the characteristics of a debate context and would not count as self-criticism.

All my life is a debate context. I am proud of that. As I am proud of my self-criticism. I really have nothing else to add. Please, out of simple courtesy, don’t repeat the same points again, at least with me. You have said what you liked, and I have said what I liked. Try something else.

“mecA is responsible for resistance to methicillin and other β-lactam antibiotics. After acquisition of mecA, the gene must be integrated and localized in the S. aureus chromosome.[27] mecA encodes penicillin-binding protein 2a (PBP2a), which differs from other penicillin-binding proteins as its active site does not bind methicillin or other β-lactam antibiotics.[27] As such, PBP2a can continue to catalyze the transpeptidation reaction required for peptidoglycan cross-linking, enabling cell wall synthesis in the presence of antibiotics. As a consequence of the inability of PBP2a to interact with β-lactam moieties, acquisition of mecA confers resistance to all β-lactam antibiotics in addition to methicillin.[27]
mecA is under the control of two regulatory genes, mecI and mecR1. MecI is usually bound to the mecA promoter and functions as a repressor.[26][28] In the presence of a β-lactam antibiotic, MecR1 initiates a signal transduction cascade that leads to transcriptional activation of mecA.[26][28] This is achieved by MecR1-mediated cleavage of MecI, which alleviates MecI repression.[26] mecA is further controlled by two co-repressors, BlaI and BlaR1. blaI and blaR1 are homologous to mecI and mecR1, respectively, and normally function as regulators of blaZ, which is responsible for penicillin resistance.[27][31] The DNA sequences bound by MecI and BlaI are identical;[27] therefore, BlaI can also bind the mecA operator to repress transcription of mecA.[31]”

From the paper “Origin and molecular evolution of the determinant of methicillin resistance in staphylococci.”

“Methicillin-resistant Staphylococcus aureus (MRSA) is one of the most important multidrug-resistant pathogens around the world. MRSA is generated when methicillin-susceptible S. aureus (MSSA) exogenously acquires a methicillin resistance gene, mecA, carried by a mobile genetic element, staphylococcal cassette chromosome mec (SCCmec), which is speculated to be transmissible across staphylococcal species. However, the origin/reservoir of the mecA gene has remained unclear. Finding the origin/reservoir of the mecA gene is important for understanding the evolution of MRSA. Moreover, it may contribute to more effective control measures for MRSA. Here we report on one of the animal-related Staphylococcus species, S. fleurettii, as the highly probable origin of the mecA gene. The mecA gene of S. fleurettii was found on the chromosome linked with the essential genes for the growth of staphylococci and was not associated with SCCmec. The mecA locus of the S. fleurettii chromosome has a sequence practically identical to that of the mecA-containing region (~12 kbp long) of SCCmec. Furthermore, by analyzing the corresponding gene loci (over 20 kbp in size) of S. sciuri and S. vitulinus, which evolved from a common ancestor with that of S. fleurettii, the speciation-related mecA gene homologues were identified, indicating that mecA of S. fleurettii descended from its ancestor and was not recently acquired. It is speculated that SCCmec came into form by adopting the S. fleurettii mecA gene and its surrounding chromosomal region. Our finding suggests that SCCmec was generated in Staphylococcus cells living in animals by acquiring the intrinsic mecA region of S. fleurettii, which is a commensal bacterium of animals.”

* Are there examples which meet your criteria for dFSCI which are not designed?

Never seen one. And you?

Are there any cases outside the debate environment (see my earlier comment) which meet your criteria for passing the dFSCI test at all?

Any meaningful string of language long enough. Any working software code long enough. They would all qualify as having dFSCI, and they would be designed. I am not sure if they would be “outside the debate” because, as I have said, that means nothing to me.

As it happens the first example does throw some light on some of the problems with dFSCI and the debate environment. Of course I can see that it was based on some English text which someone wrote and therefore it was designed. I can do this on good Bayesian grounds. But does it meet your criteria for passing the dFSCI test?

If you are referring to the feedback definition, or to the biography of Tito Schipa, the answer is yes.

The biggest problem is the clause that says the observer must have no knowledge of the origin.

To be precise, that is not a clause in the definition of dFSCI, but a methodological procedure to avoid bias during the testing. IOWs, one can assess dFSCI even if he knows the origin, if the assessment is done objectively, and verified by others. But assessing it in blind guarantees protection against cognitive bias.

While I was not previously aware of that particular bit of text (perhaps you wrote it) I am familiar with English text in general, I have a pretty good idea of how it gets created, and I know that you provided the text and are very capable of writing it yourself or finding a piece that someone else created. So I really have a very good idea of the origin – which is why I can deduce it is designed on Bayesian grounds.

And I need nothing of that to infer design on Fisherian grounds. All I need to know is that it is too unlikely to emerge in a random system, at least in this universe, and that there is no natural necessity mechanism that writes definitions of feedback, or biographies of Tito Schipa. So, believe me, I can live without your Bayesian inference.

This is not trivial. It explains why you cannot find examples of strings which pass the dFSCI test outside the debate environment.

This is not trivial: it is false.

Almost every string (except molecular strings) comes with a context that tells the observer a lot about its origin. In the debate environment it is possible with a great deal of effort to devise strings which have very little context and tell the observer very little about their origin, but I am not convinced it happens anywhere else (although even in the debate environment the fact that a debater with an agenda created or selected the string tells you something about the origin).

Let’s call that a Bayesian excuse for excluding a scientific design inference for biological information, against all evidence, and only because you don’t like the idea. What were we talking about, a few moments ago? Cognitive bias?

Of course, we could always test dFSCI by trying to imagine we didn’t know anything about the origin. But you yourself ruled that out – probably wisely as it is very hard to do that objectively.

As I have said, I have not ruled it out. I just prefer to do it in blind, because it is better methodology. Whatever you say, I have repeatedly tested dFSCI, and it works. But you can always deny truth, otherwise why would you call yourself a skeptic?

If such a detailed explanation were determined, per gpuccio’s definition, it would cease to have dFSCI. It wouldn’t be a false positive! Rather, the new knowledge would change our evaluation of dFSCI.

No. The only reason for dFSCI’s existence is that it is a specific tool to detect design. The existence of false positives for a correctly evaluated dFSCI would falsify the procedure, or at least its usefulness.

So, if necessity mechanisms are found that could not be anticipated at the time dFSCI was assessed as positive, using all the necessary criteria, that is a false positive, and a falsification of the validity of the procedure.

No, gpuccio declares that gpuccio would not do that — that once dFCSI is inferred, further clarification of the origins of the string does not cause dFCSI to be undeclared. If gpuccio did what you say, then the charge of circularity in the Design inference would be correct.

But gpuccio declares that once a string is inferred to have dFCSI, if sufficiently strong evidence that it arose by RV+NS is found, then it constitutes a false positive. And owing to that, gpuccio’s procedure is not circular. And in that case gpuccio’s inference of Design totally collapses. But gpuccio argues that this has never occurred: that no false positives have occurred.

It is when we use unknownness as a defining attribute of dFSCI, and then declare that because we have dFSCI we know the origin, that we run into trouble.

But I have never declared such a thing! I say that, if we have dFSCI, we infer design on the basis of its repeatedly observed empirical connection with design. I have never said that “because we have dFSCI we know the origin”.

Let’s say that you guys run into trouble whenever you invent circular variants of my non circular definition.

Of course, the way gpuccio limits the class of sequences eligible for the term dFSCI, no false positives will ever occur.

Thank you. That just means that my tool works well.

One cannot model a process that creates sequences in order to demonstrate the mathematical feasibility, because that would violate gpuccio’s definition.

That only violates logic. As I have said, you can model NS, if you want. The results will be trivial.

The problem is that you like to model NS with the parameters that are not of NS. That is simply cheating.

One cannot point to a gradual process for creating dFSCI, because the quantity of change is insufficient.

That’s not true. You can certainly deconstruct a protein into simple steps of change, each of them naturally selectable, that accumulate into the final protein. Just do that.

But suppose we do a little thought experiment. Let’s agree that gpuccio’s definition entails a threshold. Say 150 bits. That implies that 149 bits does not trigger the dFSCI indicator. The number is arbitrary, but gpuccio’s paradigm requires a threshold. Let’s just call it t. Now suppose that Lenski or someone starts with a sequence of length t-1, and after 20 years, observes a length of t or t+1. Suppose this happens in a natural setting. What happens to the definition of dFSCI?

Absolutely nothing. I have discussed that scenario many times. There would be no dFSCI in that transition, because you pass from t-1 to t through a simple change of 1 bit (or aminoacid).

The problem is that you have to explain how you get t-1, which is not an unrelated state, but one derived from t. It is a poor trick, tried many times by darwinists.

The dFSCI of a protein applies to a full transition from an unrelated state to the final state. That’s why I apply the concept to basic protein domains.
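The threshold logic in this exchange can be sketched as a simple boolean indicator: only the part of the sequence that must change in the transition is counted, and dFSCI is declared only when those bits reach the threshold. This is a sketch under stated assumptions; the 150-bit figure comes from the thought experiment above, and the function names are mine, not part of the original procedure:

```python
import math

THRESHOLD_BITS = 150  # the threshold value used in the thought experiment above

def transition_bits(changed_residues: int, alphabet_size: int = 20) -> float:
    """Functional complexity attributed to a transition: only the residues
    that must change are counted, not the conserved starting material."""
    return changed_residues * math.log2(alphabet_size)

def exhibits_dFSCI(changed_residues: int) -> bool:
    """Boolean dFSCI indicator for a single transition."""
    return transition_bits(changed_residues) >= THRESHOLD_BITS

print(exhibits_dFSCI(1))   # False: a one-residue step carries only ~4.3 bits
print(exhibits_dFSCI(35))  # True: 35 residues is ~151 bits, above the threshold
```

This makes the point of the reply concrete: a t-1 to t step of one residue never triggers the indicator, whatever the length of the starting material.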

It explains why you cannot find examples of strings which pass the dFSCI test outside the debate environment.

You’re going to need to be more clear about what you mean by ‘outside the debate environment.’

I posted a segment of Ruby code. That code, as I already stated, had been previously developed in a context that had nothing to do with testing the dFSCI concept.

I can direct you to plenty other instances of code that can be demonstrated to have been developed prior to this debate and ‘outside the debate environment.’

So your claim is a bit bizarre and likely false.

And we can take a string that was generated during the debate and remove it from the debate environment (whatever that means). What then of your objection?

You’ve come up with this challenge that is completely arbitrary and so poorly defined that you have no problem coming up with ad hoc rationalizations why examples provided to you don’t meet your criteria.

So you need to spell out, explicitly, what is acceptable and what is not acceptable.

I take it that any such string that gets posted by gpuccio or anyone else participating in the debate is no longer ‘outside the debate environment’ and is therefore disqualified.

I take it that nothing in any known human language can be used, because we know the origins of those languages. Likewise nothing in any computer language, or anything executable on a computer.

So, item 1. nothing that anyone can know anything about.

That pretty much rules out everything. You win.

Because the observers did not assess the virus for dFSCI and because they know an enormous amount about the origin of the string.

So?

Since when does the dFSCI depend on who assessed it? That’s pretty much the exact opposite of objective, isn’t it?

And since when does the dFSCI depend on how much or how little is known about the origin? Is that an inverse relationship? The more you know about the origin the less dFSCI there is?

Okay. The definition you had provided wasn’t clear, which is why so many people said it was circular. Normally, empirical tests can be repeated as many times as necessary.

I apologize for not being clear enough. Keiths is still convinced of the circularity; maybe there are no limits to how clear one must become 🙂

Empirical tests can be repeated, unless they are shown to be useless. That is my simple point. dFSCI is the basis for design inference. In a very important field of cognition, such as the inference of design in biological objects, it makes sense only if it is really 100% specific.

The specificity of dFSCI rests on the simple fact that the kind of strings it points to has never been shown to emerge from necessity, randomness, or a mix of the two. That’s why those that exist have always been found to be designed, when the origin could be assessed.

The point is that such a result will never change. For us, who accept the specificity of dFSCI, its specificity has nothing to do with our ignorance of appropriate necessity explanations that could be found some time. We really believe that dFSCI identifies a type of string that will never be explained by randomness, necessity mechanisms, or a mix of the two.

This conviction can be true or false. As far as no counterexample is sound, we believe it is true. You can obviously believe differently, and go on looking for a counterexample.

I hope this makes it clearer why it would be useless to “update” the judgement about dFSCI if a necessity mechanism is found that explains the object. The point is that if the object was really the kind of string dFSCI points to, and a necessity explanation is then found, then dFSCI is not what we believe it is, and becomes useless.
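In the diagnostic-test terms used earlier in this exchange, the 100% specificity claim is equivalent to claiming zero false positives among all negatives tested. A minimal sketch of that standard definition, with purely illustrative counts:

```python
def specificity(true_negatives: int, false_positives: int) -> float:
    """Specificity = TN / (TN + FP): the fraction of non-designed cases
    correctly not flagged as designed by the test."""
    return true_negatives / (true_negatives + false_positives)

# With zero false positives, specificity is exactly 1.0 (the 100% claim):
print(specificity(1000, 0))            # 1.0
# A single confirmed false positive breaks the claim:
print(round(specificity(1000, 1), 4))  # 0.999
```

This is why a single counterexample matters so much to both sides of the argument: one false positive is enough to move specificity below 100%.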

In any case, that just loads #4 with the question of evolution. As that seems to be the question at issue, what is the purpose of dFSCI?

Although there is no doubt that the falsification of neo darwinian evolution remains a fundamental point of the ID theory, there is a very important reason to base design inference on dFSCI, or any equivalent concept.

It is a common criticism from your field that ID is only a “negative” theory, what many of you love to call “an argument from ignorance”.

That is simply not true. The argument is positive, and derives from dFSCI (or any equivalent concept) and its empirical connection with design. This is a very positive argument: the possibility of detecting the factual origin of something from an observable property of that something, based on past observations of that connection. It has nothing that could be described as “an argument from ignorance”.

Now, the whole ideology of neo darwinism is based on a very special assumption: that objects like biological objects and strings, objects that, according to Dawkins himself, “look designed”, can really be explained by a mixed mechanism (RV + NS).

But those objects certainly exhibit dFSCI!

Therefore, if the neo darwinian assumption is true, the observed connection between dFSCI and design would be, for the first time, falsified.

That’s why ID, together with its “positive” part, has also taken on the burden of showing that the neo darwinian theory is a false scientific theory. That is the “negative” part of ID.

But the positive part remains the most important component of the theory. And it critically depends on the concept of CSI/dFSCI.

>If (as seems the case) gpuccio is willing to designate a sequence as having dFCSI based on its length, function, and complexity, then his argument does not rule out it arising by natural selection and mutation. If he designates dFCSI based on our not now having evidence for RV+NS, the designation of dFCSI does not inherently rule out that the evidence might be found later.

gpuccio argues that this has never been shown to occur, as an empirical proposition. gpuccio does not seem to have any theorem showing that it inherently cannot occur in the future. So the use of #4 does not rule out finding evidence of a “deterministic mechanism” in the future. The use of dFCSI seems to be that it represents a formal determination that gpuccio has made that he feels that it is extremely unlikely that this sequence will be found in the future to have arisen by RV+NS.

OK, but my reading is that the size of dFSCI or the number of bits contributes to determining that evolution is improbable.

I say this because Gpuccio raises no objection to two or three functional mutations taking place in a few years in a small population.

So if half a dozen characters are not a problem, but 80 characters are a problem, then dFSCI is being used to determine that RMNS is not the cause.

Keiths:

petrushka:

OK, but my reading is that the size of dFSCI or the number of bits contributes to determining that evolution is improbable.

The numerical dFSCI value reflects the probability that the target could be produced by pure random variation, without selection, in a given amount of time. This is one of the reasons that dFSCI is so misleading — the value itself is unimportant. All of the freight is carried by the implicit boolean part.

You may be thinking of gpuccio’s protein family argument, in which he argues that the distance between selectable intermediates is too great to be bridged by pure RV. In other words, his argument (at least in that case) does not deny the power of RV + NS per se; it’s really an argument about the nature of the protein fitness landscape. He thinks that NS never gets a chance to do its thing because the selectable intermediate proteins are too far apart to be located by RV in a reasonable amount of time.

Hey, Keiths, thank you for doing so well my job! I don’t think I can pay you adequately (you know, the crisis), but thank you anyway. You can go on like that, I am sincerely pleased 🙂

Yes, and he’s also assuming that evolution has a prespecified target, rather than opportunistically exploiting anything useful it stumbles upon.

That is another problem, what I call the “any possible function” objection. It is a perfectly serious objection, and I have discussed it many times. I will not do that again here, just to avoid widening the present discussion even more.

One thing it means is that once a gene family (say the globins) gets started, none of the subsequent change creates dFCSI if the original globin had it.

That’s more or less correct.

To be more clear, let’s say a gene family has started. I would say that two different kinds of events seem to happen in its following history:

a) First of all, the same protein, with more or less the same structure and function, becomes gradually different in different species at the sequence level, while retaining structure and function. The more distant the species, the greater the sequence divergence usually is. But structure and function are maintained.

That can be easily explained, IMO, as the effect of neutral mutations accumulating in time. That scenario has been described as the “big bang theory of protein evolution”: A protein suddenly appears, and then slowly “traverses” its functional space as a result of neutral mutations. Neutral mutations and negative selection of detrimental mutations can very well explain that.

I believe that such observations are at the same time a very good argument for Common Descent, and a very good argument for the designed origin of the protein family.

b) Another type of event is the development of new functions in the family, usually because of variation at the active site, while in most cases the general folding structure is maintained.

Now, I agree that dFSCI has no role on explaining this kind of transitions, if the complexity implied is low. So, here we are in a field that could more easily be explained by the neo darwinian mechanism. Anyway, even here the mechanism should be well documented, before we accept the explanation.

Let’s call this scenario “semi microevolution”.

Axe has discussed this problem in a paper at Biocomplexity. He believes that many of these transitions cannot be explained by the neo darwinian mechanism, and sets the empirical threshold for explainable transitions at 7 AAs. That is a reasoning similar to my dFSCI, but the threshold is much lower, according to Axe’s computations.

Let’s say that I have based my 150 bits (35 AAs) biological threshold on a very broad reasoning “a la Dembski”, applied to the biological scenario: I have grossly computed the maximum biological probabilistic resources for the whole planet earth, with the most favourable assumptions for the RV + NS theory.
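The shape of such a resource bound can be illustrated with rough arithmetic: multiply out an estimate of the total number of trials available, then take log2 to express the bound in bits. The figures below are placeholders chosen only to show the form of the computation, not the actual estimates used in the discussion:

```python
import math

# Placeholder figures, for illustration only:
organisms_per_year = 1e30       # rough global prokaryote population
years = 4e9                     # approximate age of the Earth
trials_per_organism_year = 1e3  # generous per-organism mutation count

# Total trials available, then the bound expressed in bits:
total_trials = organisms_per_year * years * trials_per_organism_year
bits_of_resources = math.log2(total_trials)
print(round(bits_of_resources, 1))  # 141.5
```

With these (hypothetical) inputs the resources come out near 141 bits, which shows why a threshold in the neighbourhood of 150 bits would sit just beyond them; changing any input by a few orders of magnitude shifts the bound by only a handful of bits.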

According to Axe’s reasoning, that threshold is certainly too high. I believe he may be right.

However, I would happily leave the problem of “semi microevolution” open.

But Dembski’s CSI is not formulated in terms of originating a new gene from an “unrelated” one — Dembski could as easily be talking of making (large) improvements in an existing gene. But dFCSI is not concerned with that, instead with the “origin” of a new protein from an “unrelated” one. Different notion entirely.

Well, I don’t know… As explained, dFSCI can be applied to transitions from a related state, but we have to ignore the related part in the computation, and just compute the functional complexity of what needs to change to obtain the new function.

F/N: Have the objectors to design theory and the use of the FSCO/I concept as a sign taken time to see how the statistical form of the second law of thermodynamics is framed, and how it is confidently asserted but is subject to empirical refutation by empirical counter-example? KF

Of course, the vast majority of scientists believe that evolution does result in functional complexity.

Of course that alleged vast majority can believe whatever they want to. However they cannot demonstrate that unguided evolution can produce functional complexity and that means they don’t have any science to support their belief. What does that say about their objectivity?

Of course it’s negative: “#4) It is required also that no deterministic explanation for that string is known.” That’s a negative condition.

Pure stupidity. ALL design inferences require the elimination of necessity and chance. That is part of Newton’s four rules of scientific investigation.

Joe, gpuccio does not assess “dFSCI” positive if a string is the result of a known “necessity mechanism”, even if that string passes all other tests for “dFSCI” except for its origin due to a known “necessity mechanism”.

No, dFSCI exists regardless of how it came to be. We have been over and over that already.

If a known “necessity mechanism” can generate a string that would be as functional as one requiring complex design above the UPB, why would I need to go through a design procedure?

There isn’t any known necessity mechanism that can produce dFSCI. When you find one, please submit it to peer review. Until then, stuff it.

‘Transitions’ are all about moves between related states – related by descent

When I say “unrelated state”, in my argument, I mean “unrelated at sequence level”. To be very severe, we can choose groups with less than 10% sequence homology. You can get 6258 such groups from the SCOP database.

But separating out ‘protein domains’ is an arbitrary class that we can see but evolution/genetics cannot. It amounts to protein baraminology.

Is SCOP protein baraminology?

But the string-copying process doesn’t know where genes start and stop, let alone domains, and makes no contribution to assessments of the length of a segment whose entire linkage sequence is discretely ‘functional’.

Yes, and so? That just means that variation at that level is random.

Function is assessed in lives, skewing the distribution of non-demarcated strings presented to the copy process in favour of the ‘useful’.

It’s not perfectly clear. I would say that function works in lives, and is assessed in the consciousness of conscious observers. But OK, let’s go on.

What GP misses is the fact that parts of biological strings are related to each other.

Am I really missing that?

The basic unit of the protein is simply the amino acid, not the domain.

The aminoacid is the basic structural unit. It is certainly not the basic functional unit. The basic functional unit is the domain.

And the basic unit of the amino acid coding segment is the nucleotide, blindly copied.

Yes, the nucleotide is the basic structural unit of DNA. Are we done with trivialities?

These ‘modules’ can build by duplication locally or distally into higher-order modules,

Which modules? Aminoacids and nucleotides? What are you saying? That single aminoacids can be joined, and form sequences of two, three, “n” aminoacids? That single nucleotides can form sequences of nucleotides? Is that what I was missing?

among which are the domains that have proved of evolutionary value en bloc, and hence are found repeatedly in various proteins of various functions.

So, up to now we have discovered that functional domains are made of sequences of individual aminoacids. Thank you for the interesting information.

But the domains themselves are modular. A four-acid sequence may give you one turn of a helix. Duplicate it and you get two. Duplicate that you get four…

So, let me understand: a four acid sequence is functional, and is naturally selectable? Does it give reproductive advantage?

And may I ask, are proteins made of “words” of four aminoacids? Can we see those repeated sequences when we blast proteins?

Answers, please.

In no time at all you have a lengthy structural element.

A functional, naturally selectable element?

A bit of point mutation, with quite a bit of latitude due to relatedness of amino acid properties, may then obscure the ‘true’ relatedness of these elemental repeats.

Excuse me, I don’t understand. We can find homologies between related proteins in a family, even after the whole process of evolution from OOL to now, and you are saying that we cannot find any homology between proteins generated by the duplication of the same modules? That’s good logic, indeed!

Then GP comes along and says that there is no way that this longer sequence could have arisen by mechanical means

Yes, I do.

And with no clear audit of the steps, thanks to the eliminatory and obfuscatory nature of the very processes of evolution, his diagnosis of Design is irrefutable

Only one thing is irrefutable: with your faulty logic, you insist on making neo darwinism a non-scientific theory, one that will never be supported by facts, and that any Popperian student would immediately dismiss as pure philosophy.

I think I am more charitable with neo darwinism: I consider it a perfectly scientific theory, one that can be falsified. Indeed, one that has already been falsified.

First – I apologise about the self-critical comment. In fact I thought about editing it and removing the comment shortly after posting it – but it is bad practice to edit comments that have been posted except for clarification.

I happily accept the apology. No harm done!

I am going to leave the “outside the debate context” argument because I have failed to explain what I mean and I am tired of it.

Me too.

There are easier ways to expose the problems with the dFSCI argument.

OK, let’s see.

Presumably you are aware that there are deep conceptual problems with classical hypothesis testing (and Fisher’s original formulation has been abandoned by statisticians). In fact you can’t live without Bayesian inference. You use it – but without realising. I will explain.

Ehm, I was afraid you would go back to Bayesian inference. OK, let’s see.

First let’s be clear. You can never know there is no natural necessity system. All you can say is that you do not currently know of one and assess the probability of one existing. A quite different thing.

That’s what I have always said. The concept of knowing that something cannot logically exist is not part of my thinking. It has repeatedly come from your side.

So given a digital string with a function you have three alternative types of explanation:

It arose through “random” arrangement of the string (there are problems defining what “random” means here but that’s another post)
It arose through a natural necessity mechanism (and a natural selection process is one such process)
It was designed

That smells of Dembski, but it’s fine for me. Let’s remember, however, that these three “explanations” are derived from experience, and are not logical alternatives that exhaust all that can exist. We are not dealing with a logical theorem here.

Now, let’s go to your “Bayesian” argument.

Please, let me express in simple words, that IMO catch your concept without going into the details of Bayesian statistics.

What you are saying is:

I can agree that the probabilities of 1 and 2 (RV and necessity) may seem low, but if 3 (design) is even more unlikely, I prefer 1 and 2.
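The comparison paraphrased here is, in Bayesian terms, a comparison of posterior probabilities. A minimal sketch of that arithmetic follows; every number in it is a purely illustrative placeholder, not a claim about the actual probabilities under dispute in this thread:

```python
# Bayesian comparison of three candidate explanations.
# Priors and likelihoods are illustrative placeholders only.
priors = {"chance": 0.45, "necessity": 0.45, "design": 0.10}

# P(observed functional string | hypothesis), again purely illustrative.
likelihoods = {"chance": 1e-12, "necessity": 1e-10, "design": 1e-3}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

best = max(posteriors, key=posteriors.get)
print(best, round(posteriors[best], 6))
```

The sketch makes the structure of the disagreement visible: the winner is simply whichever hypothesis has the larger product of prior and likelihood, which is exactly why the prior assigned to “design” becomes the crux of the debate below.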

You sum up the real point in your final statement:

But of course, unless you have a prior belief in a God, the same is true of 3 as it applies to life. There is very little chance of a designer being able to implement life and no obvious reason why such a designer would want to do so.

Well, I understand your position, but have to disagree. That’s why:

a) Your position depends critically on your pre-commitment to a specific world view: not only one where God does not exist, but one where non-material beings do not exist, consciousness can be explained by arrangements of matter, and consciousness has no special properties that matter cannot explain.

OK. I have nothing against your world view. But it is not mine. And it is not the world view of a lot of people in the world. Above all, it is not a world view that is more compatible with science than any other.

In my world view, God exists, non-physical beings exist, consciousness cannot be explained by matter, and it has distinctive properties that cannot be explained by matter.

Now, I am not asking you in any way to share my world view, or even to consider it non-laughable. What I am saying is that you have no right to ask me to make my scientific inferences as though your world view were true, just as I cannot ask the same of you.

IOWs, we cannot establish the probabilities of 3 (a non-physical designer who has designed life), unless we impose our personal world view on all.

b) And then? Well, we cannot establish the probabilities of 3. But the probabilities of 1 and 2 can well be established and do not require a world view war. And they are extremely low.

But here is something we can assess, because it comes from observation: a conscious intelligent being seems to be the only entity in the universe that easily outputs dFSCI in designed objects.

So, if conscious intelligent beings other than humans exist, and could have existed and interacted with matter when life appeared, and in the course of its history, the abundant dFSCI in living things can be explained by the only thing that seems to have the power to generate it: conscious intelligence.

Maybe that is nothing for you. Maybe your commitment to a reality where the only conscious intelligent beings are humans is so strong that you easily dismiss 3, in spite of its explanatory power, and of the lack of any other explanation.

That’s fine for me. As said, I respect people’s faith.

I am happy if we can agree on this:

a) You say: I agree that my explanation is not satisfying, but it is the only one I can accept, so I stick to it.

b) Alternatively, you can just say: “I don’t know”.

c) I will say: I know that my explanation implies the existence of conscious intelligence in forms that are different from human intelligence. That’s not a problem for me. Indeed, my map of reality is absolutely compatible with that implication. I happily accept design as the best explanation for biological complexity, because all other explanations are really non-credible, and because conscious intelligence is a principle that exists, and that has repeatedly demonstrated its ability to easily generate dFSCI.

I have nothing against your reflections about medical diagnosis. I think I have already answered your points in my previous post. However, I would also say:

a) That the simple fact that dFSCI exists in nature is amazing. Indeed, we don’t know how conscious intelligence does it. But it does.

b) All your considerations arise from sticking to the concept that “humans” do it. You never consider that “humans” can do it only because they are conscious and intelligent. So, we are back to the question: what is consciousness? What is intelligence?

If you believe that consciousness and intelligence are the product of a physical brain, your reasoning is consistent. But it is based on a false premise. Consciousness cannot be explained by arrangements of matter. Never has been, never will.

If that were true, computers should be able to generate original dFSCI, independently of what is input into them. They can’t. They will never do that.

In the end, our scientific reasoning is always conditioned by our world views. Philosophy of science is right about that. Science is not the place of absolute knowledge. But many things can be shared even if worldviews are different.

So, I am happy that I could share my concepts about dFSCI with you, and that in the end only your worldview prevents you from accepting them.

Of course, the vast majority of scientists believe that evolution does result in functional complexity.

Again an appeal to conformist thought?

The entire biosphere is the counterexample.

Unless it is designed. Please, don’t be circular now.

Not sure what you’ve accomplished with dFSCI, as #4 is determined external to your calculation of functional complexity. The most you can say is that you have identified a property, functional complexity, then assert by analogy, while rejecting the evolutionary sciences, that the property signifies design.

It’s not “the most I can say”. It’s what I say.

Is that the entirety of your point?

Yes.

Of course it’s negative: “#4) It is required also that no deterministic explanation for that string is known.” That’s a negative condition.

Wrong. Something is negative, and is an “argument from ignorance”, only if it is based entirely on negative conditions. But a negative condition can certainly support a positive definition.

ID is an “argument from ignorance” only in the ignorant arguments of its adversaries.

I see that gpuccio (#476) has agreed with you. Of course gpuccio does not formulate dFCSI in terms of proteins — it’s strings. But gpuccio does want an “explanation” of the string, as a step-by-step scenario for the evolution of the string from an unrelated string.

That’s correct. But, as I have said, we can also apply the concept of dFSCI to a transition from a related state, just by measuring only the functional complexity of the transition. In that case, the functional complexity that was already available in the starting state is not computed.

dFSCI is a flexible tool. The point is always the same: how much information is necessary to explain the new function that arises?

Gpuccio has been at this for at least a couple of years, and I thought he had made it plain that he believes protein domains were created ex nihilo by a non material designer.

At one point I believe we discussed the possibility that the designer revisits his creation every million years or so to pop in a new domain. As evidenced by the apparent history of new domains appearing in new lineages.

Two lines of reasoning are critical to GP’s argument.

1. Function is isolated and cannot be bridged by evolution.

2. At least some domain sequences have no relatives or variants. He uses this to reinforce the concept of isolated islands.

I see no reason to get excited about the fine points of his definition of dFSCI when these are the real meat of his argument. If he is correct that one needs 80 or more bases in a precise sequence before any selectable function appears, then he has a strong argument.

OK. All that is very reasonable.

By the way, I would specify that I believe that the functional information in protein domains was inputted by a non-material designer. There is no reason for the string to be created “ex nihilo”. Only designed by intelligence.

Moreover, I would not accept that “the designer revisits his creation every million years or so to pop in a new domain.”

I never identify the designer with a creator. That is not a conclusion that ID can reach. And anyway I don’t think that the designer “revisits” anything. For what we know, he (or they) could just be there always. It is true, however, that the history of information emergence in life is “punctuated”, so his interventions, at least the most obvious, are certainly discontinuous.

Why would you say that only finding a “necessity mechanism” after a positive “dFSCI” assessment, falsifies the validity of the procedure?

Any string X, that has passed all other tests except an assessment of whether it is a result of a “necessity mechanism”, is already functional at the level you claim only design can deliver.

If you find a “necessity mechanism” that can generate a string that you would assess as positive for “dFSCI” had no “necessity mechanism” been found, then “dFSCI” has already failed as a procedure for testing design.

There is no need to wait to find a “necessity mechanism” to invalidate “dFSCI” as a design detection tool for an otherwise suitable string, if you’ve already found it beforehand.

I agree. That would falsify dFSCI just the same. I was answering Mark, I believe, and using the framework of his argument.

But you are right. If a necessity mechanism can generate strings that would normally be considered, by a careful observer well aware of the correct procedure, as exhibiting dFSCI, that would falsify the utility of the procedure itself.

Interestinger and interestinger. So the thing that must be explained is not origin of a gene from an unrelated sequence but new information needed for the origin of a “new function”, even if from a related sequence. So what has to be “new” about a new function? Where is that explained in the definition(s) of dFCSI?

I believe it is rather simple. We can use dFSCI to evaluate how unlikely it is for a protein gene to arise from some unrelated state. In that case, the full FI in the gene must be explained.

On the other hand, we can use dFSCI to evaluate how unlikely a transition is from a related state to a new state that has a different function (or even a refined level of the same function).

Let’s take the case of emergence of nylonase from penicillinase, for example. We can give penicillinase as already existing (the starting state) and consider the transition to nylonase, which implies a definite change of affinity for a substrate (nylon), with the ensuing ability to metabolize it. That transition generates a new function (the ability to metabolize nylon), but it is not complex, because it implies only one or two substitutions (I don’t remember exactly).

Other transitions, which allow the emergence of new biochemical activities in the same family, can have different complexities. dFSCI can be a valuable tool to analyze them too.

Obviously, the explanation of the emergence of a new gene remains its most relevant application.

Well that depends a bit what you mean by “natural necessity” which I find rather vague but I think means “not designed or random”.

No. It means an explanation where natural laws necessarily determine the observed output, without any need for a probabilistic treatment of the system.

It is a bit stronger than that. If 3 is even more unlikely then it would be irrational not to prefer 1 and 2. Do you accept that this is true? It is not clear from what follows.

If 3 is more unlikely, I would prefer the more likely explanation, or just look for another one. I would not express that choice in terms of being “rational” or “irrational”, but I would certainly choose what seems the best explanation, or just admit that there is no credible explanation available. I have no problem living with mysteries.

Yes under your world view God is the most likely explanation for everything for which we cannot find an answer.

No. You must excuse me, but I really believe that you do not understand my world view. In my world view, God is an answer for all things for which He is an answer. Some of those things could probably be answered differently, but again I choose the best explanation. The negative concept of a “God of the gaps” is yours, and only yours. In my world view, God is a very tangible reality, and has nothing to do with “gaps”.

However, I must remind you that I have never used the concept of God in my arguments. That concept is very intimate for me, and I feel no need to use it in this kind of discussions.

I have, indeed, used the concept of consciousness and of non-physical conscious beings, and similar, because those are a credible, although not necessary, implication of the design inference for biological beings. I have only noted that I believe that God exists in order to correctly characterize my world view, but I don’t think we should center our discussion about ID on the concept of God.

But the design argument is meant to be independent of any world view. It is an important concession that it is only valid if you adopt your world view.

No. I believe that no argument, of any kind, can be “independent of any world view”. That is simply impossible. And the point is not that the design argument is only valid if someone adopts my world view.

It’s the other way round. The design argument is perfectly valid for all, except those whose worldview has already decided, for instance, that non-physical conscious beings cannot exist, and that therefore 3 is more unlikely than what has been shown to be extremely unlikely. It’s your commitment to your worldview that makes 3 so unlikely. I have only proposed to agree that we cannot establish the probability of 3, because that depends critically on personal worldviews. Therefore, your argument that my argument should be refuted, because 3 is too unlikely, is not valid for all, but only for those committed to your world view. All the others, including serious agnostics about the problem, can well accept my argument, and ignore yours.

Well actually the probability of 2 is also unknown. We don’t know what necessity mechanisms might be round the corner. I only said that your argument assumed this probability was low.

This is just your opinion. My argument, and my intelligence, assume that the probability of 2 is extremely low for strings of the type that would elicit a positive assessment of dFSCI. If it were not so, we would have many counterexamples already available.

Careful. What I have discovered over the course of this debate is that dFSCI is a relationship between a person’s knowledge, a function and digital string. (It also describes a process which might work well for detecting design if design is a common explanation (see my medical analogy)). Only by accepting this can you avoid the circularity objection. A string can have dFSCI for one function and not for another. It can have dFSCI for one observer and not for another. It can have dFSCI for an observer at one time and not at another. It is not a property of a string which can be “output” by anything.

I don’t agree. It is certainly true that dFSCI is evaluated for a certain definition of function, and is relative to that definition. That is implicit in the definition and procedure of dFSCI, and is perfectly true.

But I don’t agree that an object “can have dFSCI for one observer and not for another”. Allowing for the possibility of individual errors, or of obstinate irrational positions, which are the right of any human, dFSCI can be objectively decided. If there are differences of judgement, these can be discussed, and a correct adherence to the definition and procedure will always lead to a consensus, at least among reasonable observers.

And I do not even agree that an object “can have dFSCI for an observer at one time and not at another”.

As I have explained to Joe, and then refined for Toronto, that is not really true. The necessity clause is necessary to exclude essentially highly compressible strings and pure data strings, as we have seen. It is not meant to exclude arguments like unguided evolution, or any similar non-obvious necessity explanations, which could work but have never been found to work. As I have said, the moment such kinds of explanations were shown capable of producing objects that exhibit dFSCI, IOWs that are not highly compressible, are not data strings, cannot be easily explained by existing laws, and so on, then the whole meaning of dFSCI for a design inference would be falsified. So, I don’t agree that the judgement about dFSCI is subjective. It is not.

Finally, I will reformulate your two statements in a way that I can fully accept:

a) There are reasons to refute the argument from dFSCI only if you have a prior belief that an intelligence with the power and motivation to design life is extremely unlikely.

b) If the design explanation were even less probable than the random or necessity explanations, then I would choose the best explanation available, or just proclaim the issue still a mystery, if no available explanation is IMO credible.

However, you must be aware that this very same understanding of your argument is what leads me to the conclusion that dFSCI is a useless concept that adds nothing of value to the discussion. All it does is to impart a pseudo-scientific sheen to the ID argument and disguise the fact that it is really just an argument from ignorance.

OK. I will live with that, as long as you help me being understood on the other side! 🙂

Sorry for the accusations (I would not call them insinuations, they were very explicit) of lying, but I believed, and still believe, that in the context you fully deserved them. But I am sure you can live with that too. 🙂

So we have random, design and “without the need for probabilistic treatment”. Where do explanations that make the outcome more probable than random but not fully determined fit?

I am surprised at how you put it. Let’s try my way.

a) A necessity explanation explains what we observe by laws, usually mathematical laws, where the evolution of a system is completely determined by the initial state and by the laws themselves. A good example is classical mechanics. In a necessity system, the evolution of the system is determined with probability 1.

b) In a sense, all natural phenomena, if we do not deal with quantum physics (or, maybe, conscious phenomena), are usually assumed to be caused by necessary laws. But in many cases, the very nature of the system prevents us from achieving any detailed description of its evolution in terms of those laws. That’s where a probabilistic description is of some help: it does not allow us to describe in detail what will happen, but it still gives us some useful information about what can happen. IOWs, probabilistic systems are indeed necessary systems, but our description of them is completely different. Quantum mechanics is obviously another matter altogether, as you well know. Conscious events: well, let’s say they are an open problem.

c) Design is any process where a conscious intelligent being outputs a specific, purposeful form from his conscious representations to a material object (or system). As the origin of design is a conscious representation, I would say that its real nature remains an open problem.

So, as you can see, it is not that necessary systems are explained in terms of probability. Necessity is the natural way we cognize things: the cause-and-effect relationship, which is so basic to all human reasoning. Probability is a more sophisticated concept, whose nature is still open to debate. It is, however, a useful cognitive tool, fundamental for modern scientific thought, because most systems cannot be described in detail in terms of necessity.

With regard to an outcome being “more probable” or “less probable”, I would say we are always in the case of probabilistic description. A random system is a random system, whatever the probability of an outcome. It is true, however, that necessity effects can change the probabilities of outcomes in a random system.

Let’s take the simple example of coin tossing. If the coin is perfectly fair, the discrete distribution of the two possible outcomes of a single toss will be uniform: 0.5 for H, 0.5 for T.

If we change the weight distribution in the coin, we are acting by law, and the result (the different description of the coin in traditional mechanics) can be easily described by necessity laws. But, when we toss the coin, the result will still be unpredictable, because of the many variables we can’t control. We are again in a random system.

But the probabilities of the two events have probably changed. Let’s say that now H has probability 0.6 and T has probability 0.4. It is still a perfectly valid random system, with a different distribution of probabilities, because of the necessary effects of our acting on the coin.
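The biased-coin example can be simulated directly. A quick sketch (the 0.6/0.4 split and the seed are just the illustrative numbers from the paragraph above; the observed frequency is empirical, so the bound is loose):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def toss(p_heads=0.6):
    """One toss of a coin biased by a 'necessity' effect (the weighting)."""
    return "H" if random.random() < p_heads else "T"

n = 100_000
heads = sum(toss() == "H" for _ in range(n))
freq = heads / n
print(f"observed H frequency: {freq:.3f}")  # close to 0.6, yet each toss stays random
```

The point the simulation illustrates is the same one made in the text: the necessity effect shifts the distribution, but the system remains a random system.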

Something like that can be said of the traditional algorithm of RV + NS. The system is a random system, and all the variation derives from random events. But we know that some configurations, if they happen by chance, will modify an important parameter of the system, the reproductive rate of some beings versus others. The effect of that new arrangement on reproduction can be described by laws, biochemical and biological laws. However, as the necessary results of this necessary variation interact with other random variables, the final effect is in some way unpredictable.

Still, we anticipate, and indeed verify in the field, that those variations which, because of their necessary effects, negatively affect some important existing biochemical function will usually affect reproduction negatively, to the extent that it depends on that function. The opposite can happen, although much more rarely, for beneficial mutations. Neutral mutations by definition should not have any necessity effect.

So, here too, the system remains a random system, but the probabilities of specific outputs can change.

As I have tried to show, and model, the introduction of a step of pure, perfect NS in a transition greatly affects the probabilistic barriers, essentially by increasing the probabilistic resources of a specific outcome. That is due essentially to the expansion of the intermediate.
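The effect described here, a selectable intermediate lowering the probabilistic barrier, can be illustrated with expected waiting times for a blind search. This is a toy calculation with arbitrary bit sizes, not a model of any real biological transition:

```python
# Toy comparison: finding a specific 40-bit target by uniform blind
# sampling, versus finding two selectable 20-bit halves in sequence.
# The expected number of trials for an event of probability p is 1/p.

direct_p = 2 ** -40           # probability of hitting the full target in one trial
direct_trials = 1 / direct_p  # expected trials without any intermediate

step_p = 2 ** -20                  # probability of hitting one selectable half
stepped_trials = 2 * (1 / step_p)  # two sequential 20-bit searches

print(f"direct : {direct_trials:.3e} expected trials")
print(f"stepped: {stepped_trials:.3e} expected trials")
print(f"ratio  : {direct_trials / stepped_trials:.3e}")
```

With these numbers the selectable intermediate reduces the expected search by a factor of 2^19, which is the sense in which a selected step “expands the probabilistic resources” of the outcome.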

I hope this answers your question.

What do you mean by “best” explanation? Is it different from most probable?

Slightly. And again, it is a question of worldviews. In yours, everything seems to be described in terms of probability. Not so in mine. In mine, cognition is a complex act of the consciousness, which implies reason, probability, intuition, feeling, choice. Let’s say that the best explanation is a choice of the cognizer, which can, at least in part, be described as a comparison between probabilities. I would rather describe it as a comparison between complex representations.

But you accept we need to centre it on an intelligence with power and motivation to design life? Whether this corresponds to your God or not I am happy to leave.

Yes, I certainly accept that.

This is unreasonable. There are an infinite variety of world views. You write as though there were only two alternatives my view and yours. You make it sound as though I am making the commitment which excludes ID. But you can believe in non physical conscious beings and still think it impossible that there was something with the motivation and power to design life. You have to make the prior commitment to the extraordinary possibility of such a designer for ID to work.

This is unreasonable. There are certainly infinite world views, probably one for each conscious intelligent individual that ever existed. But it’s only your prior commitment to your world view that makes you think that a designer “with the motivation and power to design life” is an “extraordinary” possibility. I can’t see anything “extraordinary” in it. How do you explain that?

Therefore a string can have dFSCI for one function and not have it for another, right?

Yes.

Therefore it is not a property of the string which can be correlated with design.

This is simply ridiculous. Of course it is a property of the string, in relation to that function. The string has a specific form to express that specific function. For another function, it has no specific form. It seems that sometimes you fall back in some form of “animistic” reasoning, as though dFSCI should be a “ghost” that either haunts the object or not. It is a property, relative to a function. What is difficult in that simple concept?

Whatever type of necessity mechanisms we are talking about (and this does seem to be getting rather complicated), one observer may know of such a mechanism while another does not. Therefore for one observer the string has dFSCI while for the other it does not. It may be that as a result of discussion one observer will change his or her mind. But that happens later. I cannot see how you can avoid this. It follows from your definition.

The simple fact is that I am reasoning in a very clear and empirical way:

a) We observe the string, and try to know as much as possible about the system where it emerged.

b) We eliminate all strings that have any evidence of being ordered and highly compressible.

c) We carefully consider if any laws acting in the system are logically related to the information in the string, and therefore can explain it.

If all these conditions are reasonably satisfied, we affirm dFSCI. Any non biased observer will affirm it. If there are difficult situations, they can be discussed. There is no more subjectivity here than in any scientific approach.

As I have said, the purpose of the necessity clause is not to deal with possible “magical” necessity explanations, or with sophisticated mechanisms that are owned only by an élite of theosophists.

The question is simple: in the light of what science knows, is this information connected to simple computations (highly compressible)? Or is it in some way what we would expect from the working of physical laws, or of biochemical laws, or of any other well-known law of nature?

If the answer to both questions (which are, in essence, the same question in two different forms) is no, then the string objectively exhibits dFSCI: it will have the form of a pseudo-random string, a string that nobody can specially identify, except for its functional content. IOWs, it’s only the functional content that makes the string part of a specific subset of the search space.
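The “highly compressible” screen described above can be approximated mechanically with a general-purpose compressor. A rough sketch follows; zlib is only a crude proxy for Kolmogorov-style compressibility, and the threshold values are illustrative:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size; low values mean highly compressible."""
    return len(zlib.compress(data, level=9)) / len(data)

ordered = b"AB" * 500            # a simple repetitive, law-like string
random_like = os.urandom(1000)   # pseudo-random bytes, essentially incompressible

print(f"ordered    : {compression_ratio(ordered):.2f}")     # far below 1
print(f"random-like: {compression_ratio(random_like):.2f}")  # near (or above) 1
```

A functional string, say a meaningful English sentence, would sit between the two extremes: not reducible to a short generating rule, yet identifiable by its function rather than by its form alone.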

The magic necessity mechanisms that would explain that kind of strings exist only in your mind. Show one of them, prove that it works, and you will have demonstrated that all my discussions about dFSCI are bogus.

It seems to me your are saying that dFSCI is only true if there is no necessity mechanism (as opposed to no known necessity mechanism). That way leads back to circularity.

No. Stop there. Known, in the sense that someone can really show it. As opposed to fanciful, never seen, hoped for, imaginary. Known. That means that, if I cannot see an obvious mechanism that should be evident to all those who can reason and have enough understanding of the context, my assessment of dFSCI will be valid. As I have said, dFSCI, like any other procedure, must be applied by intelligent and responsible observers, who know the definition and procedure well and understand their meaning. If those elementary rules are followed, there is nothing subjective in it.

Why phrase it in terms of my prior belief? It can equally be phrased in terms of your prior belief that an intelligence with the power and motivation to design life is possible.

There is a good reason for that. dFSCI empirically points to design. So, it is natural to accept that it can point to design even in those cases where the origin cannot be ascertained. It’s you who deny that simple connection, in name of the utter improbability of a designer “with the motivation and power to design life”. An improbability that derives only from your world view.

I have no pretence that my argument is stronger or weaker than Behe’s. It’s just formally different, but I agree that all ID arguments share the same fundamental intuition.

I deeply respect Behe’s work. I just express things in my way, and try to answer the possible objections of my kind antagonists.

in UD comment #442 gpuccio chastised me as misrepresenting gpuccio’s method for analysing the strings proposed in the Challenge

I don’t want to chastise anyone. I try to correct wrong statements when I see them, because that is the spirit of a debate.

You had said that I wanted “to know exactly how they were made as part of the dFCSI assessment”. IOWs, that I asked for the origin before assessing dFSCI.

That is simply not true. I have never done that. All the clarifications I have ever asked for during the challenge were about the functional definition offered by those who were proposing the string. I have never asked anything about the origin of the string in order to assess dFSCI.

That’s all.

It’s true, in the end I wrote:

“Please, be more careful in the future”.

But that was not to “chastise” you, but because really each incorrect statement about what I have said or done, in this context, is going to cost me a lot of work just to clarify the point.

If you exclude highly compressible strings, you have excluded an “origin” for strings that in all other respects would qualify as “dFSCI” positive.

A negative assessment does not exclude anything, because dFSCI has many false negatives. I accept more false negatives in order to avoid false positives.

If the “information” in DNA is found to be compressible by some algorithm, does DNA now become “dFSCI” negative?

If it turns out to be highly compressible, the result of some simple computation, that would be exactly the same situation as finding a necessity mechanism that can explain it. That would falsify the procedure.

As I have said, compressibility and necessity mechanisms are essentially the same thing.

You and I have said that regardless of origin, any string whose bit configuration meets “dFSCI” requirements, has “dFSCI”.

Yes, and so?

Are you proposing a simple computation that can give us the sequence for a functional protein? I would immediately propose you for the Nobel!

As mentioned here by many, self-replicators are necessity mechanisms that can generate a lot of information that would be considered “dFSCI”, and yet the “information” required for self-replicators is below the UPB.

If you find an algorithm that can “uncompress”, i.e. generate, a string that for any other reason would be acceptable as “dFSCI” positive, and that compressed string is below the UPB, the concept of “dFSCI” is invalidated.

dFSCI can be invalidated as an indicator of design only in one way:

a) a string is correctly assessed as exhibiting dFSCI in a defined system and time span

AND

b) you show that it can credibly be generated without any design intervention in that system and time span.

Amusing to think that somewhere within the digits of pi is a representation of the best possible defense of the argument from dFSCI. The only question is: is it good enough? I strongly suspect the answer is no.

Yes, though gpuccio would no doubt respond that any physically realizable random number generator could not generate the complete works of Shakespeare within the lifetime of the known universe — hence the universal probability bound.

Thank you.

And even if pi turns out to be exhaustive in the required sense, gpuccio could argue similarly that a physically realizable ‘necessity mechanism’ for generating the digits of pi couldn’t create the works of Shakespeare within the allotted time.

Does anyone really believe it could?

However, this would mean adding yet another qualification to his patchwork argument: Instead of

It is required also that no deterministic explanation for that string is known.

…it would have to be:

It is required also that no deterministic explanation for that string is known that could have produced the string within the lifetime of the known universe.

And of course, it still ends up being an argument from ignorance.

I have nothing to add. It’s all already there. Haven’t you read the part where I speak of defining the System and the Time Span?

One can, however, generate a pretty good encryption key simply by specifying the starting point within the digits of pi. It is not beyond ordinary means to calculate a portion of pi, starting with any arbitrary point in the sequence. This could be XORed with the message to be encrypted or decrypted.

And that, I suppose, happens every day without any design intervention.
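The XOR scheme described above can be sketched in a few lines (my own toy, not a secure cipher: the digits of pi are public, so all the secrecy rests on the offset, and a real implementation would compute digits from the secret starting point, e.g. with a spigot/BBP-style algorithm, rather than hardcoding them):

```python
# Illustrative sketch of the pi-keystream idea; NOT a secure cipher.
# We hardcode an initial slice of pi's decimal digits as the keystream;
# the "key" is just the starting offset into the digit sequence.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def pi_xor(message: bytes, offset: int = 0) -> bytes:
    """XOR each byte with successive pi digits; XOR is self-inverse."""
    keystream = (int(PI_DIGITS[(offset + i) % len(PI_DIGITS)])
                 for i in range(len(message)))
    return bytes(b ^ k for b, k in zip(message, keystream))

msg = b"attack at dawn"
ct = pi_xor(msg, offset=7)
assert pi_xor(ct, offset=7) == msg   # same call encrypts and decrypts
print(ct.hex())
```

Because XOR is its own inverse, one function serves for both encryption and decryption, exactly as described for the XOR-with-keystream scheme.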

GAs wreck the neat mathematical arguments of the ID movement, which is why Dembski tries so hard to cast them as “searches”…

Sure. GAs as a search is a Dembski concept. He made it up all on his lonesome. Right.

That’s one of the most stupid comments I’ve ever seen from you. Having a bad day? Or just knowing you can say any old thing there at TSZ and no one there will say any different.

Wikipedia:

In the computer science field of artificial intelligence, a genetic algorithm (GA) is a search heuristic

Search Space

A population of individuals is maintained within the search space for a GA, each representing a possible solution to a given problem. Each individual is coded as a finite-length vector of components, or variables, in terms of some alphabet, usually the binary alphabet {0,1}. To continue the genetic analogy, these individuals are likened to chromosomes and the variables are analogous to genes. Thus a chromosome (solution) is composed of several genes (variables). A fitness score is assigned to each solution representing the ability of an individual to ‘compete’. The individual with the optimal (or generally near-optimal) fitness score is sought. The GA aims to use selective ‘breeding’ of the solutions to produce ‘offspring’ better than the parents by combining information from the chromosomes. here

If we are solving some problem, we are usually looking for some solution, which will be the best among others. The space of all feasible solutions (that is, the set of objects among which the desired solution is found) is called the search space (also state space). Each point in the search space represents one feasible solution. Each feasible solution can be “marked” by its value or fitness for the problem. We are looking for our solution, which is one point (or more) among the feasible solutions – that is, one point in the search space.
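The quoted description maps directly onto a few lines of code. A minimal sketch (names and parameters are mine) on the standard OneMax toy problem, where fitness is simply the number of 1s in a binary chromosome:

```python
import random

# Minimal GA following the quoted description: a fitness-scored
# population of binary "chromosomes", selective 'breeding' via
# single-point crossover, and per-gene mutation.
# Toy problem (OneMax): maximize the number of 1s in a 32-bit string.
random.seed(0)
LENGTH, POP, GENS, MUT = 32, 40, 60, 0.02

def fitness(chrom):                      # number of 1s
    return sum(chrom)

def crossover(a, b):                     # single-point crossover
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(chrom):                       # flip each gene with prob MUT
    return [g ^ (random.random() < MUT) for g in chrom]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]             # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", LENGTH)
```

The “target” here is whatever the fitness function rewards; the population climbs toward it through selection and variation, with no individual ever consulting the goal string directly.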

I think this is reinforced by the fact that most of those folks forget or ignore all the related “drunkards” that went extinct from mutations that didn’t provide enough use in given environments, or simply those organisms that didn’t get to pass on their genes. These two groups are just as significant to the existence of the organisms we have today, and are part of the reason the target concept is unnecessary.

Right. All those misses are why the target concept is not necessary. Forget the hits. Got it.

And it is only your commitment that makes it seem possible – after all, we have never observed any examples that come close to it. We have no idea how it could be done. It seems to me that we are both assessing the design option on the basis of prior commitments – why stress one rather than the other? I have the slight advantage in that there have been many phenomena that were formerly explained by a commitment to the design(s) of a supernatural force of some kind which are now universally accepted as being explained by previously unrecognised natural forces. There are zero occurrences the other way round.

I don’t agree, and I have explained why. There are objective reasons to believe that a conscious intelligent being has designed the biological information, and only your previous commitment can deny that.

But I will leave you to your ideologies. As said, I respect faith.

Accepted. I guess you are saying that if a string is found to have dFSCI for any function then it always turns out to be designed.

Yes.

Formerly you answered a), which avoided circularity – but of course that implies dFSCI is relative to an observer’s knowledge. It sounds like you are now shifting to b), which makes dFSCI circular.

I don’t understand why you say that. I am not “shifting to b”. I can see nothing in my words that justifies this comment. Perhaps you could explain better why you think that.

The evaluation of dFSCI is not subjective, but it is certainly relative to what can be assessed by observers from a careful observation of the string.

Well of course the body of widely known scientific laws changes dramatically from time to time. There is nothing magic about that. But what I am really getting at is that an explanation may be totally unanticipated by one observer or a group of observers even without discovering new laws of nature: tectonic plates are a good example. No one among all the community of geologists thought of this as an explanation of the formation of mountains and earthquakes. But it did not require new laws of nature – just a fertile imagination.

So, maybe one day a “fertile imagination” will falsify dFSCI. Or maybe not. Until then, it works.

The problem here is that you and I have much the same body of knowledge and intellectual skills. So any explanation that I thought of, you would have already thought of, and therefore there would be a necessity mechanism and you would not regard the string as having dFSCI. Suppose I found an example of a person or group of people with less knowledge than us because of history or age or education. Then I produce an example of a functional digital complex string for which they cannot see the necessity mechanism that created it, but we can. Would that refute the dFSCI argument?

Is that really an argument? My argument is: I can recognize designed strings by dFSCI, without false positives. Can you falsify that?