Old Earthers universally say that alternating layers of ANYTHING always denote annual varves... Which we now know to not be the case.

That is what I gleaned from Oard's book page 131. So I strongly suspect that it is the authors of the lake K research paper that jumped the gun in concluding that they are indeed annual layers.

Did you even read my posts, Dave?

Why should you "suspect" that Stanton et al "jumped the gun in concluding that they are indeed annual layers" when you didn't even know that they weren't diatom layers? And when Stanton et al go to great lengths to show that they are?

That they are forming today? That they are forming annually today? By an annual process? That this is confirmed by four different independent cross-checks? That these cross-checks include radiocarbon dating up to the limit of YEC acceptance, i.e. 3,000 years?

In other words there is NO reason for anyone, not even a YEC to think there is ANY reason to "suspect" that the top 3,000 layers are not annual layers, and no reason for anyone EXCEPT a YEC to think there is ANY reason to suspect that the bottom 6,000 layers are not annual, as they are exactly the same as the top 3,000 and the dates check out just as beautifully.

And you are simply wrong that "Old Earthers universally say that alternating layers of ANYTHING always denote annual varves". In fact I'd say that's a straight lie. You know it isn't true.

Please read my previous two posts. That's read as in "read" not read as in "Hawkins".

Old Earthers universally say that alternating layers of ANYTHING always denote annual varves... Which we now know to not be the case.

That's absolute BULLSHIT, dave. "Old Earthers" most certainly do NOT "universally" say that ALL alternating layers must be annual.

Quit lying.

That is what I gleaned from Oard's book page 131.

Yeah, hawkins-reading will do that to you, even if sleazeball Oard hasn't implied it.

So I strongly suspect that it is the authors of the lake K research paper that jumped the gun in concluding that they are indeed annual layers.

IOW, "they musta been wrong somehow".

Pathetic.

Got anything else?

Who even made the rule that we cannot group ducks and fish together for the simple reason that they are both aquatic? If I want to group them that way and it serves my purpose then I can jolly well do it however I want to and it is still a nested hierarchy and you can't tell me that it's not.

... I strongly suspect that it is the authors of the lake K research paper that jumped the gun in concluding that they are indeed annual layers.

So we've gone from "Lake Kalksjon is No Help for Old Earthers" to: some nobody with no education, experience or credibility in the field, with no idea what the actual evidence is, relying on some guy with a contractual obligation to support his religious dogma regardless of the evidence, talking about some phenomenon unrelated to the evidence in Kälksjön, "strongly suspecting" that a paper that he hasn't read "jumped the gun".

Old Earthers universally say that alternating layers of ANYTHING always denote annual varves... Which we now know to not be the case.

Oh really? And where did you get that idea? Have you ever thought of who it was that discovered and named rhythmites? It certainly wasn't Oard or anyone connected with AiG or ICR or the Creatidiot movement. Nor was it Oard and his ilk that determined varves are only a special type of rhythmite. How about all the mainstream scientists that study and write about rhythmites? Like the folks cited in this article: https://en.wikipedia.org/wiki/Rhythmite, or these folks who explicitly study the (non-varve) rhythmites of the Missoula Floods you used to be so fond of, and they most certainly do not accept YEC, being as they clearly state those floods occurred more than 12,000 years ago and had done so every so often over the past 100,000 years or so: http://iafi.org/missoula-flood-rhythmites/. I could go on and on, but what's the point, you'll never accept reality.

That is what I gleaned from Oard's book page 131. So I strongly suspect that it is the authors of the lake K research paper that jumped the gun in concluding that they are indeed annual layers.

Is your "gleaning" different from Hawkinzing? Less or more scanny? In any case, what you "gleaned" from one page of Oard's (not a geologist) book and what it causes you to suspect is nothing more than entertainingly amusing. Oard has been shown wrong time and again, just like you. And, as Pingu's post noted, he's completely wrong about Lake Kalksjon and for very solid reasons supported by very solid data.

Usually, that is the case. You subject your parameterised model (e.g. the carbon calibration curves) to a "risky" check against new independently derived data. That is an attempt to falsify your model.

It is different from "fitting" a model, and comparing the fit to an alternative model (e.g. the null model), which is what is done more commonly, and isn't usually an attempt at falsification (or at most is an attempt to falsify the null).
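A toy sketch of that "risky" check, in plain Python (the "calibration curve" and all numbers here are invented for illustration, not anything from the actual radiocarbon literature): a previously fitted model is frozen, then tested against new, independently derived data, and counts as falsified if any prediction misses by more than a set tolerance. The contrast with "fitting" is that nothing gets re-estimated from the new data.

```python
# Risky Popperian check: a previously fitted model (frozen here as a simple
# linear "calibration curve" -- entirely made up for this sketch) must
# predict NEW, independently derived data within a set tolerance.
def calibration(age):
    return 1.05 * age + 20  # hypothetical fitted parameters; NOT refit below

# (true_age, independently_measured_value) pairs -- invented data
new_samples = [(100, 126), (500, 545), (1000, 1069)]
TOLERANCE = 10

# The model is falsified if ANY prediction misses by more than the tolerance
falsified = any(abs(calibration(age) - measured) > TOLERANCE
                for age, measured in new_samples)
print("falsified:", falsified)
```

If we instead adjusted the slope and intercept to the new samples before scoring, we'd be back to fitting, and the check would no longer be an attempt at falsification.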

This is typically a result of using statistical hypothesis testing, though. Without a p value and a t value it doesn't make as much sense. There are some significant differences between what is possible in SPSS or Excel and what was possible in Popper's day.

Can you explain what you mean? Gosset, Fisher and Pearson developed their null hypothesis tests well before Popper wrote The Logic of Scientific Discovery.

I mean that typically the null is a statistical test.

Should have mentioned Neyman too. The Neyman-Pearson test came out in 1933. It preceded The Logic of Scientific Discovery by quite a bit.

Yes, I understand. Another statistical test. Those have become far more common since the advent of computers and statistical software. I think my very minor point was made poorly because I'm not sure why you are pointing this out. In Popper's day, there were very few chi square tests on 20k data point samples. The fact that the math was possible and the implications understood already is quite immaterial. Without computers, they were really quite limited in application. And, frankly, now that we have computers, they are given weight philosophically beyond their actual limits.
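For what it's worth, here's how trivial such a test is with modern software: a chi-square goodness-of-fit test on 20,000 simulated die rolls, in plain Python. The counts are invented; the critical value is the standard table entry for alpha = 0.05 with 5 degrees of freedom.

```python
# Chi-square goodness-of-fit test against a "fair die" null, no libraries.
observed = [3330, 3360, 3320, 3350, 3310, 3330]  # 20,000 invented die rolls
expected = sum(observed) / len(observed)          # null: all faces equally likely

# Test statistic: sum of (observed - expected)^2 / expected over the cells
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Critical value for alpha = 0.05 with 5 degrees of freedom (standard tables)
CRITICAL_5DF = 11.07
reject_null = chi2 > CRITICAL_5DF
print(chi2, reject_null)
```

Here chi2 works out to 0.52, far below 11.07, so the null survives. The point stands either way: what took a table lookup and hand arithmetic in Fisher's day is now a few lines anyone can run.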

That last is an opinion which is harder to support. What I mean is that the correlation/causation boundary is quite permeable and often, establishing the direction of causality is used as a shorthand for causality itself. Pharmaceuticals are notoriously declared to work "because" x. When the reality is that no one anywhere has fuck all idea what the drug is doing to affect the disease. All that is known is it affects a pathway or system that is also known to be correlated with a causal direction to the disease.

Love is like a magic penny if you hold it tight you won't have any if you give it away you'll have so many they'll be rolling all over the floor

... I strongly suspect that it is the authors of the lake K research paper that jumped the gun in concluding that they are indeed annual layers.

So we've gone from "Lake Kalksjon is No Help for Old Earthers" to: some nobody with no education, experience or credibility in the field, with no idea what the actual evidence is, relying on some guy with a contractual obligation to support his religious dogma regardless of the evidence, talking about some phenomenon unrelated to the evidence in Kälksjön, "strongly suspecting" that a paper that he hasn't read "jumped the gun".

Really. That calls for a second.

And you wonder why people say you suck at science.

Dave, voxrat made an important point here.

Yes, I understand. Another statistical test. Those have become far more common since the advent of computers and statistical software. I think my very minor point was made poorly because I'm not sure why you are pointing this out.

Well, I probably missed your original point. My point is that null hypothesis testing, which is still the workhorse methodology for hypothesis testing, Bayesian approaches notwithstanding, isn't Popperian - it is an attempt to falsify the null, not falsify the alternative hypothesis.

You could argue that it's still conservative, or "risky" in Popper's sense, but not because you are trying to falsify your hypothesis - only because you are committed to "retaining the null" if you fail to falsify that null.

And yet Popper was writing after Fisher et al - whose approach continued right through the establishment of the Popperian principle. My point is that mostly we don't do falsification tests of our theory-derived hypotheses (apart from validation tests, as Dave rather amusingly got diametrically wrong) - we do falsification tests of our null. And in practice, most nulls (in two-tailed tests, generally regarded as "conservative") are almost always wrong. So we have the bizarre situation where we attempt to falsify a null we know is almost certainly false!

And we don't even do it by calculating the probability that it's false - we do it by calculating the probability of what we observe given a hypothesis we know to be false!

In practice it sorta works, but it's a hell of a kludge. We need something better.

In Popper's day, there were very few chi square tests on 20k data point samples. The fact that the math was possible and the implications understood already is quite immaterial. Without computers, they were really quite limited in application. And, frankly, now that we have computers, they are given weight philosophically beyond their actual limits.

Null hypothesis testing is philosophically incoherent. But it works because most people regard the p value in Fisherian terms (as a proxy for how robust the result is) rather than Neyman-Pearson's (as a criterion by which to accept or reject H1), a trend that's become even more common now that computers spit out "exact" p values, rather than you looking up your test statistic in a table that just gives thresholds. But it's a really dumb proxy, and leads to the perception that somehow the p value is the probability that the null is true. Which it isn't.
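A quick simulation of that last point (pure Python, synthetic data): run many experiments where the null is true by construction, and rejecting at p < 0.05 produces false alarms about 5% of the time. The p value controls the long-run error rate given the null; it is not the probability that the null is true - here the null is true in every single trial, yet we still "reject" it in roughly one trial in twenty.

```python
import math
import random

random.seed(1)

def p_value_two_sided(z):
    # Two-sided p for a z statistic, via the normal CDF built from erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Simulate experiments where the null is TRUE (both groups share a mean of 0)
n, trials, false_alarms = 50, 2000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2 / n)        # sigma is known to be 1, so a z-test is exact
    if p_value_two_sided(diff / se) < 0.05:
        false_alarms += 1

rate = false_alarms / trials
print("false alarm rate:", rate)  # hovers around 0.05, as designed
```

That ~5% is the alpha level doing exactly what Neyman-Pearson said it would; nothing in it tells you how probable any particular null is.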

That last is an opinion which is harder to support. What I mean is that the correlation/causation boundary is quite permeable and often, establishing the direction of causality is used as a shorthand for causality itself.

Well, that's a different issue. I'm not sure I agree that "the correlation/causation boundary is quite permeable". The key is whether you randomly allocated your predictor variable. If you did, I think it's reasonable to conclude that your predictor was the cause of your effect. But only proximally of course. If it was a drug, say, it doesn't tell you the full causal pathway. It only goes back as far as the drug allocation method, and it's "permeable" in that blinding is rarely perfect. So if that's what you mean, I agree.

Pharmaceuticals are notoriously declared to work "because" x. When the reality is that no one anywhere has fuck all idea what the drug is doing to affect the disease. All that is known is it affects a pathway or system that is also known to be correlated with a causal direction to the disease.

I almost replied immediately to say no, I hadn't read this. But, opening it, I realized that I had! You linked it several years ago and I read it then but had forgotten it. It does make a good point though I don't think it dramatically affects my point. I tend towards a Feyerabend view of science as practice as I think I've mentioned so I view Popper as making basically a limiting case point. He made a very convincing point that verificationism is an impossible task. Falsification is, I think, the basis of the idea of the null which seems more like a special case of falsifiability to me. My own anarchic approach is to make a system model view that produces inputs and outputs that match observation, then work on the converters as falsifiable nodes of a schematic. But I think that falsification is still the criterion by which the scientific quality of the model must be judged.

That has a lot to do with my applications and very little to do with historical attitudes though. Different applications have different methodologies and it is easy to get hung up in reifying models based on methodology. The important thing to me is to remember why it is that all models are wrong and what it means to say that.

I almost replied immediately to say no, I hadn't read this. But, opening it, I realized that I had! You linked it several years ago and I read it then but had forgotten it. It does make a good point though I don't think it dramatically affects my point. I tend towards a Feyerabend view of science as practice as I think I've mentioned so I view Popper as making basically a limiting case point. He made a very convincing point that verificationism is an impossible task. Falsification is, I think, the basis of the idea of the null which seems more like a special case of falsifiability to me. My own anarchic approach is to make a system model view that produces inputs and outputs that match observation, then work on the converters as falsifiable nodes of a schematic. But I think that falsification is still the criterion by which the scientific quality of the model must be judged.

That has a lot to do with my applications and very little to do with historical attitudes though. Different applications have different methodologies and it is easy to get hung up in reifying models based on methodology. The important thing to me is to remember why it is that all models are wrong and what it means to say that.

I agree with you. It's why I like the Cleland article (maybe a mod could move this exchange into the Popper thread?) which points out that falsification doesn't really work in practice - what we do is compare competing models, knowing that all are wrong, but trying to find which is less wrong than the other (or sometimes, which is less wrong over a specific range of phenomena than the other).

Which is the sense in which we can confidently reject YEC - its residuals are huge! And the residuals on the standard chronology are getting better all the time.
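That "less wrong" comparison is easy to make concrete in a few lines of plain Python (all numbers invented for illustration): no fitting at all, just score two rival models by their summed squared residuals against the same data and keep whichever misses by less.

```python
# Score two deliberately imperfect models of made-up "depth vs layer count"
# data by their summed squared residuals, and keep whichever is less wrong.
depths = [1, 2, 3, 4, 5, 6, 7, 8]
layers = [9.8, 20.1, 30.3, 39.7, 50.2, 59.9, 70.4, 79.8]  # invented data

def ssr(model):
    """Sum of squared residuals for a model over the data."""
    return sum((y - model(x)) ** 2 for x, y in zip(depths, layers))

def model_a(x):
    return 10 * x   # "ten layers per unit depth"

def model_b(x):
    return 45       # "constant layer count" -- the much sillier rival

ssr_a, ssr_b = ssr(model_a), ssr(model_b)
better = "A" if ssr_a < ssr_b else "B"
print(better, round(ssr_a, 2), round(ssr_b, 2))
```

Both models are wrong (model A's residuals are small but not zero), yet one is obviously less wrong; that is the shape of the YEC-vs-standard-chronology comparison in miniature.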

Pharmaceuticals are notoriously declared to work "because" x. When the reality is that no one anywhere has fuck all idea what the drug is doing to affect the disease. All that is known is it affects a pathway or system that is also known to be correlated with a causal direction to the disease.

Indeed. OK, in that case I'm with you.

I sometimes wonder how much of my thinking has been influenced by your thinking.

I agree with you. It's why I like the Cleland article (maybe a mod could move this exchange into the Popper thread?) which points out that falsification doesn't really work in practice - what we do is compare competing models, knowing that all are wrong, but trying to find which is less wrong than the other (or sometimes, which is less wrong over a specific range of phenomena than the other).

Which is the sense in which we can confidently reject YEC - its residuals are huge! And the residuals on the standard chronology are getting better all the time.

It also offers directly falsifiable predictions which is true of paradigms in general I think.

Kuhn made a very, very deep point about how we think, I think.

Is anyone here aware of any flume experiments which might relate to Lake K? In the spirit of "seeking to falsify" it seems that a key question to ask would be "are there ways other than 'annual varving' which could produce layers such as those in Lake K?"

Is anyone here aware of any flume experiments which might relate to Lake K? In the spirit of "seeking to falsify" it seems that a key question to ask would be "are there ways other than 'annual varving' which could produce layers such as those in Lake K?"

Since Lake K is not anything like a flume, there's no point. As has been pointed out so many times before, a flume is so narrow that the presence of the sides strongly influences the behavior. Lake K is not constrained in that manner.

But feel free to use your awesome franoogling skills to find the answer. Which we already know.

"I would never consider my evaluation of his work to be fair minded unless I had actually read his own words." - Dave Hawkins