Lots of people are asking me about Douglas Keenan’s challenge to identify which time series meets a certain criterion. If our betters are as good as they say at identifying signals in temperature time series, challenges Keenan, they ought to be able to tell signal from noise.

There have been many claims of observational evidence for global-warming alarmism. I have argued that all such claims rely on invalid statistical analyses. Some people, though, have asserted that the analyses are valid. Those people assert, in particular, that they can determine, via statistical analysis, whether global temperatures are increasing more than would be reasonably expected by random natural variation. Those people do not present any counter to my argument, but they make their assertions anyway.

In response to that, I am sponsoring a contest: the prize is $100,000. In essence, the prize will be awarded to anyone who can demonstrate, via statistical analysis, that the increase in global temperatures is probably not due to random natural variation.

Keenan asked me for comments on his column before he released it, and I’m sure he won’t mind me telling you what I told him:

I think the offer will be ignored, but it’s a good tactic. You know James Randi? Before he lost his mind he offered a million bucks (or whatever) for whoever could demonstrate psychic abilities under controlled conditions. Some no-names took the challenge and lost, but the big boys sniffed that it was beneath them.

The real reason for their refusal is obvious, as it will be for your challenge. But it will be great fun doing it! It will highlight the main point you made at the end: these people have no idea what they’re doing.

Incidentally, my prediction has already come to pass. Yesterday, a major figure in the doom camp sniffed that Keenan’s challenge had nothing to do with climate. (I was in an email chain where I learned of this.)

You have to honor a man who is willing to put up a choking wad of his own simoleons to back a boast. Spread the word and help Keenan get some well-deserved publicity. (I’d do something similar, but all I could offer is an old lottery ticket that I’m fairly sure is out of the money but which I haven’t yet checked.)

Now randomness. Some folks over at Anthony Watts’s place were discussing the challenge and, with the prime exception of one MattS (intelligent fellow), were misunderstanding randomness. We’ve talked about it many, many, many times, but here it is once again, with respect to Keenan’s challenge.

I have no idea—I didn’t ask, Keenan didn’t explain, and I don’t want to know—how each of the series in his file was generated, but generated they were. Caused to be is another phrase for generated. Some mechanism caused each value. A popular mechanism is called a “pseudo-random number generator”, in which pseudo-random means known. Random, of course, means unknown, and nothing else. There is no such thing as real, objective, or physical randomness.

So this known-number generator (if it was used) made numbers according to a known formula, where the numbers are as determined as death and taxes. One number follows another with perfect predictability—if one knows the algorithm, of course.
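The point is easy to see in a minimal sketch (the constants below are the classic Numerical Recipes LCG values, chosen purely for illustration — not a claim about what Keenan used): knowing the formula and the starting value, every “random” number is perfectly predictable.

```python
# A minimal linear congruential generator (LCG): a "known-number generator".
# These are the classic Numerical Recipes constants, used purely for
# illustration -- not a claim about how Keenan's series were made.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n values of the sequence r_{k+1} = (a*r_k + c) mod m."""
    r, out = seed, []
    for _ in range(n):
        r = (a * r + c) % m
        out.append(r)
    return out

# Same seed, same formula: identical "random" numbers, as determined as
# death and taxes.
run1 = lcg(seed=12345, n=5)
run2 = lcg(seed=12345, n=5)
assert run1 == run2
```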

It appears Keenan used three different algorithms: a base scheme, one which added positive numbers to the base scheme according to some similarly determinative rule, and one which added negative numbers to the base scheme. The base scheme is the known-number generator.

Of course, I’m guessing. I don’t know. But this procedure is certainly common enough under the term simulation. Problem is, too many people (not Keenan) think simulations are semi-magical, claiming they have to be fed with “random numbers.” That makes no sense, because random means unknown, and you can’t feed an algorithm with unknown numbers. More detail is here.

Anyway, this is all beside the point. Keenan asks which of the three types of generated data each series is. Now we’re into the realm of modeling. Modeling? The process of collecting premises which come as close as possible to identifying the causes of the data and which describe our uncertainty in observables.

Now…but, no. That’s all I’ll say. I’ve already given information sufficient to deduce the methods Keenan used, accepting only that he used one of the known generators. Sufficient in theory. Practically? God bless.

I stop because I don’t want the fun to end and because it is beside Keenan’s main point, which is that the methods climate scientists use (and everybody is a climate scientist these days) are crap. Amen to that with bells on. Follow the “many, many, many” link above for why.

Many decades ago, when I was involved in space and defense programs, I wrote a number of Kalman filters employed for the tracking of ballistic objects. Uncertainties in the radar measurements (together with aberrations in the equations of motion of the tracked objects) were represented as sensor noise modeled using a base Gaussian augmented with Markov time-decaying “colored” noise, similar to what some refer to as “random walk”. We knew then that the noise was a proxy for the uncertainties we had in the modeling mathematics — these noise “sources” were brought in to emulate imperfections in what was otherwise rather a poor model of the object kinematics, which were too complex to represent accurately due to high atmospheric interactions, gravitational variation, the pull of Jupiter’s moons, etc. We simply did not have empirical data with which to produce an accurate “model” of the object. Nowhere in the process did we delude ourselves into thinking that “randomness” was a thing. The modified Gaussians were simply a way to cover our embarrassment for our lack of knowledge of object kinematics. In many cases, this was sufficient to produce accurate tracks. In other cases, the filters would diverge mercilessly because, frankly, our kinematic models were missing key physical elements of the actual object motion. These deficiencies were eventually recognized through experiment (wind tunnels, observation, etc.) and the kinematic models were updated to incorporate the improved understanding of object dynamics.
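For readers who haven’t met one, the flavor of such a filter can be sketched in a toy one-dimensional example (all constants illustrative; this is a far cry from the ballistic filters described above). The process-noise term q is exactly the confessed-ignorance proxy the comment describes — a stand-in for unmodeled dynamics, not a claim that the world is “random”.

```python
import random

# Toy one-dimensional Kalman filter tracking a constant quantity through
# noisy readings. All constants are illustrative.
def kalman_track(measurements, q=0.01, r=1.0):
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: variance grows by the ignorance term q
        k = p / (p + r)           # Kalman gain: how much to trust this reading
        x = x + k * (z - x)       # update the estimate toward the measurement
        p = (1 - k) * p           # the updated variance shrinks
        estimates.append(x)
    return estimates

random.seed(0)
truth = 5.0                       # the quantity being tracked
zs = [truth + random.gauss(0, 1) for _ in range(200)]  # noisy sensor readings
est = kalman_track(zs)            # the filtered track hugs the truth far
                                  # better than the raw readings do
```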

My point in all of this is that understanding the underlying physics is where I prefer to start. It is one thing to study data and data trends. But, without some basic knowledge of how the data should “trend” from a knowledge of the physics or chemistry or biology, I find it all simply a mental exercise in futility.

Just remember what Von Neumann said.
Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number — there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method.

John Z. makes a good point — physics is where to start…and that’s where [so-called] climate scientists have started [or think they have]. They believe that, by analyzing the physics of weather & CO2 (& other gases), they can show the observed trend — in real-world data — is consistent with the expectations of physics. I.e., warming–and other measured trends (e.g. ocean acidification)–is consistent with what we’d expect from human activity.

Does anyone really believe that someone’s ability to, or not to, correctly pick out the actual statistical trends in contrived data, then somehow extrapolate that academic exercise to the real world and conclusions drawn from real-world data, will be compelling enough to change anyone’s mind on the subject?

People who don’t buy into the alarmist hoopla will, of course, find this compelling support for their preconceptions…and…those who do buy into the alarmist hoopla will have a slam-dunk easy time of dismissing this out of hand. Next to nobody will have their conclusions changed.

Looks to me that if there’s an entry fee (as suggested by some at Watts’s site), this is a variation of a very old tried-and-true carnival game, with the only certain outcome being that the sponsor will enrich himself with money extracted from the contestants.

Interesting contest. I agree that many top climate researchers will just say it’s below their dignity to participate, which really says they think they are perfect and should not be questioned. Science has really deteriorated in all of this mess. (It also means one should not regard them as an “authority” unless you mean authority in a dictatorial fashion, not someone possessing master skills in some area.)

Ken: Your curmudgeon is showing. A small fee to discourage non-serious entries is not a money making venture. Had he demanded $100 per entry, you might have a point. If he were going for cash, he should have chosen a much higher fee. (The fee is about the price of three Starbucks lattes, depending on the type. Not really all that exorbitant.)

Can we define a “random* process” as one that produces a sequence of numbers whereby the previous numbers in the sequence provide very little insight into the next number in the sequence?

Or perhaps one where the next number in the sequence is difficult to predict without precise knowledge of the process. Or perhaps, even with precise knowledge of the model, but no knowledge of the initial parameters of the pseudo-random number generator.

We can then get into what “very little” and “difficult” actually mean, but that epsilon can be made as small as it needs to be.

Note: random* is not exactly random.

The point is that while the process that generates these scenarios may not be random, there is still sufficient unpredictability to make this an interesting exercise.

The fee is also necessary to prevent someone from emailing every possible arrangement of answers until they win. The rules aren’t very lawyer-y, so without a clause about such an outcome a fee is a good way to save the inbox.
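The arithmetic bears this out. Assuming each series must be labeled one of three ways (trend up, trend down, no trend — my reading of the contest description quoted later in the thread), and assuming, hypothetically, that a winning entry must get at least 900 of the 1000 labels right, a quick sketch shows why brute force by mail needs a fee deterrent:

```python
from fractions import Fraction
from math import comb

# Hypothetical framing: three possible labels per series, win threshold of
# 900 correct out of 1000. Both numbers are assumptions for illustration.
arrangements = 3 ** 1000    # distinct complete answer sheets one could mail in

# Exact chance that uniform guessing gets 900 or more labels correct:
hits = sum(comb(1000, k) * 2 ** (1000 - k) for k in range(900, 1001))
p_win_by_luck = Fraction(hits, 3 ** 1000)

# The probability of winning by luck is astronomically small, but the
# number of possible entries is astronomically large -- hence the fee.
assert p_win_by_luck < Fraction(1, 10 ** 250)
```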

You and I know that a notoriously learning-disabled abacus will not become more intelligent just because it does faster arithmetic.

So it is futile to claim that IBM Watson is more intelligent than anything else, including the humans it was merely tested against on “Jeopardy!”.

Two days ago the announcement came that IBM’s Watson Forecasts Trends (IBM News room, Armonk), but their news page has no contact email, so I post here. Could someone in the U.S. tell the IBM Watson people that a tiny little $100-grand challenge is waiting for the discovery of a trend?

“there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method.”

I doubt Von Neumann wrote the above, and if he did, he was wrong. A mathematical method can only produce a series of numbers that appear random. It cannot produce actual random numbers. Here is a simple pseudo-random number algorithm (the simplest I could think of that did anything slightly useful), from an engineering book I wrote back in the ’80s:

R = INT((B x R + C ) / D)

This might produce a sequence such as –

7 6 1 3 4 0

Or a much larger sequence if you use larger numbers or dispense with the use of integers.

There are two things to note about this randomness that are extremely important:

(a) The sequence is not random because the sequence is generated mechanistically.

(b) The pattern will inevitably repeat exactly, which is not what an actual random sequence is supposed to do.
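Point (b) is easy to demonstrate. Here is a sketch in the spirit of the quoted formula, but using the standard modulus form of a linear congruential generator (the book’s constants B, C, D aren’t given, so the values below are illustrative):

```python
# A small generator in the spirit of the quoted formula, using the standard
# modulus form of an LCG. With only d = 16 possible states, the pattern
# must repeat within 16 steps -- note (b) in action. Constants illustrative.
def tiny_prng(r, b=5, c=3, d=16):
    return (b * r + c) % d

seen = {}
r, step = 7, 0
while r not in seen:          # walk the sequence until a state recurs
    seen[r] = step
    r = tiny_prng(r)
    step += 1
period = step - seen[r]       # these constants happen to use all 16 states
assert period <= 16
```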

Doug M, your talk about random numbers brings to mind Gregory Chaitin’s book “The Unknowable” in which he talks about program complexity and randomness. I’ll quote from his book in the section, ” Randomness of finite and infinite bit strings” (pp 97-98):
“First of all, I should say that for finite strings, randomness is a matter of degree, there’s no sharp cutoff. But for infinite strings [yes, he said infinite] it’s either random or non-random”
and goes on to give an example of a random infinite string defined by the Halting probability.
You look at it and then explain it to me… :>)

The word “pseudo” is used to reflect that those numbers are generated by a computational algorithm, such as the one in Will N’s comment. “Randomness” is a kind of uncertainty that is assessed by a probability distribution. So the word “random” is tied to the distribution of the generated numbers (of long sequences).

“The file Series1000.txt contains 1000 time series. Each series has length 135: the same as that of the most commonly studied series of global temperatures. The 1000 series were generated as follows. First, 1000 random series were obtained (via trendless statistical models fit for global temperatures). Then, some randomly-chosen series had a trend added to them. Some trends were positive; others were negative. Some trends were deterministic; some were not. Each individual trend was 1°C/century (in magnitude)—which is greater than the trend claimed for global temperatures.”
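The quoted procedure can be sketched as follows. The trendless model here is a simple AR(1) stand-in — NOT Keenan’s actual statistical models — and every parameter (the persistence phi, the noise scale, the proportion of series given trends, and using deterministic trends only) is my invention for illustration:

```python
import random

# Hedged sketch of the quoted generation procedure. AR(1) is a stand-in
# trendless model; all parameters are invented for illustration.
def make_series(rng, n=135, phi=0.9, sigma=0.1, trend=0.0):
    x, series = 0.0, []
    for t in range(n):
        x = phi * x + rng.gauss(0, sigma)        # trendless noise model
        series.append(x + trend * t / 100.0)     # optional 1 C/century trend
    return series

rng = random.Random(1)
series1000 = []
for _ in range(1000):
    trend = rng.choice([0.0, 0.0, 1.0, -1.0])    # some get a trend, some don't
    series1000.append(make_series(rng, trend=trend))
assert len(series1000) == 1000 and all(len(s) == 135 for s in series1000)
```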

Via trendless statistical models fit for global temperatures? Not sure what Keenan means by “fit for,” though I have seen several models fit to the global temperature data in time series textbooks. Those different models, without or with a trend, produce similar short term predictions, but different long-term results.

There are many, if not an infinite number of, trendless statistical models. Within each one of the trendless models, e.g., ARMA(0,q), for each possible q value, there is an infinite number of possible values for the coefficients and white noise values.

Even if all of the 1,000 time series are generated from the same random walk (with a nondeterministic trend) model, with a certain test statistic, one might at best expect to guess 950 of the series correctly (yes, p-value).

I know why I won’t participate in the contest, but my students are obviously in a different life stage. The prize of $100,000 might entice them to give it a try. So I shall pass along this contest to the students in my time series class.

I’m going to make a general comment about some of the commentators on this blog.

Quite often they offer opinions, but not references or logical argument to back up their assertions. More importantly, they insult other commentators, even though this does not add to the force of their argument. Being a bear of very little brain, I’m quite happy to be corrected when I goof, or when there’s a better explanation than I have to offer. But I choose not to respond when the correction is an insult, whether it be correct or not.

It seems to me that being nasty is a concomitant of communicating via the internet. (I’m just an old geezer who’s used to the better manners of the 40’s and 50’s.) And even though winning arguments seems to be the only goal of the commentators referred to above, insults are neither necessary nor sufficient to do so.

For a model of what gracious discourse on the net should be, I recommend the Biologos Forum (https://discourse.biologos.org), in which the topics–Evolution and Religion–are much more likely to generate heat than the ones on this blog. Yet there are no nasty comments, and lots of informative exchange. One doesn’t win arguments, but one learns.

I’m not sure the remarks above will change the ways of those to whom they are addressed; perhaps there are lots of ego-boosting mechanisms involved that make it difficult to change. But I’ve got it off my chest, and as the Bible says,

“Brethren, and if a man be overtaken in any fault, you, who are spiritual, instruct such a one in the spirit of meekness, considering thyself, lest thou also be tempted” (Galatians 6:1).

There is no such thing as gracious discourse on the internet. That’s because people believe the internet is an egalitarian place. So my idea, which I’ve thought about for 10 minutes, is as good as some other guy’s ideas, even if he has thought about the same problem for 30 years. And when people are shown to be wrong, they get very upset. Because it’s really important that one not have one’s ego dented, even if you’re an anonymous pseudonym. Kids these days, eh? Don’t worry about people’s feelings, and focus on interesting ideas.

Once upon a time when I needed random numbers I wrote a machine code interrupt routine that incremented a memory address from the PC’s real time clock chip. It was a low priority interrupt so other PC interrupts always caused the increment frequency to be slightly irregular. The random numbers were the last few digits of the ever-changing number in the memory address.
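A loose modern analogue of that trick can be sketched in a few lines: sample the low-order digits of a high-resolution clock, whose exact values depend on unpredictable OS scheduling jitter. Not cryptographic quality — just digits unknown to us, which is the point:

```python
import time

# Sample the low-order decimal digit of a nanosecond clock between short
# sleeps; scheduler jitter makes the exact value unknowable in advance.
# This is an illustration of the interrupt trick, not a secure generator.
def clock_digits(n):
    digits = []
    for _ in range(n):
        time.sleep(0.001)                   # let the scheduler introduce jitter
        digits.append(time.perf_counter_ns() % 10)
    return digits

sample = clock_digits(20)
assert all(0 <= d <= 9 for d in sample)
```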

I agree with much of what you said in your comment about blog commenters, and I generally attempt to play the ball and not the man.

But just a few days ago I encountered a situation in which I believe it was appropriate to play the man as well as the ball: the man, Christopher Monckton, has been duping quite a few of those who are skeptical about the proposition that catastrophe will result from our enriching the atmosphere with carbon dioxide, thereby making them look stupid, and few people have seemed able to recognize that.

Last January, he, our host, and a couple of other guys published a paper called “Why Models Run Hot: Results from an Irreducibly Simple Climate Model.” The paper invoked theories of electrical circuits as well as linear and control systems where it was apparent that the authors had wandered in over their depth. For example, they seemed to think that their equation would compute the output of a non-linear system if only appropriate values were selected for the “transience fraction” in Table 2. (I say “seemed” because Lord Monckton’s writing on technical subjects is usually a study in latent ambiguity.) Initially, I did play only the ball. In fact, although I criticized certain aspects of the paper, I mainly gave a little tutorial on relevant aspects of those disciplines so that the authors could on their own draw the right conclusions and amend their paper accordingly.

I was taken aback by the response. Not only did the authors still fail to grasp what hundreds of thousands of undergraduate engineering students routinely master every year, but Lord Monckton unleashed a welter of bluster, irrelevance, illogic, and error when he should have been addressing the errors I had pointed out and the question I raised about the provenance of his Table 2’s contents. Moreover, despite apparently feeling free to venture opinions, almost none of the commenters betrayed any knowledge of the relevant subject.

So a few days ago I warned readers not to rely on what Lord Monckton writes, because, for example, his citations don’t support what he contends they do, and I replied to his characteristically disingenuous responses by observing that they were characteristically disingenuous.

In doing so I played the man as well as the ball, and in appropriate cases I am likely to do so again.

Will: And of course no one else has ever thought about anything for 30 years unless it agrees with your idea. What was that about an egalitarian place? Let’s face it—you are the most insistent commenter on being RIGHT and insulting all those who disagree. (I’d do a comparison to other past commenters, but I’d end up in moderation. What you are doing is plain and simple trolling—insulting people because they don’t agree. You’re no different, yet you condemn other trolls.)

Gary in Erko: That is very clever. I wondered how one could generate something even close to random in numbers using a computer program.

Rich: You’re right about not telling you why. However, the original hypothesis was CO2 was warming the world. First and foremost, you must prove the world is warming. Everything else comes second.

Joe Born: I do understand. Monckton is the Al Gore of the skeptic world. Yes, he writes papers (which is beyond Al’s capability), but he has all the arrogance and condescension that Gore has. It stretches the patience of a saint to remain civil to him for very long if you disagree with what he said. I don’t know that I have ever read an actual “discussion” with him—he preaches, not discusses.

Joe Born, that was a kind and gracious reply. In the same spirit let me offer a few suggestions. If you want to convince someone with opposing views, it may be more effective to offer specific rather than general criticisms. When you say
“Not only did the authors still fail to grasp what hundreds of thousands of undergraduate engineering students routinely master every year, but Lord Monckton unleashed a welter of bluster, irrelevance, illogic, and error when he should have been addressing…”
I keep wondering what it is that undergraduate engineering students routinely master. And even one or two examples of Lord Monckton’s bluster, etc., would have been convincing.
Again, let me stress that I’m not trying to be contentious but helpful.

I came to my own position as a climate change skeptic not from reading blog material but from Richard Lindzen’s 40 page summary of his thoughts on the subject and reading about Sally ??? and Soon’s work and reading Singer’s papers.

I wouldn’t trust myself to come to a conclusion on any issue after reading only a blog post, and even less likely only a blog comment, since there isn’t enough space to make a fully coherent argument. But that’s me.

Sheri, you may be right but Doug Keenan says:
“In essence, the prize will be awarded to anyone who can demonstrate, via statistical analysis, that the increase in global temperatures is probably not due to random natural variation.”
So he’s talking about the causes of warming, not merely the fact of it. But his challenge, it seems to me, is only about establishing that a warming trend exists.
I find his presentation a little opaque.

You’re exactly right about giving specifics, which my comment above was short on—although I did mention the authors’ non-linear-system claim. But in the case above my purpose wasn’t to convince you of my position on the control-systems theory, etc.; it was to give an illustration of a situation in which it is appropriate to direct remarks at the person as well as the subject. The situation is that a great many people who are unable (for perfectly good reasons; we aren’t born knowing everything) to grasp the subject matter nonetheless rely on his conclusions. So, to the (admittedly, very limited) extent that I could, I wanted to alert readers to the character of the person in whom they’re placing their trust.

If you’re instead interested in the technical issues to which I referred, you may want to read my posts here and here as well as my comments in the thread spawned by the post here.

I’ve inferred from comments that some of those posts’ content is tough sledding if you haven’t dealt much with electrical circuits, linear systems, or feedback theory. But I assure you that it’s all undergraduate-level stuff for engineers (although my experience is that many who no longer deal with it will long since have forgotten it).

In the comments I made in the thread spawned by the last post mentioned above, you will see numerous specifics of mine about, e.g., the meaning of the hyperbola in his paper’s Fig. 5 and the results of loop gains near unity. In response, rather than address the technical issues I set forth with specificity, Lord Monckton gave us this exhibition of mere conclusory statements (Joe Born is wrong) and empty name-dropping.

As to his conclusory statement about my not comprehending the distinctions among the various basic feedback concepts, there is no nomenclature unanimity in the field, but I’ll repeat here what I said there (and you may want to refer in this connection to Fig. 1 of the first post I mentioned in my previous comment above):

“Frankly, I’m more concerned with substance than with the names people give to things, but I know the terms that experts I’ve spoken with use. They use “open-loop gain” to refer to the gain g that prevails before feedback is applied; without feedback, the relationship between the stimulus x and the response y is y = g x. When a feedback element of gain f is added, the stimulus x in the open-loop equation is replaced by the sum of the stimulus x and the feedback fy, so y = (x + fy) g, implying that y = gx/(1-fg): the resultant “closed-loop gain” h such that y = hx is given by h = g/(1-fg).

“And they use “loop gain” for the gain fg in the path from the summing junction’s output port to its feedback input port. You can think of the loop gain as the ratio that the feedback element’s output would bear to the summing junction’s output if the path from the feedback element to the summing junction were severed. When loop gain fg is unity, closed-loop gain h = g/(1-fg) is infinite. The “few math courses” that Lord Monckton disparages are more than adequate for derivation of the basic equilibrium feedback equation. Anyone who doubts that there are those who share my view of the nomenclature need only Google ‘loop gain.’”
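The algebra in that passage is easy to check numerically. A few illustrative values (my own, not from the paper) show the closed-loop gain h = g/(1-fg) blowing up as the loop gain fg approaches unity:

```python
# Numerical check of the quoted feedback algebra: closed-loop gain
# h = g / (1 - f*g) grows without bound as loop gain f*g approaches unity.
# The values of g and f below are illustrative only.
def closed_loop_gain(g, f):
    return g / (1 - f * g)

g = 2.0
gains = {f: closed_loop_gain(g, f) for f in (0.1, 0.4, 0.49)}
# loop gain 0.2 -> h = 2.5; loop gain 0.8 -> h = 10; loop gain 0.98 -> h = 100
assert abs(gains[0.1] - 2.5) < 1e-9
assert abs(gains[0.49] - 100.0) < 1e-9
```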

You will see that his disparagement of my grasp of the concepts is just bluster. And you’ll start to see that there is very little relationship between the confidence with which he says something and the likelihood that it’s true.

Sadly the word Trolling is now routinely used by anyone who hears ideas or arguments they don’t want to hear because they don’t want to address them. It’s no different from the politically correct complaint that one should never be exposed to ideas that might cause distress, because such exposure is disguised micro-aggression.

Now attack and insult is called “discussing” by the person who refuses to listen to anyone but themselves. Fascinating. Language is such a fluid thing, except when one insists it’s not. Yes, this is drivel and means nothing. Since the new idea is anyone who disagrees has their rational brain out of order, there appears to be no reason to even attempt to discuss or address anything. Just ramble, annoy, etc. It’s the new way—the only way. You are absolutely RIGHT, RIGHT, RIGHT. There, now I feel better and I know you do too. Are we all happy now in this new insane world where discussion is just insults and no one’s opinion counts if the anointed one disagrees? Yes! We are happy now! Happy, happy, happy. 🙂 🙂 🙂

Fortunately humans are not equivalent to Pavlov’s dogs, or rats in laboratory experiments that self-administer cocaine. When an associate once complained to me that he could never have an uninterrupted meal without phone calls, I suggested that perhaps a little impulse control would be sufficient. Although I suspect his yelling at the callers, and then complaining loudly about it the next day, really gave him great pleasure.

There has been an update to the Contest. Briefly, the 1000 series have been regenerated. The reason is discussed at http://www.informath.org/Contest1000.htm#Notes
Everyone who plans to enter the Contest should ensure that they have the regenerated series.

First, the quality of peer review in general seems to be declining in the past couple of decades. Despite my best efforts (I am not a climate scientist), I waste more intellectual cycle time reading climate science papers than I think is warranted. Unfortunately, when a claim is made based on an abstract of a paper, I am not yet able to resist wanting to see the full paper and look at how the claim was supported. As far as I can tell, some climate science journals do not include anyone from mainstream mathematics or physics in the peer review pool. It is often the case that the claims made in climate science papers need much more caveating and discussion of alternative interpretations than is given. I am not a statistician, but I have had the mixed blessing of working with many, and I believe a working statistician would not have recommended many of the climate science papers for publication. The same often is the case for physics and general mathematics contained in climate science papers. As far as I can tell, training in statistics for climate scientists sometimes may mean taking a math course or two and then reading the documentation for SAS, SPSS, R, or the Matlab statistics toolbox. The authors often seem eager to jump into grinding numbers and forget the most important step, thinking first (and second and third).

Off the soapbox now and back to the point. As a youth, I had some experience with the drunkard’s walk and later in life tried walking meditation, all of which might be why random walks are interesting. About a year ago, I wondered if the (pick your favorite) temperature series could be the result of a random walk. After a little thought I had a few-line Matlab code that generated random (well, kind of pseudo random really, if one insists on being correct) walks over the 134-year period for which I had GISS temps. One has to fiddle a little with what step size and interval suits the walk and whether or not to let p = something other than 0.5 but in the end one can generate lots of random walks. (I think this is what Keenan means by “fit for”. One needs a step size and interval that is more or less commensurate with the magnitude of the time series. For example, a temperature step size of 5 degrees C is too big, a time step size of 1 week is too small, etc.) In fact, after a bit of thought I was able to approach almost arbitrarily closely to the sequence of annual temperatures as the number of trials increased. I blithely measured closeness by the sum of squares of the differences between the walk points and the temp series at each year. As I recall, the sum of squares reduced logarithmically with number of trials. Then a little thought about the error bars on the “global average temperatures” convinced me that I only needed a few trials to get within the error bar level of closeness. More thought about how I really felt about what the actual error bars on “global average temperature” should be and I was convinced the whole exercise had been pointless. 
Then I found a nice little paper by A. H. Gordon, “Global Warming as a Manifestation of a Random Walk,” Journal of Climate, 4:589-597, 1991, and noticed a routine called vratiotest in Matlab that assesses the null hypothesis (of local infamy) of a random walk in a univariate time series, which I subsequently misused mightily. After swimming with these sharks a little I decided that, as Quint (from Jaws) said he would never put on a life jacket again, I would never spend much time fiddling with global average temperature time series again. Thankfully it is not my field.
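The walk-fitting experiment described above is easy to reproduce in miniature. The sketch below uses a made-up stand-in series — NOT the GISS record — and an arbitrary fixed step size; the point is only that the best fit over more trials can never be worse than over fewer:

```python
import random

# Miniature walk-fitting experiment on a fake "anomaly" series. Step size,
# series shape, and trial counts are all arbitrary illustrations.
def best_walk_sse(target, trials, step=0.05):
    rng = random.Random(0)          # fresh generator: runs are reproducible
    best = float("inf")
    for _ in range(trials):
        x, sse = 0.0, 0.0
        for y in target:
            x += rng.choice((-step, step))   # p = 0.5 up/down random walk
            sse += (x - y) ** 2              # closeness as sum of squares
        best = min(best, sse)
    return best

rng = random.Random(42)
target = [0.005 * t + rng.gauss(0, 0.1) for t in range(134)]  # fake anomalies
sse_few, sse_many = best_walk_sse(target, 10), best_walk_sse(target, 200)
assert sse_many <= sse_few    # more trials can only get closer (or tie)
```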

Finally, I think Doug M’s comments are spot on the right direction. Clarity on randomness would be a good thing. One could argue that any finite sequence of numbers contains NO information about the next number in the sequence – without some additional assumption(s) or information. I can give an example. I am writing down a sequence of numbers and here they are: 1,2,3,1,2,3,1,2,3. Now the question is what is the next number I will write? Or what is the most likely number? Or what is the probability distribution over some set of numbers for the next number I will write. The answer is there is no way of telling what I will write down until I have written it. I thought the next number that would be nice to write would be pi but I could change my mind. I could be using an algorithm that says “Wait until someone guesses a number, then write down a different one.” Without some additional assumption or information on my process of writing down numbers, or making some kind of deal with me, the initial sequence of numbers contains no information about what comes next. If someone actually knew the crazy process(es) I might be using, they likely would refrain from guessing altogether. Assumptions don’t help either, we need to get to what Dr. Briggs calls the causes, I think.

Now finally (I mean it this time), a thought about pseudo randomness. As a result of a conversation in a different post, I was thinking about pi. It seemed to me that I could make a nice random number generator by computing pi on a big computer with lots of memory. I would just keep computing pi out to more and more digits, stopping when I had enough and storing intermediate results so I could start over from where I stopped. This way I would have a sequence of digits after the “3.” in pi, e.g. 14159265358979323846264338327950288419716939937510582097494459230……… That never ended and did not repeat, a real transcendental number. So I just pick the last digit I used and start from there in my random number sequence. When I needed more digits I could just calculate pi out a little further. It is totally “random” – there is no way to predict what the next one I will calculate will be. Sounds great. But after thinking a little, I would sure not want to use this sequence for encryption, because anyone could check it to find out that it is pi and then they have the whole sequence. While the sequence itself may not contain information enabling its prediction, knowledge of the “cause” nails it.

FAH, what a fine comment! To repeat remarks made in a previous comment on this post, I think those interested in randomness and its relation to computer algorithms would get a lot out of reading Chaitin’s book “The Unknowable”. He’s an IBM Computer/Math/Philosopher(?) geek with a big reputation, and he addresses what might best define a random string by the complexity of the computer program required to generate that string.

This way I would have a sequence of digits after the “3.” in pi, e.g. 14159265358979323846264338327950288419716939937510582097494459230……… That never ended and did not repeat, a real transcendental number. .. While the sequence itself may not contain information enabling its prediction, knowledge of the “cause” nails it.

Many pseudorandom generators have the pitfall of allowing the next number to be predicted. For example, the Mersenne Twister (particularly the MT19937 version), which has the enormous period of 2^19937-1, has predictable outputs after one observes 624 of them in sequence.

Pi is going to be a very poor random number generator, because you always end up with an identical sequence every time you run your generator. Hence it’s actually useless. Any useful or semi-useful algorithm is therefore going to require seed values that result in the generation of different sequences.

(But you might conceivably use Pi as your seed for your main algorithm, although at the end of the day there is no way around the deterministic nature of the number sequence.)
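Both points — a pi-based generator replaying the same sequence every run, and seeds merely relocating the determinism — can be seen with Python’s own generator (which, incidentally, is the MT19937 Mersenne Twister mentioned above):

```python
import random

# Same seed, same stream, every run -- exactly the pi problem: a fixed
# starting point replays an identical sequence. A different seed gives a
# different stream, but the determinism never goes away.
a = random.Random(2718)
b = random.Random(2718)
c = random.Random(3141)
sa = [a.random() for _ in range(5)]
sb = [b.random() for _ in range(5)]
sc = [c.random() for _ in range(5)]
assert sa == sb   # identical seeds reproduce the "random" numbers exactly
assert sa != sc   # a new seed only relocates the start of the determinism
```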

Imagine a scenario where, in a double-blind study, everyone figures out that the first three patients get the placebo, the next two the actual medicine, and so on…