Posted
by
CmdrTaco
on Thursday January 06, 2011 @09:50AM
from the but-magic-is-fun dept.

thomst writes "The New York Times has an article (cookies and free subscription required) about the protests generated by The Journal of Personality and Social Psychology's decision to accept for publication later this year an article (PDF format) on precognition (the Times erroneously calls it ESP). Complaints center around the peer reviewers, none of whom is an expert in statistical analysis."

Why is that erroneous? Precognition and premonition are two facets of extrasensory perception. From its Wikipedia article [wikipedia.org]:

Extrasensory perception (ESP) involves reception of information not gained through the recognized physical senses but sensed with the mind. The term was coined by Sir Richard Burton,[citation needed] and adopted by Duke University psychologist J. B. Rhine to denote psychic abilities such as telepathy and clairvoyance, and their trans-temporal operation as precognition or retrocognition.

So if you are dealing with any of the above, or anything external to our normal senses, I think that qualifies as ESP, and calling it ESP is fair. Sure, that acronym has a lot of baggage, but from the study itself:

... this is an experiment that tests for ESP (Extrasensory Perception).

That's what the test subjects were told, and I don't think the article is erroneous.

I don't understand this. If the researchers did a proper experiment, respected the rules, followed proper procedures, and did a proper analysis of the data they collected in a scientific way, why is it a problem to publish?
So now we should bar publications that don't agree with our general conception? If it was done according to the specific guidelines set by the scientific community for how to do things, screw them. Science is not a democratic process, nor should it be politically correct. Science is science.

From the summary, the implication is that the data analysis was not proper - or at least, not shown to be proper. Since the claimed effect is a fairly small artifact only detectable by sophisticated statistics, it seems reasonable that the reviewers should include those who have a deep understanding of such statistics - which, it is claimed, they did not.

They failed one of your hurdles - they didn't do a proper analysis of their data. Basically, their data shows that the chance of pre-cog existing, compared to it not existing, is extremely minimal (actually quite strongly in favour of it not existing). But they specifically chose only certain analyses to conclude that it *did* exist.

There are many rebuttals at the moment, most linked to in these comments, that you can read, but basically - to remove all statistical jargon - they didn't bother to take account of how probable their data was by pure chance. Their "error margin" is actually vastly larger than the effect in their data, so they can't really make any firm conclusions, and certainly NOT in the direction they did. Statistics is a dangerous field, and whoever wrote and reviewed that paper didn't have a DEEP grasp of it, just a passing one.

If you calculate the *chance* that their paper is correct versus their paper being absolute nonsense, not even taking into account anything to do with their methods or that their data might be biased, their data can ONLY mathematically support a vague conclusion that their paper is nonsense. To do the test properly and get a statistically significant result (not even a *conclusive* result, just one that people will go "Oh, that's odd") they would have to do 20 times as many experiments (and then prove that they were fair, unbiased etc.).

It's like rolling three sixes on a die and concluding that the particular die you rolled can only possibly roll a six. It's nearly as bad as claiming that so can every other die on the planet.

In one sense, it is a historically familiar pattern. For more than a century, researchers have conducted hundreds of tests to detect ESP, telekinesis and other such things, and when such studies have surfaced, skeptics have been quick to shoot holes in them.

I always thought the hole-shooting was an essential step in the scientific process. If you can't patch up the holes, you aren't really doing "science."

In science, you tell a story. Then everyone says, "no, that's wrong because..." Then you say, "I'm afraid I'm right, because..." It is an imperfect process because people have biases that are hard to overcome. But, if the empirical evidence is strong enough, you will overcome these imperfections. In the end, you build a consensus by presenting enough evidence that no one can argue with. I'm not sure what is more democratic than that.

I don't understand this. If the researchers did a proper experiment, respected the rules, followed proper procedures, and did a proper analysis of the data they collected in a scientific way, why is it a problem to publish?

Shouldn't be any. However, this case isn't relevant to your question, since they didn't do a proper analysis. (Or at least that's what the rebuttals say; I haven't read the paper.)

One of the most glaring problems is that they (reportedly) went fishing for statistical significance in their results, without making the correction that rigor requires when you do that. When you find significance at the traditional 95% confidence level, there's a 5% chance that you're finding meaning in noise. If you test for 20 different effects and then go fishing to see whether *any* of them show significance at the 95% confidence level, you have a 1 - 0.95^20 ≈ 64% chance of finding "significance" in noise.

There are simple ways to fix that problem, e.g. Tukey's HSD ("honestly significant difference") test. Apparently the authors were either ignorant or dishonest, and the reviewers were either ignorant, dishonest, or careless.
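To see why uncorrected multiple testing is such a trap, here is a minimal sketch (in Python, using the 20-test example from the comment above) of the familywise error rate and the simpler Bonferroni correction; Tukey's HSD would be the tool of choice specifically for comparing group means.

```python
# Familywise error rate: the probability of at least one false positive
# when testing m independent hypotheses, each at significance level alpha.
alpha = 0.05
m = 20

fwer = 1 - (1 - alpha) ** m
print(f"Chance of at least one spurious 'significant' result: {fwer:.0%}")  # 64%

# Bonferroni correction: test each hypothesis at alpha/m instead,
# which caps the familywise error rate at roughly alpha.
corrected_alpha = alpha / m
fwer_corrected = 1 - (1 - corrected_alpha) ** m
print(f"Per-test threshold after correction: {corrected_alpha}")
print(f"Familywise error rate after correction: {fwer_corrected:.1%}")  # 4.9%
```

Bonferroni is deliberately conservative; the point is simply that *some* correction is mandatory once you test many effects at once.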

Peer review is not an indication of democracy. It's an indication that your work was found to be fault free by people of similar qualification.
Peer reviewers don't vote, and they don't impose personal agendas on how they review science.
All they do is check for inconsistencies, and flaws.
(I am talking in the broader sense)

Well we don't know any current mechanism by which people can perceive the future (assuming that everyday extrapolation is not included, I guess - there's no mystery about the fact that I can tell a flying ball will soon hit the ground), so wouldn't that mean that if evidence is found for an ability to sense the future, that implies ESP under your definition?

Why do you remember the past and not the future? "Hasn't happened yet" is a perception, not a physical characteristic of the world. The world just exists in 4 dimensions; it doesn't change. Living things perceive a direction of time.

None at all. It is a neat model, and physics seems to be completely explained by it ("seems", as there are still fundamental gaps in the current models), but it does not take into account human beings. It is quite possible that the directionality of time and the impossibility of predicting the future are actually something brought into this universe by human beings, or by life itself.

The 4-dimensional model only covers dead matter, and it is just a model, not reality.

It's a compelling model that addresses a lot of tricky questions very neatly.

For instance, if you combine this with many-worlds theory, you can eliminate the paradox of free will - that is, when I make a decision, what internal process prompted me to make that decision? And what prompted that? And so on.

If you think of the universe as a static object that at every instant in time (or "the fourth dimension," if you prefer) branches off into multiple possible realities, then you can think of yourself as having made every possible decision, but being able to remember only one, because the state of your brain in this particular branch of the decision tree is only consistent with one past.

It works the same way as the anthropic principle. Why is the universe perfect for supporting life? Because if it wasn't, you wouldn't have asked. Why did I make that particular decision? Because you're thinking about the decision from the perspective of a universe in which that particular decision was made. This also explains why consciousness appears to have a special place in quantum collapse. It's really an illusion, and there is no "collapse" - you have just chosen a particular viewpoint that is only consistent with one specific observation.

Firstly, that bears little relation to what I said. Even taking your post at face value, the fact is that we "living things" still do perceive the past but not the future with the senses currently known by science - if it is discovered that we can also perceive the future, it will be above and beyond our current sensory experience, hence ESP. But anyway, we're really just arguing semantics here.

Entropy is the observation that the universe becomes more disordered with time, establishing a measurable direction for time's arrow. Ultimately the heat death of the universe leaves all matter in the same state at the same temperature, completely disordered. Memory is the reverse - disordered in the past, more state in the future, with ultimately a complete memory of the entire history of the universe available to some future super-being. If entropy were the cause of our perception of moving through time...

Here's an interesting puzzle for you then: Define the mark at which it stops being sufficiently advanced extrapolation and starts being "special" precognition. I mean I know all kinds of interesting things get processed as background tasks in my head, often ones I'm not aware of until I'm walking along and get hit with an "AHA!" moment where I suddenly have an algorithm or a complicated analysis of song lyrics or something of that nature appear full-formed in my conscious mind. I imagine if I could get i

For those who haven't seen it, here's a pretty sharp takedown of this paper, as well as some notes on statistical significance in social sciences in general: www.ruudwetzels.com/articles/Wagenmakersetal_subm.pdf

Ouch. Taken down by two Bayesian tests on whether it's more likely that the paper is true or not. They didn't even need to get out of bed or dig out a big maths book to basically disprove the entire premise of the original paper using its own data.
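For the curious, here is a toy version of that sort of Bayesian model comparison - my own sketch in Python, with hypothetical counts, not the actual data from the paper or the rebuttal (which used a default Bayesian t-test, a more sophisticated relative of this): a Bayes factor for k hits in n binary guesses, comparing "hit rate is exactly 50%" against "hit rate is unknown".

```python
from math import comb

def bayes_factor_01(k, n):
    """Bayes factor BF01 for k hits in n binary guesses.

    H0: hit rate is exactly 0.5 (no effect).
    H1: hit rate unknown, uniform prior on [0, 1].
    Values > 1 favour "no effect"; values < 1 favour an effect.
    """
    # Marginal likelihood under H0: plain binomial probability
    m0 = comb(n, k) * 0.5 ** n
    # Marginal likelihood under H1: the integral of C(n,k) p^k (1-p)^(n-k)
    # over a uniform prior on p works out to exactly 1 / (n + 1).
    m1 = 1 / (n + 1)
    return m0 / m1

# Hypothetical example: 531 hits in 1000 trials, i.e. a 53.1% hit rate
print(bayes_factor_01(531, 1000))
```

The punchline of this toy example: a 53.1% hit rate over 1000 trials has a one-sided p-value around 0.03, so a frequentist would call it "significant", yet the Bayes factor still mildly favours the null. That mismatch is roughly the shape of the rebuttal's argument.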

As they hint at in that rebuttal - As a mathematician and someone of a scientific mind, I would just like to see *ONE* good test that conclusively shuts people up. Trouble is, no good test will report a false result and thus you'll never get the psychic / UFO / religion factions to even participate, let alone agree on the method of testing because they would have to accept its findings.

Never dabble in statistics - the experts will roundly berate you and correct you even if you *THINK* you're doing everything right. When PhDs can't even work out things like the Monty Hall Problem properly, you just know it's not something you can throw at an amateur.

Dabbling is fine when the results are good. He had 53%. If he'd had 65%, dabbling would have worked. But dabbling is just the start, and that's not just the nature of peer review, it's the nature of collaboration and a University setting. You find something neat by dabbling, and you walk down the hall to visit someone with more stats experience to get some clarity before you publish.

He had 53%. He knew that if he walked down the hall, he'd get told he had squat. So he didn't walk down the hall.

But isn't that simply applying the maxim: extraordinary claims require extraordinary evidence? I.e. it is not asserting the evidence doesn't exist, only that the evidence isn't strong enough to make the assertion that the hypothesis is true with any confidence. There's a subtle difference.

Not all that subtle, since you can say "the evidence isn't strong enough" about any claim you care to pull out of your 455.

I haven't yet had a chance to read the paper fully (it's 50 or so pages), but if they are actually that confident in their evidence that precognition has been found, the James Randi Foundation has a million dollars [randi.org] waiting for them.

There's a lot more about eliminating random number generators (by using this little guy [araneus.fi]) leading to prediction as well as running more tests where they are asked to pick a preference of two identical images. The most interesting part is that these results seemed to hinge on pornography. The individuals only exhibited this "precognition or premonition" when they were picking erotic images or rewarded with erotic images (albeit from the International Affective Picture System).

The skeptic in me is very pleased and excited about this part of the paper:

Accordingly, the experiments have been designed to be as simple and transparent as possible, drawing participants from the general population, requiring no instrumentation beyond a desktop computer, taking fewer than 30 minutes per session, and requiring statistical analyses no more complex than a t test across sessions or participants.

Grad students across the country: get to work!

But you would have to have the lottery involve some sort of erotic pictures containing the winning numbers in order for this edge to be garnered, which would be impossible unless the lotteries changed how they worked. Maybe play blackjack with a set of Playboy cards? :-)

There are only seventeen types of pornographic troll comments here in the recent history of Slashdot, and most of them are goatse. Therefore I predict that the third pornographic troll comment to appear seven stories from now will be a goatse. Further, there seems to be a rise in a goatse bot that can't spell which is supplanting classical troll phrasing.*

* Statistics not properly peer reviewed. However, I predict that it is too much work for anyone to properly disprove this comment. Therefore I will get so

Haven't read the paper, don't know the sample size involved and experiment details, and don't have the time to run the math right now, but it looks like the numbers shown in your post would still support the null hypothesis that there is no precognition effect, so entirely opposite conclusions should be drawn from the same data.

something to the effect of 'since there are no rich fortune-tellers, we have to assume that they are all fakes.'

That's just brilliant. If there is someone who has precognition working well enough to become quite rich from it, do you think they'll be announcing that publicly, or just quietly getting rich?

The general rule of thumb: those who announce publicly that they've got such an ability working well for them probably don't - they're trying to make money not from the ability itself (which they don't have) but from the people who believe they have it.

I suspect (strongly) that if you have a 3% edge over everyone else, you'll still lose at the lottery. I think the odds in the lottery are so badly tilted against you that even a real, solid 3% edge would leave you a loser.

That's NOT true of any casino games. Take 3% to a casino and you'd leave a millionaire in short order. (No, wait. Actually you'd get bounced in short order and barred from the casino.)

Within the Challenge, this means that at the time your application is submitted and approved, your claim will be considered paranormal for the duration. If, after testing, it is decided that your ability is either scientifically explainable or will be someday, you needn’t worry. If the JREF has agreed to test you, then your claim is paranormal.

I'm sure that if a bunch of scientists came along and said "we have statistically significant evidence of precognition, and not a damn clue how it works", the Randi foundation would jump at the chance to test them.

I don't believe for a second that these people actually do have any legit evidence, but on the off chance that they are for real then this will be a massive breakthrough. Of course, it will be explainable by science in time, and perhaps "supernatural" is a poor choice of word, but if you read through the entire FAQ [randi.org] you'll see that the foundation sound entirely reasonable, and I don't doubt that they would be willing to test something on the basis that it runs quite counter to currently accepted theory.

Their aim (and one that I applaud) seems to be to either disprove paranormal claims, or to prove them in a scientific manner. Sure, doing so will, by definition, destroy their 'paranormal' status, but it could also revolutionise scientific thinking. As I said though, it's probably a moot point, since I see no reason to believe this paper any more than the thousands that came before it.

That assumes perfect precognition. The effects that I saw claimed were more like 3% better than random.

For a number of casino games, with "perfect play" (perfect not including blackjack card counting, even though it should), the casino advantage is usually in that range. In fact, I believe the payout on slot machines is often close to 98%, again depending on the play. (Some of the really pathologically bad bets give the house much better odds.)

At European casinos, if you play red vs. black at roulette, you have an 18/37 chance, that is 48.65%, of winning double your bet.

A 3% better than random precognition rate would let you get rich in a few hours, while still being random enough to be considered pure luck, so you could get filthy rich before they banned you from all the casinos in the world.

That assumes perfect precognition. The effects that I saw claimed were more like 3% better than random.

in the PDF rebuttal posted by 246o1, the authors show that with a mere 53.1% success rate at predicting the color a roulette wheel turns up, a psychic could "bankrupt all casinos on the planet before anybody realized what was going on".
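As a sanity check on that claim, here is a rough sketch in Python. The 53.1% figure comes from the rebuttal; the even-money simplification (ignoring the zero pocket) and the Kelly staking scheme are my own assumptions:

```python
from math import log

# A bettor wins an even-money bet with probability p = 0.531.
p = 0.531

# Kelly criterion: for an even-money bet, the growth-optimal fraction
# of bankroll to stake each time is 2p - 1.
f = 2 * p - 1  # 6.2% of bankroll per bet

# Expected log-growth of the bankroll per bet under Kelly staking
g = p * log(1 + f) + (1 - p) * log(1 - f)

bets_to_double = log(2) / g
print(f"Kelly stake: {f:.1%} of bankroll per bet")
print(f"Bets to double the bankroll, on average: {bets_to_double:.0f}")
```

Under these assumptions the bankroll doubles roughly every 360 bets, and doublings compound: a few thousand spins turns a modest stake into an absurd fortune, which is the rebuttal's point.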

This is a no-brainer. If merely one person in a million could predict the future, teleport, move objects with their mind, or any of the other stuff usually claimed, our society simply couldn't operate as it does. There would be new layers of safeguards on darn near everything.

According to this study, people reacted instinctively (arousal) to what would happen (porn photo) mere seconds beforehand. There's nothing to indicate that anyone could use that information to change the future. Imagine that the random picture game was a choice game: they will choose only one or the other photo, and the arousal/non-arousal will let them know which they'll receive, but not which they'll choose.
They'd have to find someone who is aroused by watching a specific someone else win, and place

I'm not sure they're that confident in their evidence. Nor should they be - they did a study, they publish their findings, lots of other scientists either put down rebuttals (as has already happened), or repeat the study and see if it's accurate enough to be true. That's the way science is supposed to work.

What's not supposed to happen is "Scientist A does an apparently sound study that appears to demonstrate something that scientists B,C, and D consider silly, and scientists B, C, and D stop scientist A's work from ever seeing the light of day."

They shouldn't be confident in their evidence - you are right about the way science should be done, but I think in this case the rebuttals are easy to find because this paper seems to be the result of significance-chasing, with enough simultaneous 'experiments' going on within each individual experiment (i.e., the 'experiment' to determine whether men were affected by Thing Type A was a subset of the experiment of whether anyone was affected by Things Type A, B, C, or D) that it would be surprising if significance *didn't* turn up somewhere.

The challenge I linked allows anyone with a claim of "paranormal ability" to design their own scientific test, to be agreed upon by the foundation. Once agreement is reached, the person is tested under supervision by a panel of impartial experts. Theoretically, they then show an astonishing ability, get a nice bundle of cash, and science is greatly advanced by studying what they can do. So far, though, the result has invariably been that the participant looks slightly sheepish when their 'powers' fail to function.

Basically, a physicist made up some BS and got it published in a journal called Social Text about postmodern cultural studies. He then came out later and revealed the hoax, embarrassing the reviewers and the journal. Lack of intellectual rigour seemed to be the target. This time, it seems to be more specifically aimed at the lack of understanding of statistics in certain subjects.

I'm a Ph.D. candidate in EE, and I'm sometimes invited to review papers for IEEE journals.

I always read the paper carefully at least 3 times, read the important parts of references that are new to me, check all the math and sometimes even reproduce some simpler simulations.

Most reviewers aren't this careful. They either don't have the time or don't have the expertise to find some flaws. Keep in mind that reviewers aren't paid, and are anonymous. Also, the best reviewers are the best researchers, who are usually too busy to review carefully.

You can also have good science rejected by getting three incompetent reviewers. Happened to me several times, the worst one when the program committee attached a note that showed they had not read or understood their own call for papers. I suspect a direct lie to keep me out. Published it later unchanged somewhere else and those people were surprised it got rejected earlier.

In addition to incompetent reviewers, there are also those that are envious or want to steal your ideas. Peer-review is fundamentally broken. One friend who has a PhD in a different CS area thinks 70% of researchers are corrupt, reviewing things positively when they know the authors, no matter the quality and negatively otherwise. Lying in application to research grants is also quite common. The final result is that good researchers have trouble working and often leave research altogether, which may be an explanation for how glacially slow some fields move.

I am a researcher in micro and nanotech, and I can confirm this trend in my field, as well. In fact, one journal in particular has been especially bad in rejecting my articles with some awful refereeing, which I will save for posterity. I am tempted to rub my published articles under the nose of the (probably equally incompetent or corrupt) editor of that journal.

The word precognition should run into the same kind of internal contradiction as the word almighty, at least for future events whose outcome you can still affect. Predicting the future is fine, but actually knowing the future should violate some rule of physics, like going faster than the speed of light.

When I was 10 years old, there was a commercial that said that if you roll a 6-sided die 60 times and correctly guess the results more than 10 times, you are precognitive. Well, I guessed correctly 11 times, so there are your scientific results. Now I move into my new career as a stock market analyst.
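For what it's worth, the arithmetic behind the joke is easy to check. A quick sketch in Python, assuming a fair die and blind guessing:

```python
from math import comb

# How surprising is 11 correct guesses in 60 rolls of a fair die?
# Expected number correct by pure chance is 60 * (1/6) = 10.
p_hit = 1 / 6
n = 60

# P(X >= 11) under the null hypothesis of blind guessing
p_at_least_11 = sum(
    comb(n, k) * p_hit**k * (1 - p_hit) ** (n - k) for k in range(11, n + 1)
)
print(f"P(11 or more correct by chance) = {p_at_least_11:.2f}")
```

The probability comes out around 0.4, i.e. beating the expected score of 10 by one happens nearly half the time with no precognition whatsoever.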

Most people I encounter even have trouble with "postcognition". Yea, I'm looking at you Sarah Palin!

I was thinking, actually, that all of the people (including Palin) who predicted that Obama would turn out to be less, or different, or worse than the imaginary character that millions of people thought they were voting for showed a certain level of cognition; meanwhile, a whole lot of post-election cognition is allowing other people to realize they'd done something silly.

http://science.slashdot.org/story/11/01/02/1244210/Why-Published-Research-Findings-Are-Often-False [slashdot.org]
If you read this piece, it talks about a "decline effect." Basically, research that gets published has positive results to report and conforms to established opinion. However, further study shows that the effects aren't as strong as first reported. It talks about research that showed pre-cognition before, only to be disproven later.
I think it's just fine to publish work in journals on pre-cognition, if the work was done in a scientifically sound way.

Maybe this is a hack. They say he has a sense of humor... Think of this: he did design his studies well, at least the ones that I have read about. The effects of this "time leaking" are fairly small. Perhaps the entire point is to make a point about statistics.

Added bonus? Put the ESP issue to bed. Him doing this, and specifically doing it so publicly and getting it past peer review and publication, ENSURES that these studies are going to be replicated by numerous people for the next several years. That, in and of itself, could produce enough evidence against ESP to really put the issue to bed. :)

Say what you want about his paper, the effects reported are as large as many "well accepted" study results. Which may be the scariest part of all.

That said, I am no ESP believer (that may be obvious), but some of the statements made against it are ridiculous too. "Why aren't people winning the lottery with their perfect precognition?" The effects he is talking about here are on the order of a few percentage points better than random... which is more than the house advantage at many casino games (assuming optimal play).

Across all 100 sessions, participants correctly identified the future position of the erotic pictures significantly more frequently than the 50% hit rate expected by chance: 53.1%

It's pretty easy to come up with significant results in this field: Just do a sufficiently large number of experiments, and you will inevitably come across some significant results. This works for any definition of significance, though of course it's easier for low standards.
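That claim is easy to demonstrate by simulation. A small sketch in Python (the experiment count, trial count, and threshold are arbitrary choices of mine):

```python
import random

random.seed(1)

# Simulate many experiments in which the null is true (pure coin-guessing),
# and count how often a "significant" result appears anyway.
n_experiments = 2000
trials_per_experiment = 100
threshold = 59  # smallest count with one-sided P(X >= k) <= 0.05 for n=100, p=0.5

false_positives = sum(
    sum(random.random() < 0.5 for _ in range(trials_per_experiment)) >= threshold
    for _ in range(n_experiments)
)
rate = false_positives / n_experiments
print(f"Fraction of null experiments flagged 'significant': {rate:.3f}")
```

By construction roughly 5% of these pure-chance experiments clear the significance bar, so running enough of them and reporting only the winners guarantees "positive" findings.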

Last week I was explaining to my wife why it is that her cousin always does so well at the casino. It’s not that she’s “a positive person, which attracts success in gambling” or any other warm-fuzzy explanation. Consider that over a long run of gambling, someone will have probabilistic periods of “good runs” and “bad runs”; likewise, among multiple people statistics dictate that there will be some who have a lifetime of

I looked at the paper. I don't believe the conclusions. But they seem to present all the necessary data so that you can do your own statistical analyses, and they offer to give you the software.

I don't see any reason for people to get "outraged" over this. Publication in a peer reviewed journal is not a guarantee of correctness (in fact, probably the majority of peer reviewed publications contain significant errors), it merely means that the paper meets basic scientific standards in terms of approach and analysis.

If there is an error in the methodology or results, then people should respond by publishing a paper pointing those out. That way, everybody can benefit from the discussion. So, that's where all those people who are "outraged" should channel their energy.

Yup, that pretty much covers it. Liars, or people who are too dim to understand how their own physiology can mess with their perceptions of what's happening to them, near them, or in an incorrectly remembered past moment.

Coming armed with facts and logic is the best way to go - as long as you're not completely blinded by them. Everything has an explanation; it's just a matter of finding it, which can be very difficult. Just because the explanation is not obvious or cannot be found doesn't mean that an event didn't happen.

Atlantis is mentioned by quite a few historical records. We can't find it. Does that mean that it never existed, or does it mean that it existed at one time and we just can't find it?

Sadly, I agree. Psychology is dying. Not because there isn't a lot of potentially interesting research to be done, more because psychologists are increasingly both nepotistic and unable to analyse their way out of a cardboard box. Yet at the same time we should perhaps be thankful for the science of psychology. Imagine what would happen if all those psychologists tried to do something useful in the world! We'd have a lot more Wakefield/Lancet fiascos, for sure. Returning to the point at hand, my advice has