I don’t expect a non-scientist, or even a scientist far outside their area of expertise, to be able to do a detailed analysis of the strengths and weaknesses of a study. That is what peer review is for. However, there are some basic rules of thumb that can give even a lay person a rough idea of how seriously they should take a study. Always ask at least the following questions:

Is the study controlled in some way? Was the treatment group compared to a control group, or was the alleged effect compared to some baseline?

Is the study blinded? Were the primary measurements or assessments performed by someone who was blinded to whether or not the alleged effect is supposed to be present?

Are the outcomes being measured subjective or objective? How are they being measured? What do they really mean?

How large is the study? Studies with few subjects or measurements per group (fewer than 50 is a rough rule of thumb) are considered small and tend to be unreliable.

Is the study an observation or an experiment? Are they just looking for some correlation (in which it is difficult to make statements about cause and effect), or are they controlling for variables and isolating the one factor of interest?

What is the reaction of the scientific community to the study? Are experts generally critical or excited about the results?
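The “how large is the study” question can be made concrete with a quick simulation. This is a toy sketch, not from the article: the sample sizes, effect size, and the `simulate_study` helper are all invented for illustration. It shows how much more the observed result of a small study swings around the true effect than that of a large one:

```python
import random
import statistics

random.seed(0)

def simulate_study(n, true_effect=0.3):
    """One two-group study: return the observed difference between group means."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(true_effect, 1) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

def spread(n, trials=2000):
    """How much the observed effect varies across many repeated studies."""
    return statistics.stdev(simulate_study(n) for _ in range(trials))

small = spread(20)    # 20 subjects per group: a "small" study
large = spread(200)   # 200 subjects per group

print(f"spread of estimates, n=20:  {small:.2f}")
print(f"spread of estimates, n=200: {large:.2f}")
```

With only 20 subjects per group, a single study of a true 0.3 effect routinely reports double that effect, or nothing, or even a reversed effect; with 200 per group the estimates cluster far more tightly around the truth.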

6) Most Studies are Wrong or Incomplete

Individual scientific studies vary on a spectrum from garbage to highly rigorous. No individual study is definitive or is the final answer to any scientific question. Researchers have unconscious bias. Rigorous studies are extremely difficult to pull off and scientists almost always make compromises or trade-offs. Scientists also make mistakes or fail to consider subtle effects or sources of error.

Therefore, don’t base any conclusions on individual studies. They just don’t tell us that much.

Studies need to be independently replicated. That is the only way to really know if a phenomenon is real or just illusion or error.

When trying to answer a scientific question, look to systematic reviews or expert summaries that evaluate all of the evidence and come to an overall conclusion.

7) Consensus Matters

When a community of experts largely agrees that the scientific evidence leads us to a certain conclusion, take that conclusion very seriously. Scientific consensus is not infallible (and again, look to the confidence level or error bars) but it is the best we have.

If as a non-expert you do not agree with or understand the scientific consensus you need to understand that it is overwhelmingly likely that you are wrong, rather than the majority of experts who have dedicated their careers to studying a topic in depth are wrong. Seriously – scientists are generally pretty smart, thoughtful people; they spend their time thinking about their area of expertise, reading about it, discussing it with their colleagues, debating minor technical points, going to conferences, and doing real research.

The difference between a non-expert’s level of understanding and an expert’s level of understanding is vast. Chances are you don’t even know enough to have a real grasp of the vastness of your own ignorance. It is reasonable to assume they are probably right. If you have thought of a potential problem, then they have thought of that problem. Listen to what they have to say.

It is also important to understand that individual scientists can be wrong and can have quirky opinions and biases. Don’t trust the opinions of one scientist; they may be in the minority. Within a community of scientists, however, individual biases will tend to average out and reliability goes way up.

8) Understand Pseudoscience

There is no clear line of demarcation between science and pseudoscience, but it is important to recognize the features of pseudoscience and why they are pathological. Here are some quick red flags to help recognize pseudoscience:

– An individual scientist or group is an outlier of the scientific community. This is not a guarantee they are wrong, but there are hundreds if not thousands of cranks for every visionary, so if you’re playing the odds, be very skeptical of outliers.

– The scientist is hostile to criticism. Criticism is part of science; it is how the community hammers out what is likely to be true. Extreme hostility to the normal constructive criticism that is part of science is a huge red flag.

– Related to the above point, pseudoscientists tend not to be integrated into the relevant scientific community. They are working alone on the fringe, and not testing their ideas with those who are in the best position to judge them and give them critical feedback.

– Claims seem to be way out of proportion to the evidence. Proponents may claim a huge paradigm shift based on the most preliminary of evidence, reject competing theories without adequate justification, rely exclusively on their own research, and show no skepticism toward their own claims.

– It seems as if they are working backward from their desired conclusion, rather than following the evidence. Related to this, it may seem as if they are working under an obvious ideology, and looking for evidence to support that ideology.

9) Understand Denialism

There are various common strategies for denying well-established science. They include:

– Setting impossible standards for evidence, and then moving the goalpost when new evidence comes to light.

– Dismissing the consensus as a conspiracy, dismissing proponents of the real science as shills, and dismissing any evidence as flawed regardless of quality. Deniers will also engage in witch hunts in which the normal everyday connections that scientists and academics make as part of their job are used as evidence for a conflict of interest or a real conspiracy.

10) Understand Cognitive Biases and Flawed Thinking

This is admittedly a big category, and contains much of what we call skeptical philosophy. However, it is important to understand the basic fact that we cannot rely on casual observation or thinking because human brains are biased and flawed. We need rigorous logic and methods of observation to compensate for those flaws and biases, and that, essentially, is what science is.

Here is a quick list of the most important concepts:

– Confirmation bias. We tend to notice, remember, and accept bits of information that support our current beliefs or narrative, and ignore or reject any information that contradicts our beliefs. This can create the powerful illusion that the facts support our position.

– Memory is flawed and constructed. Every time we remember something we are reconstructing the memory, and adjusting it to fit our current beliefs. Our brains are more concerned with internal consistency than accuracy.

– Our perceptions are constructed fictions that evolved to be useful, but not necessarily accurate.

– Motivated reasoning. We are very good at inventing reasons to support our beliefs. This is why it is critical to challenge your own beliefs (not just seek to support them), to consider alternatives, and to seek out and be open to differing views.

– Placebo effects, superstitious thinking, and pattern recognition. These phenomena are related in that they all reflect a tendency to assume that two or more things that appear to be associated are causally related. If we take a treatment then feel better, we assume the treatment made us better. If we wear our favorite sweater and our sports team wins the playoff game, it was because we wore the sweater. If weird or coincidental things occur, there must have been an underlying reason.

– Logical fallacies. There are many formal and informal logical fallacies, too many to discuss here, but here is a good overview. Just be aware that there are valid and invalid ways of thinking and we need to be careful about our own arguments.
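The placebo-effect point above can be illustrated with a small simulation. It is a toy model with invented numbers (the symptom baseline, its variability, and the “bad day” threshold are all assumptions): symptoms fluctuate randomly, people take an inert remedy only on their worst days, and regression to the mean alone makes the remedy look effective:

```python
import random
import statistics

random.seed(1)

# Daily symptom severity fluctuates randomly around a personal baseline of 5.
def severity():
    return random.gauss(5, 2)

treated_before, treated_after = [], []
for _ in range(10_000):
    before = severity()
    if before > 8:                        # people try a remedy on their worst days
        treated_before.append(before)
        treated_after.append(severity())  # the remedy is inert: just another day

print(f"average severity when remedy was taken: {statistics.mean(treated_before):.1f}")
print(f"average severity the next day:          {statistics.mean(treated_after):.1f}")
```

The “treated” days improve dramatically even though the remedy does nothing, simply because unusually bad days are followed, on average, by more typical ones.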

Conclusion

Scientific literacy means not only having a working understanding of the big ideas of science, but also understanding critical thinking and how science works. This list of the basic components of the latter two is certainly incomplete, and I welcome feedback about what else should be included.

These ten categories of literacy regarding the scientific process are meant to be a quick overview of the basics, what I consider to be the minimum that anyone should know in order to be truly scientifically literate.

143 Responses to “Real Scientific Literacy, Part II”

The part I get hanged up on for item 5) analysing scientific studies is the statistics used to represent the significance of an effect. I’m not above taking a course or doing enough research to educate myself, but should this be part of real scientific literacy, or would it be sufficient to wait for systematic reviews?

I didn’t mention significance because it’s a complex topic. Most studies that are published, if they made any kind of statistical comparison, will be significant. For the lay person I don’t think this says much. Perhaps what is important to note is that statistically significant does not = real.
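To put a number on “statistically significant does not = real”: the sketch below simulates thousands of studies of an effect that does not exist at all. It uses a simplified known-variance z-test with made-up study sizes (a real analysis would use a t-test), yet roughly one study in twenty still comes out “significant” at p < 0.05:

```python
import math
import random

random.seed(2)

def null_study(n=30):
    """Two groups drawn from the SAME distribution: there is no real effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = math.sqrt(2 / n)            # standard error, known-variance z-test
    return abs(diff) > 1.96 * se     # "statistically significant" at p < 0.05

trials = 10_000
hits = sum(null_study() for _ in range(trials))
print(f"'significant' findings with zero real effect: {hits / trials:.1%}")
```

That 5% false-positive rate is by construction of the p < 0.05 threshold, which is one reason a single significant study, especially a small one, proves very little.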

That’s a funny grammatical error. I have to make a conscious effort to say “hanged”, rather than the more intuitive “hung”, when referring to the method of capital punishment; strange to see it in reverse.

An excellent question. What level of understanding do we need to separate thoughtful acceptance from simple reliance on authority? If it requires solid understanding of stats methods, then we would seem to be setting a bar that very few will clear.

Hmm, now that I think about it, “hanged” is actually more intuitive based on the usual grammatical rule for past participles, it’s just that “hung” is used in every other context and comes to mind more easily. Never mind.

It’s a big mistake to idolize scientists. They are human beings. Lots of non-scientists are smart and thoughtful. Some of the stupidest ideas I ever heard of were generated by famous scientists.

The more highly intelligent and respected a person is, the more likely they are to believe their own stupid ideas. And because they are respected and admired, hordes of people believe them, and then you get a scientific consensus that is bullet-proof.

EVERYONE gets stupid ideas, all the time. Not just me and the dopey uneducated public. All of you.

It is VERY IMPORTANT for skeptics to be skeptical of the scientific consensus.

Historians of science have noted that once a scientific consensus forms it never dies, no matter how stupid, until all the people who believe in it are dead.

Hardnose, you are exaggerating the effect of prominent scientists on consensus; their opinions are nearly negligible. Your conclusion about how scientific consensus is arrived upon is false. I’m not surprised you disagree with that one particular bullet point, because accepting it means that your completely bogus theories, which run contrary to real science, are in fact most likely bogus.

HN is the master of strawmen. He has perfected the art of taking one line out of context, and then unfairly assuming all sorts of nonsense.

In this case, nothing I wrote equates to “idolizing” scientists. I have also been entirely consistent in my years of writing to point out that flaws and biases are universal to all people, including scientists and skeptics. I often go out of my way to make that very point.

He also missed, in this very article:

“It is also important to understand that individual scientists can be wrong and can have quirky opinions and biases. Don’t trust the opinions of one scientist; they may be in the minority. Within a community of scientists, however, individual biases will tend to average out and reliability goes way up.”

Given that you also include critical thinking, logic and biases (among other things) in the concept of scientific literacy, it would seem that scientific literacy is almost synonymous with scientific skepticism. Would that be correct?

The consensus is crowdsourcing among experts. Evidence shows this is much more reliable than one opinion.
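The crowdsourcing point can be sketched with a toy model. It assumes each expert’s personal bias is independent (a shared, systematic bias would not average out), and all of the numbers are invented for illustration:

```python
import random
import statistics

random.seed(3)

TRUE_VALUE = 10.0

def expert_estimate():
    """One expert's estimate: the truth plus a personal bias plus everyday noise."""
    personal_bias = random.gauss(0, 1)
    noise = random.gauss(0, 1)
    return TRUE_VALUE + personal_bias + noise

trials = 2000
lone_error = statistics.mean(
    abs(expert_estimate() - TRUE_VALUE) for _ in range(trials)
)
crowd_error = statistics.mean(
    abs(statistics.mean(expert_estimate() for _ in range(50)) - TRUE_VALUE)
    for _ in range(trials)
)
print(f"typical error, one expert:        {lone_error:.2f}")
print(f"typical error, 50-expert average: {crowd_error:.2f}")
```

Averaging fifty independent estimates cuts the typical error by roughly a factor of seven in this sketch; the caveat is that the benefit depends entirely on the individual biases being independent.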

Consensus can become dogma. That depends a lot on the scientific culture (which science, what country, etc.) but even dogmas can be challenged by evidence in science.

And not every consensus is equal. Scientists often know what is accepted by tradition, and what has solid evidence.

What I wrote (if you read it again) is not to worship consensus as infallible, but to take it very seriously, and for the non-expert to assume what appears to be a solid consensus of scientific opinion is probably correct.

Point: “If as a non-expert you do not agree with or understand the scientific consensus you need to understand that it is overwhelmingly likely that you are wrong, rather than the majority of experts who have dedicated their careers to studying a topic in depth are wrong.”

As a lay person, I’m totally copacetic with deferring to consensus.

However, what do you do when your opponent then accuses you of the Appeal to Authority Fallacy?

Seriously, they don’t understand the fallacy then. If it is a true consensus of scientific opinion based upon a large body of research that has been vigorously reviewed and debated, then that is not an appeal to authority. It is a recognition of the evidence and the validity of the process.

You can still challenge the consensus, but then you better have damn good evidence and know (in great technical detail) what you are talking about. Armchair skepticism about rigorous scientific consensus is not a valid position.

Appeal to authority is fine as long as the authority is legit. Refer back to the quotes above for the reasons appealing to the authority of expert consensus is different than appealing to the authority of a lone individual.

Nice line: “If it is a true consensus of scientific opinion based upon a large body of research that has been vigorously reviewed and debated, then that is not an appeal to authority.”

But it really comes down to what one means by “authority.”

One obvious deployment of the fallacy is when someone argues something like: “Well, I think peak oil catastrophe is imminent, because a number of geologists, bankers, and cultural critics think so.” It makes the authority appeal similar to the Bandwagon fallacy.

So, I guess I should learn to say, “Wait: ‘consensus’ is not just ‘authority.’ Consensus concerns the experts, not just ‘authorities.'”

I find the trickiest aspect of understanding science, as a lay person who doesn’t get any of their info from the primary literature, meta-analyses or systematic reviews, is understanding just what the consensus actually is, and just how robust the science is behind what I read. E.g. I’ve read a fair few pop-neuroscience books by people like David Eagleman and VS Ramachandran, and the only glimpse I get of how robust their conclusions are, sometimes, is from reading reviews, making a judgement on how well informed the reviewer seems, and taking on board some of their criticism (if there is any) to try and temper my confidence in what I’ve read.

One example of where I’ve gotten hopelessly lost is Gary Taubes’s The Diet Delusion, which is steeped in apparent skepticism of the current consensus. I walked away from that just having to admit that I can be convinced by scientific-sounding arguments without actually understanding the science, and that I just have to say “I don’t know” when it comes to the relationship between cholesterol and heart disease.

The take-home lesson for me has been to stay humble about what you think you “know”. I find blogs like this a massive help as well, explaining science in proper context with a lot of emphasis on the conceptual stuff that makes science the best epistemological system we’ve come up with.

“for the non-expert to assume what appears to be a solid consensus of scientific opinion is probably correct.”

That is what I most strongly disagree with about this blog. I have observed the power of personality and of mobs in my own field, as well as in other fields. Scientific consensus on controversial subjects cannot be trusted. It should be considered, but non-experts should feel free to question it.

Non-experts deserve more respect than what you give them.

Yes it is hard for non-experts to understand the terminology, but it is possible to get a pretty good understanding of a research area if you work at it. No you don’t need a PhD and 20 years of experience before you can form an opinion.

Most people don’t care and don’t want to know, but if you are a non-expert who cares about a particular question, you can educate yourself to the point where you understand the controversy.

I wonder where HN gets all that straw to make all those “men”. Must have invented an enormously efficient straw-counterfeiting machine. Or does he simply have supernatural digestive enzymes capable of creating, then regurgitating, straw man fallacies and other nonsense from the reason, logic and science Steve and others feed him on this blog?

If it’s controversial, there isn’t a strong consensus. What hn really means is that he is able to determine when fringe science is correct and the consensus wrong, with no expertise in the relevant field (e.g., “materialism” and natural selection).

“Is the study an observation or an experiment? Are they just looking for some correlation (in which it is difficult to make statements about cause and effect), or are they controlling for variables and isolating the one factor of interest?”

The juxtaposition of those two sentences suggests that if a study is observational, then it is “just looking for some correlation” and is not controlled. But that is about as far from the truth as you can get. There are certainly some observational studies that fit that description, but on the other hand, there are many high-quality observational studies that test specific, well-defined hypotheses and are statistically controlled for confounding variables. I wish public skeptics would stop glossing over the difference.

Experts and non-experts get the respect they earn and deserve. And your inflation of your own qualifications, your intentional disruption, your ceaseless strawman portrayals of other people’s beliefs, and your consistently ironic use of an Internet blog to disparage science lead to your receiving the level of respect you’ve earned and deserve.

The teaching of Skepticism / Scientific Literacy is a topic I’ve thought about a lot recently after getting repeatedly frustrated by homeopathy in the UK. I believe that even most science undergraduates don’t really understand what science is (I certainly didn’t when I graduated). My view is that you can fix a lot of the wrongs in society not by attacking them directly, but by inoculating people against them before they get a hold. It’s a fairly long game, but I think it’s the only one that works. Don’t waste your time trying to sink unsinkable rubber duckies, because it just doesn’t work.

I think that the way to approach teaching science may be to explain the many ways that human beings can be wrong, and then to explain how you would best mitigate those. Then Science more-or-less sells itself. The important distinction is between the *process* of science and the discoveries that have been made using it. I think that distinction needs to be emphasised because otherwise people get stuck on the science lessons they hated at school and the maths that they didn’t understand, and they miss the bigger picture.

I did a series of lectures on this. They’re a bit amateurish but they cover the topics that need covering:
Proof, Bayes’ Theorem, Scientific Method, Pseudoscience, Burden of Proof, Conspiracy Theories, Intuition, Logical Fallacies, Science & Scientists, The Supernatural, Special Pleading, The Galileo Gambit, Significance, Scientific Skepticism. I uploaded them to my channel if you want to take a look. https://www.youtube.com/user/ColinFrayn/videos
I also wrote (and am giving away) a book on the topic : http://www.frayn.net/books/newton/index.html
Please get in touch if you have any suggestions for improving either of the above.

“I find the trickiest aspect of understanding science, as a lay person who doesn’t get any of their info from the primary literature, meta-analyses or systematic reviews, is understanding just what the consensus actually is, and just how robust the science is behind what I read.”

mumadadd – This is an important and underappreciated issue for the person actually trying to learn about the world (not even counting those running on motivated reasoning alone). For a layperson consuming information from the most common sources, it can be difficult for some topics to distinguish robust consensus from weaker prevailing views or common speculation. This is especially true when there is an apparent controversy outside of the field (e.g., a public one). By layperson I mean everyone, because we are all laypeople about most topics, even if we are expert in a few.

One way to think about this issue is that taking just a few steps can mitigate the problem a great deal: not to the point that it disappears, but to the point that it is no longer detrimental. Once you have a good scientific foundation and literacy, and have developed some skeptical skills to help evaluate arguments and identify red flags, you are well on your way to having a good perspective and should be able to identify the consensus on most topics. For the areas that are really tough to evaluate, you then move on to experts in the field who engage the public. At least we live in a time in which access to this type of information is as easy as it has ever been. This blog is a good example, but there are others for many other topics.

People who have very negative views about the deference to consensus of experts tend to view the dichotomy between layperson and expert in an elitist fashion. It is not a matter of handing “those” experts “the keys” to our lives. Experts are not “them” who be more smarter than “us.”

Experts are those of us who have spent a significant portion of our lives in a given field and know it well. Like I mentioned earlier, we are all laypeople in most topics, and many of us are expert in a few. To view the relationship between experts and laypeople in an antagonistic fashion misses this point entirely.

jt512, in the context of engineering I might agree with you; however, in the context of clinical studies of medical treatments, high-quality observational studies are nowhere near the gold standard: the systematic review of multiple replicated peer-reviewed double-blind randomized placebo-controlled trials.

So let’s just turn over our lives to the experts, since we aren’t qualified to have opinions on scientific subjects.

Okay, you said you were a software developer. So when it is time to evaluate your next Linux Distro, or hot new programming language, or better yet, say you need the best database engine to handle several thousand transactions per second, where do you go to get the most reliable opinions? Do you ask your doctor who loves computers and reads everything he can in his spare time, or perhaps your lawyer friend who “manages” the 7-person “system” in his small law office? Or do you maybe give just a little more weight to the opinions of your peers and other experts who likely have orders of magnitude more useful and appropriate experience?

Conversely, if you are having severe chest pains, would you rather rely on the consensus opinion of board certified cardiologists or the opinion of your office mate who tells you to just take some antacids for the indigestion because that worked for him?

The thing is, when the people that know the most about a subject are in general agreement, most of the time (not always) they are right. In any event, it is generally the safest bet. I’m positive that you know this, at least intellectually, whether or not you admit it to yourself.

Your problem is that when you don’t like the consensus opinion on a given topic, then you automatically declare a “controversy” and therefore try to promote your (usually ideological) opinion into a position of equal merit. That is not the way the real world works.

So, in answer to your comment, except for our own areas of expertise, very, very few of us are qualified to have an opinion about complex scientific topics. Unless we do exactly as Dr. Novella recommends — do our best to find out what the actual scientific consensus is telling us, and accept that as an actual, *informed* opinion which is very probably closer to the truth than a layman’s guess.

“jt512, In the context of engineering I might agree with you, however, in the context of clinical studies of medical treatments, high-quality observational studies are nowhere near the gold standard: the systematic review of multiple replicated peer-reviewed double-blind randomized placebo-controlled trials.”

Pete, you might actually agree with me if you consider what I actually wrote, rather than some claim that I never made.

BTW, how many “systematic review[s] of multiple replicated peer-reviewed double-blind randomized placebo-controlled trials” can you name? I think you have a rather idealized view of biomedical evidence.

“if you are having severe chest pains, would you rather rely on the consensus opinion of board certified cardiologists or the opinion of your office mate who tells you to just take some antacids for the indigestion because that worked for him?”

If the antacids didn’t work that would rule out indigestion, and medical tests would be needed. In a real emergency you should go straight to the emergency room, of course. But aside from that, we can accomplish a certain amount of the diagnosis ourselves.

I agree that medical technology, surgery and prescriptions are sometimes needed. But expecting medical doctors to solve all our health problems with their expertise and reasoning powers is not realistic.

Medical doctors have limited time to spend on each patient. And they tend to not be interested unless it’s a disease they have experience with and know how to treat.

On the other hand, patients are extremely interested in finding a solution and are much more willing to spend time thinking about it and trying things.

So yes, I might listen to my office mate, especially if he had a similar problem, in addition to the official medical information.

One thing I find especially useful when searching for medical answers is internet medical forums, where patients describe their experiences. What they report does not always agree with the official medical information, which tends to be repeated mindlessly whether it is evidence-based or not.

I am only saying what I sincerely believe and have learned from experience. There is information in medical forums that shows what people are really experiencing, and it doesn’t always coincide with the official medical information.

Many things are not well understood, many are not understood at all. No amount of medical knowledge and experience can make someone understand what is not known. Medical professionals tend to dismiss diseases that have not been officially recognized.

For example, if you want to know typical side effects of a drug you can look at WebMD or ask your doctor. Then look at the patient forums and you might get very different stories. If you read lots of them, you may find useful information you could not get otherwise.

“If you read lots of them, you may find useful information you could not get otherwise.”

At best, that information can be used to formulate a hypothesis to test. Even that is assuming a lot about the particular forum. Any more specific extrapolation is likely to be misleading and foolish to act upon, for the most part. Also, some of the worst advice is given in such forums, because the people who gravitate to such places are looking for answers and many *think* they have answers for others.

“I am only saying what I sincerely believe and have learned from experience.”

The problem is, throughout history and still continuing to the present day, much of what people “sincerely believe and have learned from experience” is WRONG. Individually, all of us are pretty bad at separating fact from fiction.

The ONLY method with a good track record of doing exactly that is the scientific consensus. As you have pointed out yourself, individuals are often wrong. And that includes YOU (and me, although I readily admit it).

You are literally risking your life by trusting patient forums to get any kind of medical information. People are terrible at differentiating between correlation and causation, and they frequently confuse coincidence with evidence.

As a fairly benign example, look at how many people still believe that vitamin C will cure or protect against the common cold. All because of one supposed expert (Linus Pauling) who went against the consensus without any actual evidence.

Obviously, the medical community is far from perfect, but many, many more people have died or been injured by miracle cures and alternative treatments than mistakes made by mainstream medicine. BECAUSE, real medicine actually tests to see what works and what doesn’t.

To steal from Winston Churchill, the Scientific Consensus is the worst way we have to understand the world — except for all the others.

We are genetically programmed to believe anything we are told, as long as it doesn’t contradict what we already know. We are especially likely to believe things we are told by authorities and experts.

This genetic programming was needed for group solidarity when we lived in small tribes. All children were programmed into the tribe’s mythology by the elders. Everyone was on the same wavelength, and that allowed them to live together harmoniously and cooperate, while competing with rival tribes.

Now of course we have a completely different world, but we have the same old DNA.

So being a skeptic is very hard work. The scientific consensus is not a refuge for skeptics, and it can often be a trap.

“Appealing to”, or arguing from anything other than “facts and logic” is a fallacy. It is a fallacy no matter who does it.

If you don’t understand a field (that is, the background facts and logic on which the field is based), then you have no business having an opinion on anything in that field. If you are a skeptic you have to default to “I don’t know”, because without knowing the “facts and logic” of a field, you don’t know and can’t know enough to evaluate anything in that field.

You can’t use social knowledge (knowing who are “experts” and who to trust and how much) as a substitute.

In order to evaluate any claims, you need to know the facts and logic those claims are based on. If those claims are not based on facts and logic, there is no evidence that they are correct. Opinions based on things other than facts and logic are faith based assertions and have negligible evidentiary value.

On Hardnose’s earlier point, and some of the back and forth (which I have not read all of), number 7 in the original post does concern me a bit.

I think about early Chomskian linguistics, which was basically an impenetrable cult of personality dressed in a scientific gown. Skinnerian behaviorism was the same way. These were not isolated individuals, but entire international research programs with established consensus churning out PhDs. People overestimate the power of groups to groom out stupid ideas, and underestimate sociological factors in the establishment of scientific consensus.

A healthy mature consensus can handle criticism from intelligent laypeople. It should be “Here is why we believe what we do, and feel free to ask questions at any point.” It may take a book to do it, but it should be possible. Obviously there are highly technical fields where not everyone will be able to follow the math. But beyond such quantitative stumbling blocks, knowledge is democratic.

The entire field has been wrong for 100 years, that the foundation was wrong has been brought up multiple times, now criticisms that the field is wrong can’t get published because “they are not new”.

The fundamental premise of IQ, that there is something like ‘g’ that “intelligence” (which remains undefined” depends on, is wrong, and that ‘g’ can be measured by doing a linear regression analysis is also wrong.

“I think about early Chomskian linguistics, which was basically an impenetrable cult of personality dressed in a scientific gown. Skinnerian behaviorism was the same way. These were not isolated individuals, but entire international resource programs with established consensus churning out PhDs. People overestimate the power of groups to groom out stupid ideas, and underestimate sociological factors in the establishment of scientific consensus.”

I am very familiar with that and you are on hundred percent correct. Chomsky is an excellent example. He had a couple of good ideas (well actually, he objected to some obviously stupid behaviorist ideas) and he was promoted to infallible source of wisdom on all aspects of linguistics and psychology. Even politics, for some of his fanatical followers.

“In order to evaluate any claims, you need to know the facts and logic those claims are based on.”

Yes, but you don’t necessarily have to know all of the field’s technical terminology. Experts often use terminology to intentionally confuse and awe non-experts. Wow that is complicated stuff! He must be right if he’s smart enough to know what all that gibberish means.

It is often possible to explain things in plain English, or at least to minimize the use of field-specific jargon.

“As a fairly benign example, look at how many people still believe that vitamin C will cure or protect against the common cold.”

Actually there is starting to be good evidence that vitamin C does have some good effects on the immune system.

“To steal from Winston Churchill, the Scientific Consensus is the worst way we have to understand the world — except for all the others.”

Ok, I more or less agree with that. Scientific consensus should not be ignored. On the other hand, it can be a big mistake to put too much faith in the scientific consensus.

“but many, many more people have died or been injured by miracle cures and alternative treatments than mistakes made by mainstream medicine”

I’m not sure that’s accurate; mainstream medical mistakes are a big killer. But I do acknowledge that most of the so-called miracle cures are probably scams.
# hardnose on 15 Jan 2016 at 10:47 am

My general opinion about skepticism is:

We are genetically programmed to believe anything we are told, as long as it doesn’t contradict what we already know. We are especially likely to believe things we are told by authorities and experts.

This genetic programming was needed for group solidarity when we lived in small tribes. All children were programmed into the tribe’s mythology by the elders. Everyone was on the same wavelength, and that allowed them to live together harmoniously and cooperate, while competing with rival tribes.

Now, of course, we have a completely different world, but we have the same old DNA.

So being a skeptic is very hard work. The scientific consensus is not a refuge for skeptics, and it can often be a trap.
# daedalus2u on 15 Jan 2016 at 10:51 am

“Appealing to”, or arguing from anything other than “facts and logic” is a fallacy. It is a fallacy no matter who does it.

If you don’t understand a field (that is, understand the background facts and logic on which the field is based), then you have no business having an opinion on anything in that field. If you are a skeptic you have to default to “I don’t know”, because without knowing the “facts and logic” of a field, you don’t know and can’t know enough to evaluate anything in that field.

You can’t use social knowledge (knowing who are “experts” and who to trust and how much) as a substitute.

In order to evaluate any claims, you need to know the facts and logic those claims are based on. If those claims are not based on facts and logic, there is no evidence that they are correct. Opinions based on things other than facts and logic are faith-based assertions and have negligible evidentiary value.
# edamame on 15 Jan 2016 at 11:00 am

On Hardnose’s earlier point, and some of the back and forth (which I have not read all of), number 7 in the original post does concern me a bit.

I think about early Chomskian linguistics, which was basically an impenetrable cult of personality dressed in a scientific gown. Skinnerian behaviorism was the same way. These were not isolated individuals, but entire international resource programs with established consensus churning out PhDs. People overestimate the power of groups to groom out stupid ideas, and underestimate sociological factors in the establishment of scientific consensus.

A healthy mature consensus can handle criticism from intelligent laypeople. It should be “Here is why we believe what we do, and feel free to ask questions at any point.” It may take a book to do it, but it should be possible. Obviously there are highly technical fields where not everyone will be able to follow the math. But beyond such quantitative stumbling blocks, knowledge is democratic.
# daedalus2u on 15 Jan 2016 at 11:28 am

A good example of the fallacy of “experts” and “peer review” is in the field of IQ measurement.

The entire field has been wrong for 100 years; the fact that its foundation is wrong has been brought up multiple times; and now criticisms that the field is wrong can’t get published because “they are not new”.

The fundamental premise of IQ, that there is something like ‘g’ on which “intelligence” (which remains undefined) depends, is wrong; the claim that ‘g’ can be measured by doing a linear regression analysis is also wrong.
# hardnose on 15 Jan 2016 at 11:42 am

“I think about early Chomskian linguistics, which was basically an impenetrable cult of personality dressed in a scientific gown. Skinnerian behaviorism was the same way. These were not isolated individuals, but entire international resource programs with established consensus churning out PhDs. People overestimate the power of groups to groom out stupid ideas, and underestimate sociological factors in the establishment of scientific consensus.”

I am very familiar with that and you are one hundred percent correct. Chomsky is an excellent example. He had a couple of good ideas (well actually, he objected to some obviously stupid behaviorist ideas) and he was promoted to an infallible source of wisdom on all aspects of linguistics and psychology. Even politics, for some of his fanatical followers.
# hardnose on 15 Jan 2016 at 11:43 am

Freud is another obvious example, but there are countless other examples that are not quite as obvious.

Holy crap … six or eight comments in a row that were not only interesting but contained no outright bullshit. Of course, there is a pretty large straw man.

No one here has ever said that we should have blind faith in the “experts”. The essence of skepticism is to question everything, all the time.

And while I somewhat agree with daedalus2u that “If you don’t understand a field; that is understand the background facts and logic on which the field is based, then you have no business having an opinion on anything in that field”, we still have to make decisions in life. Choosing to do what the acknowledged experts recommend is generally the safest course of action in any situation.

“I think about early Chomskian linguistics, which was basically an impenetrable cult of personality dressed in a scientific gown. Skinnerian behaviorism was the same way. These were not isolated individuals, but entire international resource programs with established consensus churning out PhDs. People overestimate the power of groups to groom out stupid ideas, and underestimate sociological factors in the establishment of scientific consensus.”

I think you’re only partially right here.

Chomsky was wrong about many things, but the notion that qualities of human language like syntax separated our communication from even complicated animal communication is an important idea. Back then, whether animals could learn language was an open question. We now know that language is uniquely human.

Same for Skinner. He oversold behaviorism, for sure. And the “cult” in psychology that grew up around his ideas and dominated psychology did slow the field – there’s no question. But the work on schedules of reinforcement, for example, was solid science: it showed us a lot about how our reward systems work, and it explains much of the behavior of animals, including ourselves. If you look at the state of the field back then, Skinner was a breath of fresh air as far as rigor went (compared to Freud, introspection, etc.).

So I think looking back on these guys with today’s knowledge and pointing only at what they got wrong (which was a lot!) misses their pretty huge contributions.

That said, you’re quite right that they are good examples of a field becoming insular relative to the rest of the scientific community, so I do agree there.

Skinner took some common sense ideas from behaviorism and blew them up into a philosophy that tried to explain everything. Of course there is value in behaviorism, but Skinner took it way over the edge.

Then Chomsky came along and had the nerve to disagree with Skinner about language. Chomsky was obviously right, and common sense alone could have told everyone that. But it required a big debate, which of course Chomsky won.

Chomsky was then set for life, nothing he could ever say could be wrong. American linguistics lost many valuable insights from the behaviorists, and from European linguistics. It became sadly impoverished.

Chomsky had a few more good ideas, but not much, and then he got sidetracked onto far left politics.

Now you see people quoting Chomsky’s political ideas and claiming they must be true because Chomsky is a famous linguist.

Edamame, there are two types of peer review:
1. peers within the same field of endeavour;
2. peers in different fields of scientific endeavour who have expertise that is essential for the proper processes of science and peer review, and who have no vested interests in the above field of endeavour.

Type 1 peers will, one presumes, be somewhat biased towards supporting their particular field of expertise and biased against finding its flaws, which is just part of human nature. Type 2 peers will tend to be biased towards supporting the methods of science, and biased against fields of endeavour outside of their own. In other words, type 2 peers are very likely to be the best scientific skeptics for providing useful and honest feedback to the type 1 peers.

Of course, this is why the authors of papers are required to state their conflicts of interest; and the process of systematic review includes validating those declarations.

You wrote: “People overestimate the power of groups to groom out stupid ideas, and underestimate sociological factors in the establishment of scientific consensus.” I certainly agree with the first part. However, I’m inclined to disagree with the second part due to the reasons I’ve stated above.

To me, the term “scientific consensus” means a consensus reached by the whole of the scientific community, not the consensus reached by one or a few branches of the scientific community that are likely to have vested interests in their particular fields of endeavour. E.g. one does not have to be an expert in evolutionary biology to know that evolution by natural selection is by far the best explanation of life that we currently have: this is an overall scientific consensus; it is not simply the consensus from one or a few branches of the huge tree of science. This is why religious fundamentalists have to attack the whole of ‘materialistic science’ rather than try to gnaw away at one branch.

Yes, with insufficient facts and insufficient time to work out how those facts are logically connected, we still have to make decisions.

Decisions made in the absence of sufficient facts and the absence of sufficient logic are not “scientific decisions.” Pretending that they are makes it more difficult to abandon them when more facts and better logic come along.

It is not the case that “The essence of skepticism is to question everything, all the time.”

It is the essence of skepticism to use the most reliable methods available at the time to figure stuff out. Whenever more reliable methods or information comes along, the essence of skepticism is to use those more reliable methods and data.

Conclusions based on more relevant data are always more reliable than conclusions based on sparse data, so long as that data is evaluated fairly.

“Yes, but you don’t necessarily have to know all of the field’s technical terminology. Experts often use terminology to intentionally confuse and awe non-experts. Wow that is complicated stuff! He must be right if he’s smart enough to know what all that gibberish means. It is often possible to explain things in plain English, or at least to minimize the use of field-specific jargon.”

I think it is easily arguable that some people do use jargon to confuse and awe non-experts, but cranks and pseudoscientists are, by far, the most guilty of this.

Genuine scientists or experts of any kind use specific terminology primarily because it is precise and concise i.e. to save time and avoid confusion.

Obviously, with enough adjectives or lengthy explanation, any concept could be presented without specific jargon, but usually that is a pain in the ass. All languages have expanded dramatically over the years to allow people to accurately and succinctly describe new concepts or situations. Even then, many, many words have multiple (sometimes even contradictory) definitions. Even a common word like “theory” means something entirely different to a scientist than it does to a layman.

Like it or not, learning the correct terminology is important. One specific word is often preferable to a whole string of words which are more likely to be misinterpreted.

For a wonderful example, check out “Thing Explainer” by Randall Munroe — complicated things explained using ONLY the 1000 most common words in the English language. It is a fascinating and often hilarious book, but it nicely illustrates the difficulty in getting complex concepts across using only simple words.

“It is not the case that ‘The essence of skepticism is to question everything, all the time.’ It is the essence of skepticism to use the most reliable methods available at the time to figure stuff out. Whenever more reliable methods or information comes along, the essence of skepticism is to use those more reliable methods and data.”

A distinction without a difference. “Questioning everything” clearly encompasses the concept of always searching for more information and better ways of acquiring it.

“I am only saying what I sincerely believe and have learned from experience. There is information in medical forums that shows what people are really experiencing, and it doesn’t always coincide with the official medical information.”

Most often, it doesn’t coincide with the official medical information because the person has self-diagnosed incorrectly, often in direct conflict with a correct diagnosis from a doctor.

I have an associate who has been tested and diagnosed with celiac disease. He doesn’t believe he has celiac disease, but instead chronic fatigue, because he’s always tired, has trouble thinking, doesn’t sleep well, etc.

He goes to these sorts of forums and tells people that he has ‘atypical’ chronic fatigue, and that his digestive problems are a symptom of it. He tells people that the doctors can’t figure it out, etc. These are the sort of people on the forums you’re talking about – they’re agenda driven.

I was NOT saying we should believe every stupid thing posted on medical forums! You have to read a large number of posts on the subject and look for patterns. It is often possible to tell if the person who posted is nuts.

And of course you have to double and triple check everything.

I read the mainstream medical advice sites and search PubMed, in addition to reading patient forums. I think the patient forums can be extremely helpful.

Steve Cross wrote: “Like it or not, learning the correct terminology is important.” This is not simply important: it is one of the most fundamental aspects of science to understand properly. Without this essential understanding, the inevitable result is endless streams of heated arguments that all boil down to category errors, aka category mistakes. However, these mistakes are far worse than just category mistakes as defined by epistemology; in many branches of science, they are fundamental domain errors that can, and often do, result in cataclysmic-scale errors!

Rather than providing gruesome examples of such mistakes, I shall provide simple examples that everyone will be able to understand. What does the scientific term “volume” mean? In physics, it usually refers to the space occupied by an object, e.g., the volume of a sphere equals 4/3 x pi x the cube of its radius. In audio engineering, the term “volume” is quantified in various internationally standardized ways of measuring and expressing the sound pressure levels of a plethora of audible things. E.g., if you were making a complaint about the noise of wind turbines, would you use a meter that captures peak sound pressure level (SPL), a meter that reads A-weighted average SPL, B-weighted average SPL, C-weighted average SPL, or something else?
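To make the point concrete, here is a small Python sketch of two quantities that share the everyday word “volume” but are computed in entirely unrelated ways (the input values are invented for illustration):

```python
from math import pi, log10

def sphere_volume(radius):
    """'Volume' in geometry: space occupied, in cubic units."""
    return 4 / 3 * pi * radius ** 3

def sound_pressure_level(pressure_pa, reference_pa=20e-6):
    """'Volume' in audio: sound pressure level in dB re 20 micropascals."""
    return 20 * log10(pressure_pa / reference_pa)

print(sphere_volume(1.0))         # ≈ 4.189 cubic units
print(sound_pressure_level(1.0))  # ≈ 94 dB SPL (a loud sound)
```

The two functions share nothing but the word; using the wrong notion of “volume” in a noise complaint would be exactly the kind of domain error described above.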

There are epistemologically robust reasons why each branch of science and mathematics has to define its branch-specific terminology with a sufficient level of pedantry to make it as unambiguous as humanly possible. This level of precision and openness is the exact opposite of wilful obscurantism for the purposes of elitism. Science-y-sounding elitism, by contrast, belongs to the highly obfuscated domain of quacks, charlatans, religionists, and fraudsters [apologies for the repetition]. The latter domain makes itself blindingly obvious not only by refusing to define its words and terminology, but also by incessantly redefining both dictionary words and scientific terms for the sole purpose of promoting its anti-science (usually lucrative) ideology.

I agree hardnose, and like you, I try my best to understand the big picture on as many different things as possible. That doesn’t mean that I think I’m qualified to second-guess the experts’ conclusions, especially when the consensus seems pretty solid.

On the other hand, the reason I do try to get the big picture is to determine whether the consensus actually is solid — while trying very, very hard to keep my preconceived notions from interfering with my assessment.

A skeptic needs an open mind, but not so open that their brain falls out.

One can be a pseudo-skeptic, or a hyper-skeptic, using demands for proof that exceed all rational requirements. There’s not much need to be “skeptical” of assertions that homeopathy is valid.

AGW denialism as a whole is based on “just asking questions”. It is about generating doubt when there is more than enough reliable evidence to act on. The same was true for the questions “does tobacco cause cancer”, “do vaccines cause autism”, “does homeopathy work”, and many others.

One can always ask for more data. Declaring that there isn’t enough data in a field you are not expert in is an appeal to your own ignorance. If you are not an expert in a field, you can’t know if there is enough evidence to justify the conclusions of that field.

There is no substitute for becoming an expert in a field if you want to understand it and have valid opinions in that field.

There are some very good books by science writers who have explained things clearly for the non-expert. No, you can’t just read these books and declare yourself an authority. But you can get a pretty good education by reading them.

I have a lot of formal education but I learned as much or more about other subjects by reading. Formal education doesn’t shove knowledge and understanding into your brain. Most of what you do while getting a PhD is reading and thinking on your own.

And sometimes knowing about multiple subject areas is an advantage. I know you will all disagree, but I personally think computer science is a meta-science. I think I have a perspective on biology, for example, that you don’t get if all you ever study is biology.

Geez Hardnose … be careful, you are getting dangerously close to Marcus Morgan territory — if not outright bat shit crazy, then at least seriously delusional.

You really need to read up on the Dunning-Kruger effect — in essence, the less someone knows about any particular subject, the more likely they are to over-estimate their own level of expertise. In other words, true experts realize that no matter how much they know, there is still a lot more to learn, while amateurs literally have no idea what they don’t yet know.

And take it to heart. If I’m wrong, and you really are as smart as you think you are, then please do the world a favor. Quit wasting your time here; go out and win all of the various Nobel Prizes and solve a bunch of the world’s problems for us. You’ll be a hero, and I guarantee that all of the naysayers here will be forced to shut up.

Actually Steve Cross, because you don’t know what HN is talking about, it is you who is exhibiting the Dunning-Kruger effect.

Biology as a field has a lot of problems. Looking at systems in biology from a different perspective can provide a lot of insight into those problems that biologists are unable to perceive.

I am a chemical engineer. I look at biological systems as a chemical engineer would: as if they were a chemical plant. The most important system of a chemical plant is the control system. If the control system isn’t working right, nothing in the chemical plant can work right.

If the control system of physiology isn’t working right, what symptoms would be expected? What type of control faults would be expected? Good control around a bad setpoint? Or bad control around a good setpoint? Which do we see in dysfunctional physiological states? What interventions would one do to fix either of these types of control faults?
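As a toy illustration of those two fault types, here is a sketch of a made-up proportional controller (this is not a model of any real physiology; the gains, setpoints, and noise level are all invented):

```python
import random

def simulate(setpoint, gain, noise, steps=2000, seed=0):
    """Toy proportional controller: returns the mean and spread of the
    controlled variable as it is nudged back toward its setpoint."""
    rng = random.Random(seed)
    x = setpoint
    values = []
    for _ in range(steps):
        error = setpoint - x
        x += gain * error + rng.gauss(0, noise)  # correction plus disturbance
        values.append(x)
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var ** 0.5

# "Good control around a bad setpoint": tight regulation, wrong target
# (suppose the healthy target is 100 but the system regulates to 80).
print(simulate(setpoint=80, gain=0.9, noise=0.5))

# "Bad control around a good setpoint": right target, weak regulation.
print(simulate(setpoint=100, gain=0.05, noise=0.5))
```

In the first run the variable sits tightly around the wrong value; in the second it wanders widely around the right one. Those are two very different signatures, and they would call for different interventions.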

I’m 100% in favor of a broad perspective and I’m fully aware of the problems of tunnel vision. Further, I know that many scientific puzzles have been solved by applying a perspective gained from a seemingly unrelated field.

But, if you’ll recall, HN has been trying to make the case that he is qualified to disagree with the consensus opinions of experts simply because he considers himself well read. Certainly it is “possible” that an outsider could have a valuable insight, but with today’s ever more specialized and esoteric fields, it is unlikely as hell that it would happen very often. And that is the point that most of us have been trying to make. Most of the time, your best bet is to follow the advice of the acknowledged experts, unless you truly do understand the field and also have some damn good evidence to back you up.

Also, I think it is disingenuous to downplay the success rate of scientific consensus by giving only examples from the “softer” sciences, which must, of necessity, be interpreted much more subjectively. All branches of science, but especially those with clearly definable and testable conditions, have made enormous progress in the last few centuries.

I also think it is easy to be misled by seeing analogies between different fields that don’t really exist, or, if they do, may be far more complex than you expected.

In the example you gave, my limited knowledge of biology (and especially the brain) leads me to suspect that biologists are dealing with feedback and hysteresis effects that could be orders of magnitude more difficult to understand than anything you have had to deal with in the macroscopic world.

And that goes double for HN since, for all practical purposes, the computer world is entirely digital.

It’s good to be widely read so as to get the “big picture”, and you can get this from a combination of books written by experts for the layperson and following and participating in discussions on Internet forums… but it is a big mistake to think you can disagree with the consensus of experts when you don’t even understand their terminology! If you don’t understand their terminology, you haven’t got anywhere near the depth of knowledge on a subject to justify criticising the consensus.

“falsifiability versus proving a negative and the null hypothesis, in favor of responding to hardcore’s trolling”

I feel for you.
I was going to reply when I saw your questions, even though I don’t think I’m the best person here to be doing so, but then I got lost in the morass being created here.
I will give it a stab in the hope others might contribute:

Firstly “proving a negative”:
Proof is the province of mathematics. Science concerns itself only with getting us closer and closer to the truth, probably without ever getting there. Even the “facts” of science are not “proven facts”, because any new evidence can change what are regarded as “facts” in science. Everything in science is tentative. Everything in science has a probability of being true, and when that probability approaches 1, it becomes a “fact”, subject to correction – up or down – as more evidence becomes available.

Secondly “falsifiability”:
Your hypothesis must be falsifiable, otherwise you are not doing science. If your hypothesis is not falsifiable, then it is an unnecessary add-on to already existing knowledge: Ockham’s razor. Sometimes a single piece of evidence can falsify a hypothesis: “all swans are black” is falsified by the existence of a single white swan. More commonly, evidence simply makes your hypothesis less likely or more likely and, in either case, our knowledge is incrementally increased.

Thirdly “null hypothesis”:
If your hypothesis is that “acupuncture helps prevent migraines”, then the result of your clinical trial will need to have the effect of reducing the probability that “acupuncture does NOT prevent migraines” (the null hypothesis).
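To make the null-hypothesis idea concrete, here is a minimal simulation sketch in pure Python (the trial numbers are invented for illustration, and this is a generic permutation test, not any particular published analysis):

```python
import random

def permutation_p_value(treated, control, trials=10000, seed=1):
    """One-sided permutation test: how often does randomly relabelling
    subjects produce a control-minus-treated difference in event rate
    at least as large as the one observed?"""
    rng = random.Random(seed)
    observed = sum(control) / len(control) - sum(treated) / len(treated)
    pooled = treated + control
    n = len(treated)
    count = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = sum(pooled[n:]) / len(control) - sum(pooled[:n]) / n
        if diff >= observed:
            count += 1
    return count / trials

# Hypothetical trial: 1 = got migraines. 40/100 treated vs 50/100 control.
treated = [1] * 40 + [0] * 60
control = [1] * 50 + [0] * 50
p = permutation_p_value(treated, control)
print(p)  # small p => the data would be surprising if the null were true
```

A p-value at or below the conventional 0.05 is taken as grounds to reject the null; a larger one means the trial has not moved us off the default position that the treatment does nothing.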

When I read your post I was not quite sure what you were after, so I hope some of the above is relevant to your questions.

The whole “science can’t prove a negative” thing is more a philosophical statement about the nature of knowledge than a limitation of science. In practical terms, I think it’s actually a false statement. As long as you’re willing to accept certain practical assumptions and definitions, science can indeed prove a negative. E.g. we can prove that I don’t have a dragon in my garage, provided we accept that a dragon would be made of physical stuff and would be detectable, which is pretty reasonable. We can’t, however, prove that I don’t have an invisible, non-corporeal (etc.) dragon in my garage, because there is no theoretical way to test that statement or falsify it.

On falsification – any testable claim is inherently falsifiable. I.e. if the test comes back negative, the claim has been falsified. Provided you make valid predictions about whatever your hypothesis is, and your tests are well designed, a negative result falsifies a claim.

I should add – that’s a conceptual level explanation. Real life is messier – small effect sizes, difficulty designing tests that properly isolate variables and all sorts of other confounding factors. This blog has a wealth of material on this topic.

PROVING A NEGATIVE
It is often claimed that it’s impossible to prove a negative, which is a false claim. If a positive claim is either true or false then the double negative of the claim is the same claim; and the single negative of the claim must be either false or true.

If a claim has epistemic closure then it is reasonable to assume that all premises that it does not contain are false. E.g. a train timetable for a train that goes from A to D also lists the times for stations B and C: it is reasonable to assume that the train does not stop anywhere else. So, if someone asks “Does it stop at station X?”, we reply “No”, and if they say “Prove it!”, we don’t need to sit at station X to prove that the train never arrives. In situations that have epistemic closure, absence of evidence is indeed evidence of absence.
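The timetable reasoning is easy to make concrete. Under a closed-world assumption, a simple membership test settles the negative (the station names here are invented placeholders):

```python
# Closed-world sketch of the timetable example: the schedule is assumed
# complete, so any station not listed is a station the train skips.
schedule = {"A", "B", "C", "D"}

def stops_at(station):
    return station in schedule

print(stops_at("B"))  # True
print(stops_at("X"))  # False: no need to wait at X to prove the negative
```

The whole argument rests on the closure assumption that `schedule` is complete; drop that assumption and the negative can no longer be proven this way.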

An argument from ignorance (aka appeal to ignorance) means an appeal to lack of evidence to the contrary, which is normally regarded as an informal fallacy. E.g. I claim Santa exists because there’s no evidence to the contrary: I’m shifting the burden of proof and my argument is fallacious. In the case of the train timetable, claiming that the train does not stop at station X is an argument from ignorance, but it is not a fallacious claim.

A fallacious argument from ignorance is often presented as being either true or false, but closer inspection may reveal that it is actually a false dichotomy that has excluded other choices: somewhere between true and false; or unknowable.

I have sort of lost the thread, frankly, but I believe it was hardnose who wrote:
“Most of what you do while getting a PhD is reading and thinking on your own.”

No.

In my experience this is not the case in the experimental sciences. Maybe in the humanities this is true, especially philosophy. Or maybe sometimes while you are waiting for your setup to be built so you can start collecting data (e.g., at a collider). But that is sort of annoying down time. 🙂

Most of what I did as a PhD student was experiments, nose to the grindstone, hours and hours a day. And you build up experience and intuition doing experiments that book reading simply doesn’t provide. From muscle memory (e.g., how to pipette, or whatever), to troubleshooting an electrophysiology rig, to pattern recognition of action potentials on the oscilloscope. These are skills and know-how that books cannot give you, and which give you a background set of skills that are *very* important in making you a “good” scientist.

Sure, it is possible to understand the key theoretical results of all this implicit and nonconceptual know-how and experimental skill base (which is obviously strongly coupled to theoretical and propositional knowledge). That is, the electrophysiologist can write a paper about their results.

But we don’t sit around reading and thinking on our own while getting a PhD.

“But we don’t sit around reading and thinking on our own while getting a PhD.”

I did experiments but I had to understand the topic first, which required a lot of reading. Before you do research you must do a complete literature review, at least. It’s better to go beyond that though and get a perspective that goes beyond your limited topic.

That is difficult, and there is no simple answer. Before deciding there really is no effect, you have to make sure your experiments are powerful enough to find an effect if there is one. There are guidelines but no simple rules.

It’s easy to “debunk” research you don’t like by doing weak experiments that of course don’t find anything.
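One way to guard against that is a power check before the experiment: estimate, by simulation, how often your design would detect an effect of plausible size if it were real. A minimal sketch (all numbers invented; this assumes normally distributed data with known spread and a one-sided z-test):

```python
import random

def estimated_power(n, effect, sd=1.0, alpha_z=1.645, sims=2000, seed=2):
    """Fraction of simulated experiments (n subjects per group) in which a
    real effect of the given size yields a one-sided z above the threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, sd) for _ in range(n)]     # control group
        b = [rng.gauss(effect, sd) for _ in range(n)]  # treatment group
        diff = sum(b) / n - sum(a) / n
        se = (2 * sd * sd / n) ** 0.5                  # standard error of diff
        if diff / se > alpha_z:
            hits += 1
    return hits / sims

# A small study (n=10 per group) vs a larger one, same true effect:
print(estimated_power(10, effect=0.5))
print(estimated_power(100, effect=0.5))
```

If the estimated power is low, a null result says very little; the experiment was never likely to find the effect in the first place, which is exactly how weak experiments can “debunk” real effects.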

Steve Cross, for the most part, biologists deal with communication in biological systems by postulating mythic constructs such as “homeostasis”.

Information flow (aka communication) is well developed in electrical engineering. Shannon’s mathematical theory of communication is universally valid for all communication, even in biological systems. There is essentially no application of that in biology (that I am aware of).

At the fundamental level, biological systems are discrete too. There are single molecules of DNA that are important, transcription results in a discrete number of molecules of product, there is (essentially) one-to-one mapping of receptors to ligands, it takes a discrete number of ATP molecules to activate something.
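For what it’s worth, the basic Shannon quantity is straightforward to compute for any discrete signal. A minimal sketch (the receptor probabilities here are invented for illustration):

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical: a receptor that is 'on' 1/8 of the time carries less
# information per observation than one that is on half the time.
print(entropy([0.125, 0.875]))  # ≈ 0.544 bits
print(entropy([0.5, 0.5]))      # exactly 1 bit
```

Channel capacity, noise, and coding all build on this same quantity, which is why Shannon’s framework applies to any discrete signalling system, biological or engineered.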

You can’t talk in generalities about the likelihood that a scientific consensus is wrong. Many times a position is wrongly characterized as a “consensus”. The example I gave above on IQ measurement and psychometrics is one where the majority of a field is wrong, considers itself correct, falsely portrays its position as the “consensus” of the field, and blocks any publication that seeks to challenge that false consensus.

I don’t consider myself an expert in psychometrics, but I have read a fair amount of the literature, and I know that the issues that Schonemann brings up are serious, are important, and strike to the heart of the field, and have never been addressed before or since. Until those issues are addressed, there can be no “scientific consensus” in the field.

An important part of what constitutes a “scientific consensus” is the field’s appreciation that there are no “loose ends”, “unexplained phenomena”, or “things that are not understood” that might be relevant. It is extremely rare for a field to be that mature, but there are a few examples: the germ theory of disease, evolution and common descent, greenhouse-gas-mediated global warming, quantum mechanics, general relativity, vaccines as public health measures, and radioisotope dating. There are still unknowns in those fields, but there is consensus on what the unknowns are.

There are fields where people falsely claim there is a consensus: genetic effects on IQ, diet effects on health, the amyloid hypothesis of Alzheimer’s, and type 2 diabetes as “insulin resistance” come to mind.

It has been my impression that HN is accurate in revealing that most of what he did while getting his PhD was sitting around reading and thinking. That might partially explain why so much of what he says is not grounded in actual science but reflects cherry-picked reading and thinking. That’s how a non-science-based ideology can easily get strengthened and argued with science-y language. Voila: many logical fallacies and pseudo-scientific conclusions.

HN, the null hypothesis is never *accepted* by an experiment or experimenter because it is the default hypothesis. The data must be sufficiently strong to reject it; if the evidence is weak then nothing has changed from the default — i.e. the hypothesis being tested remains invalid/untrue.

“So thank you all for not answering my questions about falsifiability versus proving a negative and the null hypothesis, in favor of responding to hardcore’s trolling”

It looks like you got some sympathy from the others here. It comes across more as childish to me. So people didn’t respond to your question and you get upset? How do you navigate the internet without crying all the time?

“I have a hard time distinguishing between ‘science doesn’t prove negatives’ and ‘hypotheses must be falsifiable.’ (I was a Lit. major.)”

Others have posted their responses, and I will attempt to contribute to the discussion. I will start with the latter question. To say a hypothesis is falsifiable is mostly a theoretical consideration, but there is a practical aspect as well. The following idea is being questioned, but I think it is true, regardless of its implications for what scientists do in practice: falsifiability is a necessary condition of a scientific hypothesis or theory.

What this means is that there needs to be a way of distinguishing an idea of being true from it being not true. The problem is that there are some things that scientists do, e.g., areas of theoretical physics, that are not clearly falsifiable at this time. Some of this is falsifiability in a practical sense (i.e., we don’t yet have the technological capabilities to test the theories), but there do seem to be areas that may not be falsifiable in principle either (i.e., there doesn’t seem to be a way to distinguish them from other theories).

But in most situations the answer is clearer. The obvious example is the idea that the world is exactly as it is, but exists due to a deist god that set everything into motion and no longer interacts with it. That is not a scientific theory, and it is not testable, because there is no way to distinguish that universe from one with no deist god.

The statement that “science doesn’t prove negatives” is not helpful, I don’t think, and can lead to confusion. I think the idea comes from the structure of most scientific studies, in that a null hypothesis is assumed, unless it is probabilistically unlikely.

The null hypothesis is a term from inferential statistics. It assumes that the groups in the study come from the same population, and that any differences seen are due to expected sampling variation. In other words, it assumes no differences between the groups unless that is probabilistically unlikely given the results. The quote implies that, since studies are usually structured so that we assume the groups are samples from the same population, we cannot prove this idea. That is technically correct but practically irrelevant, because science never works that way. We are always making probabilistic statements about what is likely, and we don’t “prove” “positives” either.
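That sampling-variation idea can be made concrete with a small simulation. This is a hedged sketch: both groups are drawn from the same hypothetical population (so the null hypothesis is true by construction), and it uses a nominal 5% threshold with a simple z approximation rather than a proper t-test.

```python
import random
import statistics

random.seed(1)

def false_positive(n=30):
    """Draw two groups from the SAME population and test whether they
    'differ significantly' anyway, purely through sampling variation."""
    a = [random.gauss(100, 15) for _ in range(n)]
    b = [random.gauss(100, 15) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # nominal 5% threshold

rate = sum(false_positive() for _ in range(4000)) / 4000
print(rate)  # roughly 0.05: "significant" differences appear by chance alone
```

Even with no real effect anywhere, about one comparison in twenty crosses the significance threshold; that is exactly the expected sampling variation the null hypothesis describes.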

“Information flow (aka communication) is well developed in electrical engineering. Shannon’s mathematical theory of communication is universally valid for all communication, even in biological systems. There is essentially no application of that in biology (that I am aware of).”

The null hypothesis is not the default. Before doing research we don’t know if there will be an effect or not. It is absolutely wrong to accept the null before any high-quality research has been done. (Oh, but what counts as high-quality research is often a subjective judgment.)

“The null hypothesis is not the default. Before doing research we don’t know if there will be an effect or not. It is absolutely wrong to accept the null before any high-quality research has been done.”

Umm … Nope. The null hypothesis makes NO assumptions about whether or not there is any effect. The null hypothesis is merely a statement of what it would look like if there is no effect — which you must know before you can define what conditions actually do indicate a real effect.

My understanding is: hypothesis = ‘x’; null hypothesis = not ‘x’. Null hypothesis says nothing else except ‘x’ has not met its burden of proof. Null is the state of play with no consideration to ‘x’. Is this correct?

mumadadd, If tests on ‘x’ sufficiently reject the null hypothesis this still does not mean that ‘x’ is correct because there could be reasons other than ‘x’ that explain the results — alternative hypotheses ‘y’, ‘z’, etc.

E.g. ‘x’ = rain makes the ground wet. Yep, we know that’s true and we know that if the ground is dry it cannot be raining. But, what about hypothesis ‘y’ = hosepipes make the ground wet; and ‘z’ = water makes the ground wet. Hypothesis ‘z’ is a better explanation than either ‘x’ or ‘y’.

With all due respect, you’ve created a straw man version of the things I’ve actually said. I’ve never said that the scientific consensus is always right or that it should not be vigorously challenged. Quite the opposite in fact, i.e. I said “question everything, all the time” earlier in this comment thread.

Likewise, I’ve stated that it is important to try to understand the actual strength of consensus and I’ve admitted that it is often difficult for a layman to do that.

I’ve also never implied that there is no value in applying different perspectives from various disciplines — I said the exact opposite was true.

All of my comments have been directed at hardnose and his ilk. As you must know, his standard operating procedure is to jump into the conversation with some (often false or misleading) factoid which he claims throws the general consensus into doubt. Inevitably, he will then rant on about how close-minded we are because we aren’t open to alternative viewpoints.

My entire point has always been that two opposing viewpoints are rarely exactly equal in validity, and that the non-expert will (and should) have a difficult time changing the consensus of experts. I’m not saying that the experts are always right or shouldn’t be challenged, but I do believe that, as a general rule, amateurs (such as hardnose in 99% of the things he claims) should expect to be treated with a great deal of skepticism and be prepared to provide a LOT of evidence for their claims — they can’t just plead “reasonable doubt” and expect to be taken seriously.

I’ve always thought of the null hypothesis as just another tool to help determine if the effect you’re seeing is real or not.

For example, suppose someone claims to have psychic powers that can affect the outcome of a coin toss. We know that if I toss a coin 10 times, then I’m most likely to get 5 heads and 5 tails, but I also could get 6/4, 7/3, even 10/0 on occasion. If I repeat the experiment a LOT of times, I’ll get a lot of 5/5, slightly less 6/4 and 4/6, even less 7/3 and 3/7 combos and so on to just a very few 10/0 and 0/10 results. This would be my null hypothesis — in other words, the statistically likely distribution I would expect if there is NO outside effect.

Now, when I repeat the experiment with the “psychic” a sufficiently large number of times, I’ll be able to compare the results to the null hypothesis and determine if there is any significant effect. In this experiment the null hypothesis is simply what you would expect from random chance, but obviously different experiments would have different “no observable effect” conditions.
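The coin-toss null hypothesis above can be written down exactly. A small sketch, using only fair-coin probabilities (a real test of a “psychic” would of course need many more tosses and a pre-registered threshold):

```python
from math import comb

def p_heads_at_least(k, n=10, p=0.5):
    """Probability of k or more heads in n tosses of a coin with P(heads) = p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Under the null hypothesis (a fair coin, no psychic effect):
print(p_heads_at_least(5))   # ~0.62 — 5 or more heads is unremarkable
print(p_heads_at_least(8))   # ~0.055 — starting to look unusual
print(p_heads_at_least(10))  # ~0.001 — 10/10 would be genuinely surprising
```

Getting 8 or more heads out of 10 happens by chance about 5.5% of the time, so a single run like that would be weak evidence at best; this is the “statistically likely distribution with NO outside effect” that the comparison is made against.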

My understanding is similar to yours, mumadadd. You are testing to see if x is true (ie, does acupuncture prevent migraines?). Therefore the default must be ~x (ie, acupuncture does not prevent migraines). The default must be ~x; otherwise, why are you testing x if you already know it is true? The outcome of your clinical trial will be positive if the results have lessened the probability that ~x is true.

Funny you should say that; I mentally went through my own, more vulgar, version of this earlier when I mentioned Sagan’s dragon (though I released it from my garage for the following):

We could look for poo, because dragons would be biological organisms that eat food. It’s a valid prediction, but finding poo wouldn’t be proof of dragons. And not finding poo wouldn’t be proof that there are no dragons, unless we were entirely sure that dragons produce poo, and we had perfect poo detection abilities deployed across the entirety of the hypothetical dragons’ possible pooing space over a sufficient time. It could then also be possible that dragons bury their poos to avoid detection by their prey, and the massive team of scientists we’ve employed to walk around looking for dragon poo found nothing, yet there were still dragons. And, of course, there could still be solar-powered dragons that don’t poo at all.

“It looks like you got some sympathy from the others here. It comes across more as childish to me. So people didn’t respond to your question and you get upset? How do you navigate the internet without crying all the time?”

And we are assholes for ignoring those questions in favour of continuing to entertain the troll.

Your comment about HN was directed at his assertion that he has insight into biological systems that biologists don’t have by virtue of his computer experience, where you imputed he was exhibiting the Dunning-Kruger effect. That is the specific example I was commenting on, not on anything else.

I don’t hold anyone, you, or HN, or myself to different standards.

It doesn’t matter how many times HN is wrong, you can’t use “HN was wrong about X, Y, Z” to justify that “HN is wrong about W”. They are independent statements and require independent verification.

You can’t dismiss anyone because they are an “amateur”. You can only dismiss them because they are factually incorrect. If someone is factually incorrect, they should be dismissed, even if they are an “expert”.

This is probably the most difficult thing for people who consider themselves “skeptics” to do; to not default to social cues and feelings when dealing with factual matters. A large part of this problem is tied up in how human cognition evolved, and in how humans have hyperactive agency detection. The human default is to impute agency and to impute “causation” by that agent when things happen. People use how they “feel” to infer how reliable a piece of knowledge they think they have actually is. Feelings are a terrible way to impute how reliable a piece of knowledge is. The mechanism by which “feelings” occur is unknown and is completely opaque. It cannot be verified the way that data and logical arguments can be verified.

I try to treat everyone and everything with the same degree of skepticism. I wish people would treat me with the appropriate level of skepticism. Unfortunately most people do not use appropriate levels of skepticism. People who think I am an “expert”, think I am always right, and people who don’t know what I am expert in, think I am wrong.

In terms of how to use ideas that are “the consensus of experts”, that depends on the particular cost/benefit of action vs non-action. In the case of AGW, whether the “experts” who hold the AGW consensus are correct is not the question. The question is (or should be), “is there sufficient evidence that the benefits of taking action to reduce greenhouse gas emissions exceed the costs of doing so?” If that question were examined and acted on honestly, action would have been taken decades ago. The “cost” to mitigate CO2 emissions is small. The “problem” is that those “costs” disproportionately affect those with fossil fuel assets, so they have a very strong incentive to “game” the system and put out disinformation (which they did).

Perfect information is not necessary to make good decisions. Intellectual integrity is necessary to make good decisions. The most important part of intellectual integrity is not letting your feelings interfere with what you think are facts and logic.

“There are fields where people falsely claim there is a consensus; Genetic effects on IQ, diet effects on health, Amyloid hypothesis of Alzheimer’s, Diabetes type 2 as “insulin resistance” are ones that come to mind.”

This is misleading at best. Yes, I’m sure that there are “people (that) falsely claim there is a consensus,” but researchers studying those topics, as a whole, acknowledge a lack of consensus about many aspects of those topics (despite having their opinions in those areas).

Rarely is a field so insulated that a conspiracy propping up a false notion can be maintained as a consensus. I’m sure you can find an example or two early in a field’s history, but it is hard to imagine such a scenario in modern times, in which there is not an incredible motivation to tear down wrong ideas in science. There are enough neighboring disciplines whose scientists could make their careers by exposing this.

But the main point is that if a consensus is wrong, that is determined by doing science and demonstrating that it is wrong. Rarely does this occur when there is a true and robust consensus. For this reason, if only for pragmatic reasons, it is a far better bet for the average person to accept the consensus of experts on robust and mature topics. What is the alternative? A bunch of “opinionated know-nothings.” I saw enough of those in the GOP debate a few days ago, and the result is not good.

“In the case of AGW, whether the ‘experts’ who hold the AGW consensus are correct or not is not the question. The question is (or should be), ‘is there sufficient evidence that the benefits of taking action to reduce greenhouse gas emissions exceed the costs of taking actions to reduce greenhouse gas emissions.’ ”

The problem is to answer the latter question, it requires expertise, and a consensus of experts is the best bet as to what is likely to be true. Otherwise, who is evaluating the evidence? From there we determine as a society (via policy makers) what steps we want to take to affect the likely outcomes as described by the relevant experts.

daedalus2u, Ray Hyman talked about using the principle of charity: we assume our “opponent” is acting in good faith until proven otherwise. Motivated reasoning, especially constant attacks on ‘materialistic science’, is a factor to take into consideration in the assessment of arguments.

When someone makes assertions that cannot be checked, the skeptic defaults to “I don’t know”, and so the matter remains unresolved.

How much it costs to reduce greenhouse gas emissions is a completely different question than is AGW a real and serious danger. As a different question, it takes different expertise to answer.

In many areas, electricity from wind and solar is already cheaper than electricity from fossil fuels. If you add any cost for mitigating the effects of CO2 emissions, then wind and solar are cheaper right now.

There is no economic model that puts the cost of mitigation of AGW at zero or negative (although the AGW deniers do like to say that CO2 is a nutrient). At current CO2 levels, Greenland is unstable and will melt. When Greenland melts, sea level will go up 7 meters. Sea level rise will flood Florida. The destruction of all land less than 20 feet above sea level is not a zero-cost event.

This is why AGW denialists need to deny that AGW exists. If they admit it exists, then the questions become those of effects, timing and mitigation costs.

What is the value of all land less than 20 feet above sea level? If the value of that land is $25 trillion, what is it “worth” to delay the destruction of that land by a year? Seems to me that would be something like the net present value of $25 trillion. Plus some more, because there are intangibles and human costs that can’t be monetized. Spending $1 trillion per year does not seem excessive.
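The “worth of a year’s delay” is a discounting question, and it can be sketched in a few lines. A toy calculation only: the $25 trillion figure is the comment’s; the 3% discount rate and 50-year horizon are arbitrary assumptions added for illustration. Delaying a future loss by one year is worth the difference between its present values at the two dates:

```python
def delay_value(loss, years_away, rate):
    """Present-value gain from pushing a future loss one year further out."""
    pv_now = loss / (1 + rate) ** years_away        # loss hits in `years_away` years
    pv_later = loss / (1 + rate) ** (years_away + 1)  # loss hits one year later
    return pv_now - pv_later

# Hypothetical: a $25 trillion loss 50 years out, at a 3% discount rate.
print(delay_value(25e12, 50, 0.03) / 1e12)  # value of the delay, in trillions
```

Under these made-up numbers the one-year delay is worth roughly $0.17 trillion; different rates and horizons change the figure a lot, which is exactly why the cost-benefit framing needs its assumptions stated explicitly.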

With all due respect, I think you’re living in a bit of an ivory tower. The world is complicated and getting more so all the time. Very few of us are likely to be expert (or even competent) in more than one or two relatively narrow fields. Regardless of whether the expert consensus is correct (or even particularly strong), for all practical purposes, amateurs/non-experts have no real alternative to trusting the acknowledged experts in other fields.

At minimum, the experts are considerably better than random guessing, which is pretty much what we would have to rely on for any field outside our own expertise. Obviously, it behooves us to acquire as much general scientific knowledge and as many critical thinking skills as we possibly can, but when it comes right down to it, for any given question, experts are more likely to be correct than non-experts.

Certainly, you can give examples where the experts were wrong, but generally those fields tend to be highly complex and usually greatly subjective as well, and there is no reason to believe that the non-experts would be any less wrong (although they might be wrong in a completely different way).

In general, science is self-correcting, and fairly quickly at that, especially the branches that are more amenable to objective measurement. But even in the soft sciences, going against the experts is likely to be a losing strategy in the long term.

Regarding hardnose, while it may not have been clear, my comment about Dunning-Kruger was not based on his comment about applying computer science insights to biology — or at least not solely. Rather, I felt that particular comment was yet another in a long line of similar pronouncements hardnose has made that lead to the inescapable conclusion that he feels completely confident that he is fully qualified to cast judgment on countless other expert consensus opinions — in none of which he has any demonstrated expertise. It’s not that I think HN is going to be wrong because he has been wrong so many times in the past. It’s just that he has given me no reason at all to think he might be right this time.

And while I have already agreed that approaching problems with new perspectives can often be useful, I still maintain that without expertise in both fields, any insights gained are just as likely to be wrong as they are to be useful. It’s the old saying about hammers and nails. To use your example, biologists can certainly suffer from tunnel vision and overlook something obvious to the right kind of outsider. But if the outsider’s particular “hammer” is incompatible with something of which the “hammer-wielder” is completely unaware, then the “nail” still won’t get pounded.

You are forgetting that the people who oppose a scientific consensus are very often experts in that field.

As a non-expert I can read about and consider both sides, and I might decide that the minority group of experts makes more sense than the consensus group.

You are also ignoring the tendency for a consensus group to become self-perpetuating. Once a group becomes the majority it is assumed to be correct, so more people join it. The opposition is over-powered and ridiculed and deprived of funding.

People, even experts, tend to agree with the majority simply because it is the majority.

Science can be self-correcting, but it can also fall into very deep ruts for very long periods of time.

HN, the null hypothesis is never *accepted* by an experiment or experimenter because it is the default hypothesis. The data must be sufficiently strong to reject it; if the evidence is weak then nothing has changed from the default — i.e. the hypothesis being tested remains invalid/untrue.

You’re describing a sort-of statistics-for-non-majors formulation of significance testing popularized by Fisher. In fact, though, more rigorous forms of hypothesis tests exist that do not give the null hypothesis any special “default” status, and hence we certainly can and do “accept” the null hypothesis. Classical and Bayesian hypothesis tests both permit it.
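A concrete illustration of “accepting” the null in the Bayesian framework (all numbers hypothetical): compare how well “the coin is fair” predicts the data against an alternative that spreads belief uniformly over all possible biases. With a uniform prior on the bias p, the marginal likelihood of k heads in n tosses works out to exactly 1/(n+1).

```python
from math import comb

def bayes_factor_null(k, n):
    """BF01: evidence for the null (p = 0.5) over a uniform prior on p."""
    like_null = comb(n, k) * 0.5 ** n  # P(data | fair coin)
    like_alt = 1 / (n + 1)             # integral of P(data | p) over uniform prior
    return like_null / like_alt

# Hypothetical data: 52 heads in 100 tosses.
print(bayes_factor_null(52, 100))  # > 1: the data favor the null
```

For 52 heads in 100 tosses the Bayes factor comes out around 7 in favor of the fair coin, i.e. the data positively support the null rather than merely failing to reject it, which is the asymmetry the comment is pointing at.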

You are forgetting that the people who oppose a scientific consensus are very often experts in that field.
As a non-expert I can read about and consider both sides, and I might decide that the minority group of experts makes more sense than the consensus group.

Well, that’s comforting … although it does kind of remind me of all the non-expert parents who decide that the “expert” Andrew Wakefield makes more sense than the consensus group of physicians who recommend vaccinations.

You are also ignoring the tendency for a consensus group to become self-perpetuating. Once a group becomes the majority it is assumed to be correct, so more people join it. The opposition is over-powered and ridiculed and deprived of funding.
People, even experts, tend to agree with the majority simply because it is the majority.
Science can be self-correcting, but it can also fall into very deep ruts for very long periods of time.

Unless you have solid proof that it is MORE common for the expert consensus to be wrong, it is still smarter to at least provisionally accept the expert consensus as the most likely to be true.

Besides, if it is so damn obvious that the experts are wrong that even a non-expert can plainly see it, then the consensus will change PDQ. They don’t give Nobel Prizes for agreeing with what we already know.

Yeah, as a non-expert…
…I can read all about the expert consensus on climate change and then I can read all about why this consensus is wrong from the minority of experts who disagree with the consensus view and I can then decide which view is correct.
…And my decision as to which is correct will be worthless – because I am a non-expert!

The truth is that non-experts who insist on taking a view on subjects that require expertise often do so because of stated or unstated underlying motivations.

jt512, I was highlighting the difference between science and pseudoscience. In pseudoscience, all hypotheses plucked out of thin air are valid until science proves them wrong, then another hypothesis is plucked out of thin air. What’s the current one for homeopathy, I’ve forgotten, is it nanobollocks or quantum-woo? Subluxations being the cause of disease is still believed by many chiropractors. Then we have meridians, chi, chakras, and undetectable energy. All examples of the dangerous things that can happen when the null hypothesis is not the default.

I have a hypothesis that homeopathy can be used to strengthen concrete instead of steel. Obviously, the null hypothesis must be the default before rigorous testing has been performed. Steel used to reinforce concrete was originally an hypothesis, now it is an established theory, which means it has become the default hypothesis rather than the null hypothesis. I’m not sure why you find the basics so difficult to understand.

Steve, BillyJoe, mumadad and others: Thanks for the comments, and I apologize for my irritation earlier.

Now that I’ve found this page, I have a lot to think about (I’m a writing teacher who wants to incorporate critical thinking for the lay person into my syllabus).

Some of my questions I thought I already knew the answer to, so it’s nice to hear them confirmed.

Here’s an example of a typical exchange I hear as a small-time farmer: “No one has proven that those pesticide residues are safe (ie. not harmful).”

The answer is complicated, but people want simplicity. They want things “proven” safe. It irritates the hell out of me trying to answer them.

The hypothesis to be tested would be: “X amount (below established tolerances) of pesticide residue Y in produce does NOT cause effects.” Batteries of tests are run on mice, rats, dogs, etc. and the results are negative–no effects shown.

Therefore, the conclusion would NOT be that pesticide residues are “safe,” but that “no effects have been shown below the tolerance listed”; in other words, pesticide residue is for all practical purposes “safe.” There’s safe and there’s “safe.” There’s theory and there’s theory. There’s proven and there’s proven. Science and the popular mind seem to be continually at odds in their languages.

Mike, It’s not about “Is this safe?”, it’s about the risk-benefit and the cost-benefit ratios. Drinking a glass of water carries the risks of choking to death and being injured by the glass, but if we don’t drink anything we will surely die.

Pete, yes. And it’s “interesting” to meet people at farmers markets who are convinced that the risks of eating an apple that was sprayed a month ago with captan and imidan outweigh the benefits of eating fresh fruit. I just don’t argue with them anymore.

Good luck with your course. The world definitely needs better critical thinking skills.

In your search for information, I highly recommend Dr. Novella’s course from “The Great Courses” called “Your Deceptive Mind: A Scientific Guide to Critical Thinking”. The video version is frequently on sale for less than $50.

I recommended it a while ago, sight unseen, because I trusted Dr. N. Since then, I finally “forced” myself to make time over the holidays to view it myself, and I couldn’t be more pleased. Aimed at the layman, it is an excellent introduction to critical thinking, understanding logical fallacies and avoiding the tricks that our minds play on us.

“Pete, yes. And it’s ‘interesting’ to meet people at farmers markets who are convinced that the risks of eating an apple that was sprayed a month ago with captan and imidan outweigh the benefits of eating fresh fruit. I just don’t argue with them anymore.”

MikeB, my response to this evidence-free assertion would be to point out its obvious incompatibility with the best evidence available. Of all the dietary recommendations I can think of, the least controversial and most supported by evidence from various angles is that diets high in fruits and vegetables are correlated with better outcomes. If the presence of pesticides, in general, could outweigh the benefits of eating fruits and vegetables, then we would never have established this relationship.

If their assertion is specific to certain pesticides, that is harder to refute, but that is also likely due to their assertion being completely devoid of evidence. If they are avoiding particular pesticides, then we need to look at what their alternative is, and if there is any evidence that those are ‘safer’ at the doses present in foods.

Then we can wonder why people worry about things that are relatively low risk, yet take large risks without much thought. For some reason, risks that are invisible are scarier than risks that are not, with little regard to their actual likelihood of harm. I think some of this is because imagination is required to think about things like pesticides and ghosts, while people hop in a car on a daily basis and are comfortable with it.

I have a hypothesis that homeopathy can be used to strengthen concrete instead of steel. Obviously, the null hypothesis must be the default before rigorous testing has been performed. Steel used to reinforce concrete was originally an hypothesis, now it is an established theory, which means it has become the default hypothesis rather than the null hypothesis. I’m not sure why you find the basics so difficult to understand.

To me, as a statistician, nothing you wrote makes sense. It’s not me who doesn’t understand “the basics.” It seems to me that skeptics:statistics::pseudoscientists:science. The former borrow the terminology of the latter to make themselves sound like they’re actually doing something rigorous.

jt512, I agree with David Colquhoun, who wrote: “Falsifying hypotheses is how science works. Every scientist should be doing their best to falsify their pet hypothesis (the fact that many don’t is one of the problems of science, but that’s not what we are talking about here).” http://chalkdustmagazine.com/features/the-perils-of-p-values/

@Pete A:
Although I agree with Colquhoun’s thesis in that article (ie, that p-values are problematic), he is confused on the point you quote, and thus so are you. Colquhoun wrote:

The p-value does exactly what it claims. If it is very small, then it’s unlikely that the null hypothesis is true. Falsifying hypotheses is how science works. Every scientist should be doing their best to falsify their pet hypothesis…

Testing by using p-values is not an attempt to falsify the researcher’s “pet” hypothesis; it is exactly the converse: an attempt to confirm the researcher’s hypothesis by falsifying its complement, the null.

jt512, My conclusion of all the comments that you have addressed to me on Dr Novellas website thus far:
1. You have occasionally been extremely helpful, which I truly appreciate.
2. You have been mostly insulting, both outright and obliquely, which I find intriguing because it says far more about you than it does about me. Having spent my whole career in engineering, there is nothing you could think of saying that would actually offend me. The only people who can possibly offend me are those I actually give a shit about.

And here you are now, claiming that David Colquhoun is confused. What is the Bayesian probability that you are correct? My estimate probably doesn’t agree with yours! Think very seriously about this the next time you need the expertise of a pharmacist.

You often give the impression that you are an obnoxious authority on statistics who enjoys criticising skeptics who are simply trying their best. In case it hasn’t yet dawned on you, skepticism isn’t all about “actually doing something rigorous”. If you want to be a rigorous skeptic then you must first learn how to be genuinely helpful, rather than keeping your head stuck firmly up your arse.

“The people who make tenure decisions (administrators) are not expert in the cutting edge of science, so they can’t make judgements based on how a particular candidate is advancing it or not.”

The administrators are other professors/researchers, so their judgments can be fair. However they are also human beings so they won’t give tenure to someone they don’t like, or someone who doesn’t “fit in.”

That wasn’t my experience, by the way, since I never tried to get an academic job. I observed the nonsense that went on while in graduate school. It was disillusioning because like so many at this blog, I had a romanticized view of science.

I even heard one new professor state proudly that she was hired because the department chair liked her. I mean, she didn't even see any problem with that. Also, she had done a very lame dissertation at an Ivy League school, and had passed because her advisor liked her. Then of course she got tenure, because everyone liked her so much. I really doubt she ever made any discoveries that contradicted the consensus.

“Actually what you get tenure for is publishing in high tier glamour journals and bringing in lots of research funding.”

I think most people with actual familiarity with the process would agree wholeheartedly. But that still strongly refutes hardnose's original point, i.e. that the tenure system is biased towards conformity.

Actually, just the opposite is true. One of the most common complaints about the journals is that they are not interested in publishing replication studies — rather, they only want new, “provocative” research. The same thing holds true for the research funding.

Remember, the original purpose of the tenure system was to allow established experts to challenge the orthodoxy without fear of reprisal.

Actually what you get tenure for is publishing in high tier glamour journals and bringing in lots of research funding.

That is, at best, half true. Bringing in funding helps, and may even be a requirement in some departments, and more funding is certainly better than less, but a candidate may bring in “lots of funding” and be denied tenure because their work isn’t excellent, and they may bring in little or no funding, but be granted tenure because their work looks exceptionally promising, or because of their teaching excellence, etc.

The people who make tenure decisions (administrators) are not expert in the cutting edge of science, so they can’t make judgements based on how a particular candidate is advancing it or not.

Yes, the person who makes the official decision is unlikely to be an expert in the candidate’s field; however, by the time the decision reaches that person’s desk, the candidate has already gone through at least two levels of approval, most importantly, by the candidate’s own department, whose members are subject-matter experts. Furthermore, tenure committees rely heavily on outside evaluations of the candidate, from researchers whose work is closely related to the candidate’s, and are thus in the best position to determine the significance of the candidate’s work.

“…she was hired because the department chair liked her… Also, she had done a very lame dissertation at an ivy league school, and had passed because her advisor liked her. Then of course she got tenure, because everyone liked her so much. I really doubt she ever made any discoveries that contradicted the consensus.”

Ah, the old HN narrative: "Mainstream" science is some kinda club that excludes people like him for being too edgy and unorthodox and maverick, while rewarding "likeability." People do not get into Ivy League schools because they are likable, do not "pass" dissertations because they are liked, and do not get tenured for it either. Maybe they are liked, but they must also be f**kin' smart as balls, good at their jobs, and supremely qualified — you dumbass.

There are analogous high profile science papers that generate a lot of hype, but which do not advance science.

GWAS for intelligence? So far, no genes have been identified.
Treatments for Alzheimer's? All treatments based on reducing amyloid have failed.
Why is there a “reproducibility crisis”?
Why are a higher percentage of articles in “high tier journals” retracted?

These are not problems of “science” per se, they are problems of how science funding is allocated and how credit for science is awarded.

“Ah, the old HN narrative: “Mainstream” science is some kinda club that excludes people like him for being too edgy and unorthodox and maverick, while rewarding “likeability.” ”

I made sure to say it wasn’t my experience, so you wouldn’t repeat that same old accusation. But you did anyway.

I never applied for an academic job because I decided to develop software instead. I was never rejected for tenure since I never applied for it. I passed my dissertation, either because it was good research or because the professors liked me. I never made any waves.

When you are an employee at a company, or when you are a graduate student, you have to more or less fit in. That is the nature of life. People who make trouble in any way are not loved and will be gotten rid of sooner or later.

So it can be hard for science to progress.

I actually did some good research, but fortunately my professors just skimmed it and saw the p-values were under .05, and that's what you really need to pass.

“Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.”

Karl Popper

The history of science is littered with examples of consensus being turned on its head. And in the internet age, a usually anonymous majority often ridicules the minority as cranks and trolls.

An example in recent years is the persistence of the amyloid hypothesis for Alzheimer's disease, as noted above. Scientists have repeatedly stated that either plaques or oligomers are neurotoxic, but have presented very little evidence for their perspective.

The alternative hypothesis is that Alzheimer’s disease is caused by oxidative stress.

Second is the question of evaluating studies. Antioxidant compounds in plants have partially reversed Alzheimer's disease in four cases: aromatherapy with rosemary, lemon, orange, and lavender essential oils; Korean red ginseng; heat-processed ginseng; and davaie loban (ginger, sweet flag, black pepper, and nut grass). All of these studies were small, and only the last was double-blinded and randomized. Unless there is deliberate manipulation of data, however, studies of this type only nullify a hypothesis for people who are already predisposed to disbelieve that hypothesis. The neutral way to state it would be that these studies are insufficient.

Billions of dollars are spent trying to prove a hypothesis that has failed in clinical tests; almost nothing is spent on attempting to verify treatments that have shown promise in clinical tests. No one is completely without bias, but if a person rejects results on the basis that they are small, that they do not fit the consensus hypothesis, or that they do not fit their world view, then they are doing a grave disservice.

You are arguing against a straw man. No one here has ever suggested that anything should be arbitrarily dismissed. Just the opposite actually. The main point of Dr. Novella’s blog is to encourage scientific skepticism.

Steve has stated repeatedly that all science is provisional, all evidence must be examined impartially, and most importantly, we must always be willing to change our mind when the evidence dictates.

Yes, humans are fallible and science does make mistakes. Which is exactly why the scientific method includes correction mechanisms.

We already know the consensus opinion is not infallible, but no one has ever suggested that it is wrong more often than it is right — especially in the long run. You can also make the case that it is getting more reliable over time (although still not perfect) with improved tools and techniques.

All of which strongly implies two things:

A non-expert should do their best to find out what the expert consensus opinion actually is and then follow that advice. Only by doing that will a layman have the best chance of being right more often than they are wrong.

And, while nothing should be arbitrarily dismissed, outlier opinions should have a higher burden of proof simply because most of them will turn out to be wrong.

Never forget that virtually every quack or fringe theorist in history has used the same "science is wrong sometimes, therefore it is wrong now because it disagrees with my pet theory" argument. A few times, of course, it has been used by people who were eventually vindicated.

But that does not make it a valid argument. Rather, it is simply another common cognitive bias and one of the many ways we can delude ourselves.

The only thing that really matters is the evidence, and I think the tremendous scientific progress of the last few hundred years is pretty good evidence that science gets it right more often than not. Of course, like any good skeptic, if you have evidence to the contrary, I'm willing to change my mind.