Unfortunately, the “yoga mat chemical” is an effective meme. Who wants to eat something that can be found in a yoga mat? Many journalists, such as Lindsay Abrams, have bought into the meme without any critical analysis. Abrams helpfully provides her readers with a list of “500 more foods containing the yoga mat chemical.”

Here are some other foods her readers might also want to reconsider:

This popular health food can also be found in industrial lubricants, solvents, cleaners, paints, inks, adhesives, and hydraulic fluid. It is burned as fuel. It is also used to make foam found in “coolers, refrigerators, automotive interiors and even footwear.” It is used to make carpet backing and insulation.

Hawaii Senate Bill 2571, which is making its way through the legislature, would require that a large non-removable warning label be attached to the back of every cell phone. Originally the warning label was to read, “This device emits electromagnetic radiation, exposure to which may cause brain cancer. Users, especially children and pregnant women, should keep this device away from the head and body.” A revised version of the bill, however, changed the warning to, “To reduce exposure to radiation that may be hazardous to your health, please follow the enclosed product safety guidelines.”

This seems like an example of clear nanny-state overreach, but worse, it is not based on science. According to reports, every expert consulted by the relevant committees argued against the measure, but the legislators passed it anyway. The measure has one more committee to get through, and then it would go to the House for a vote.

As I have written before (see also here and here) there is no clear link between cell phone use and brain cancer. The plausibility of a link is low but not zero. Non-ionizing radiation is not energetic enough to break chemical bonds, and therefore should not cause DNA damage that could lead to cancer. However, an alternate physical mechanism cannot be ruled out, and biology is complicated, so I don’t think we can rule out a possible connection on theoretical grounds alone. We can just say it’s unlikely.
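The energy argument can be made concrete with a quick back-of-the-envelope calculation. The figures below are illustrative (a typical cell/Wi-Fi frequency and a typical covalent bond energy), not taken from the post itself, but they show the scale of the mismatch:

```python
# Compare the energy of a single cell-phone-band photon with the energy
# needed to break a typical chemical bond. Illustrative figures.

PLANCK_H = 6.626e-34          # Planck's constant, J*s
EV_PER_JOULE = 1 / 1.602e-19  # conversion factor, eV per joule

def photon_energy_ev(frequency_hz: float) -> float:
    """Energy of one photon at the given frequency, in electronvolts (E = h*f)."""
    return PLANCK_H * frequency_hz * EV_PER_JOULE

cell_phone_hz = 2.4e9   # a common cell/Wi-Fi band, about 2.4 GHz
bond_energy_ev = 3.6    # typical C-C covalent bond, about 3.6 eV

photon_ev = photon_energy_ev(cell_phone_hz)
print(f"cell-phone photon: {photon_ev:.2e} eV")
print(f"shortfall factor:  {bond_energy_ev / photon_ev:.0f}x")
```

The photon comes out around ten millionths of an electronvolt, hundreds of thousands of times too weak to break a chemical bond, which is why direct DNA damage from non-ionizing radiation is considered implausible.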

A recent column by political commentator Charles Krauthammer, attacking the notion that global warming is “settled science,” has been getting a lot of attention. Although perhaps he is making a more nuanced argument than most global warming dissidents, Krauthammer is still largely attacking straw men and engaging in tactics of denial. Up front he says he is neither a global warming denier nor a believer, but his arguments are certainly mainstream global warming denial.

He begins:

“The debate is settled,” asserted propagandist in chief Barack Obama in his latest State of the Union address. “Climate change is a fact.” Really? There is nothing more anti-scientific than the very idea that science is settled, static, impervious to challenge.

To be fair, Krauthammer is talking about the politics of climate change as much as the science, and politicians often open the door to criticism by overstating the case or glossing over complexity and nuance. That does not, however, justify the same sloppiness by Krauthammer. The language above is virtually identical to that used by creationists to attack the position that evolution is a “settled fact” of science. Both arguments erect a straw man about what we mean by settled.

In both cases (evolution and climate change) there is a core scientific claim that is well established, surrounded by details that are progressively less certain. That life on earth is the product of evolution with common descent is established beyond all scientific doubt, sufficient to be treated as a fact. It would take a great deal of rock-solid evidence to push evolution from its scientific perch.

Classification systems are important in science. They often reflect our fundamental understanding of nature, and are also important for efficient and unambiguous communication among scientists. But there is also an emotional aspect to the labels we attach to things.

Perhaps the most famous example of this from recent history is the “demoting” of Pluto from planet to dwarf planet in 2006. There was a great deal of hand wringing about this decision, which ultimately was based on a practical operational definition – a planet needs to be in orbit around the sun, be large enough to pull itself into a spherical shape, and have cleared out its orbital neighborhood. Pluto failed the third criterion, and so was reclassified a “dwarf planet.”
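The three criteria amount to a simple decision rule. A toy sketch (the function and the third fallback category are paraphrased from the 2006 IAU definition; this is not any official implementation):

```python
# The IAU's operational definition as a decision rule.
# Bodies failing the first two criteria fall into a third IAU category,
# "small solar system body".

def classify_body(orbits_sun: bool, is_spherical: bool,
                  cleared_neighborhood: bool) -> str:
    """Apply the IAU's three criteria in order."""
    if not (orbits_sun and is_spherical):
        return "small solar system body"
    return "planet" if cleared_neighborhood else "dwarf planet"

print(classify_body(True, True, True))   # Earth's case -> "planet"
print(classify_body(True, True, False))  # Pluto's case -> "dwarf planet"
```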

It is very telling that most news reports discussing the category change characterized it as “downgrading” or “demoting” Pluto. Clearly people felt that being a planet was more special or prestigious than being a dwarf planet. This is not unreasonable – planets are generally larger and have a more dominant presence in the solar system. There are currently 8 planets, and that number is now very unlikely to change. There are only 5 named dwarf planets, but that number could climb much higher as new Kuiper belt objects are discovered and named.

Still, it is interesting that what should be a technical issue appealing to those who love the Dewey Decimal System became such an emotional controversy for the general public.

I only have time for a quick entry today, so here is an easy one – another example of pareidolia unrestrained by reality testing. Pareidolia is the tendency for our brains to match known patterns to random sensory noise, most commonly applied to images. The most familiar image to the human brain is the human face, and so perhaps the most common experience of pareidolia is seeing a face in the clouds, in a rust stain, tree bark, a tortilla shell, a hillside, or in NASA photos of other worlds.

The Face on Mars is a famous example. Low resolution images of the Cydonia region of Mars showed an apparent face, although the image was lit from the side so half the “face” was missing, and the nostril (which added to the overall illusion) was just data loss from the image. Later higher resolution images showed the face for what it was, just another natural formation.

Imagine coming home to your spouse and finding someone who looks and acts exactly like your spouse, but you have the strong feeling that they are an imposter. They don’t “feel” like your spouse. Something is clearly wrong. In this situation, most people conclude that their spouse is, in fact, an imposter. In some cases this has even led to the murder of the “imposter” spouse.

This is a neurological syndrome known as Capgras delusion – a sense of hypofamiliarity, that someone well known to you is unfamiliar. There is also the opposite of this – hyperfamiliarity, the sense that a stranger is familiar to you, known as Fregoli delusion. Sufferers often feel that they are being stalked by someone known to them but in disguise.

Psychologists and neuroscientists are trying to establish the wiring or “neuroanatomical correlates” that underlie such phenomena. What are the circuits in our brains that result in these thought processes? A recent article by psychologist Philip Gerrans explores these issues in detail, but with appropriate caution. We are dealing with complex concepts and some fuzzy definitions. But there are some clear mental phenomena that reveal, at least to an extent, how our minds work.

The “reality testing” model discussed by Gerrans reflects the overall hierarchical organization of the brain. There are circuits that subconsciously create beliefs, impressions, or hypotheses. We also have “reality testing” circuits, specifically the right dorsolateral prefrontal circuitry, that examine these beliefs to see if they are internally consistent and also consistent with our existing model of reality. Delusions, such as Capgras and Fregoli, result from a “metacognitive failure” of these reality testing circuits.

The issue of genetically modified food is an excellent one for skeptics – it is a complex question that mostly revolves around scientific data, popular beliefs are rife with myths, misconceptions, and ideology, there are active and well-funded campaigns of misinformation regarding GM, and it is a hugely important topic for society. The topic is too big to cover in one blog post, which is why I have been writing about it sporadically to cover different angles of this issue (see here, here, here, and here). I also was recently interviewed for Mother Jones, with the result in both article and podcast form. The comments after the article are especially revealing.

Much of the discussion around GMO involves the two most common GM traits, pesticide (Bt) production and herbicide resistance. While these traits can be very useful when used intelligently, the potential for GM technology is perhaps much greater in other realms, including disease resistance. Late blight alone, a disease that affects potatoes and tomatoes, is estimated to cost 3-5 billion dollars per year in the US, Europe, and developing countries, through the cost of fungicide use and crop loss.

A three year trial of a new GM potato variety has just concluded, demonstrating impressive resistance to late blight. The researchers took a gene (Rpi-vnt1.1) isolated from a wild relative of potato, Solanum venturii, and placed it into the potato variety known as Desiree. The results:

A new Rice University survey of 10,000 people explores issues of science and religion. Surveys are always fascinating, giving us a “lay of the land” of what people around us believe. However, they are also very tricky. Results can vary wildly based upon how a question is asked, and what questions surround them. This study was presented at the AAAS meeting, and is not published, so I don’t have access to the actual questions.

With those caveats in mind, here are the main results:

50 percent of evangelicals believe that science and religion can work together, compared to 38 percent of Americans.
18 percent of scientists attended weekly religious services, compared with 20 percent of the general U.S. population.
15 percent of scientists consider themselves very religious (versus 19 percent of the general U.S. population).
13.5 percent of scientists read religious texts weekly (compared with 17 percent of the U.S. population).
19 percent of scientists pray several times a day (versus 26 percent of the U.S. population).
Nearly 60 percent of evangelical Protestants and 38 percent of all surveyed believe “scientists should be open to considering miracles in their theories or explanations.”
27 percent of Americans feel that science and religion are in conflict. Of those who feel science and religion are in conflict, 52 percent sided with religion.
48 percent of evangelicals believe that science and religion can work in collaboration.
22 percent of scientists think most religious people are hostile to science.
Nearly 20 percent of the general population think religious people are hostile to science.
Nearly 22 percent of the general population think scientists are hostile to religion.
Nearly 36 percent of scientists have no doubt about God’s existence.

Azodicarbonamide is the same chemical used to make yoga mats, shoe soles, and other rubbery objects. It’s not supposed to be food or even eaten for that matter. And it’s definitely not “fresh”.

This, of course, is utter nonsense – that is, the notion that because a chemical has multiple uses, including in non-food items, it is not “supposed” to be eaten. Azodicarbonamide (ADA) is used as a blowing agent in the formation of certain rubbers and sealants. It is used, for example, in sealing the tops of baby food containers, but also in the production of certain plastics and rubbers. It is also used as a bleaching agent for bread, giving it a softer and fluffier quality. None of this says anything about its safety at the levels used.

I love learning new terms that precisely capture important concepts. A recent article in Nature magazine by Regina Nuzzo reviews all the current woes with statistical analysis in scientific papers. I have covered most of the topics here over the years, but the Nature article is an excellent review. It also taught me a new term – P-hacking, which is essentially working the data until you reach the goal of a P-value of 0.05.

The Problem

In a word, the big problem with the way statistical analysis is often done today is the dreaded P-value. The P-value is just one way to look at scientific data. It first assumes a specific null-hypothesis (such as, there is no correlation between these two variables) and then asks, what is the probability that the data would be at least as extreme as it is if the null-hypothesis were true? A P-value of 0.05 (a typical threshold for being considered “significant”) indicates a 5% probability that the data is due to chance, rather than a real effect.
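The definition above can be made concrete with a toy example (the numbers are hypothetical): suppose we observe 60 heads in 100 flips and the null hypothesis is that the coin is fair. The P-value is the probability, under that null, of a result at least as extreme as the one observed:

```python
# Exact two-sided binomial p-value under a fair-coin null: the
# probability of a head count at least as far from 50/50 as observed.

from math import comb

def two_sided_binomial_p(heads: int, flips: int) -> float:
    """P(result at least as extreme as `heads`) if the coin is fair."""
    expected = flips / 2
    deviation = abs(heads - expected)
    return sum(
        comb(flips, k) * 0.5 ** flips
        for k in range(flips + 1)
        if abs(k - expected) >= deviation
    )

print(f"p = {two_sided_binomial_p(60, 100):.4f}")
```

For 60 heads in 100 flips this comes out to about 0.057, just above the conventional 0.05 cutoff. Note what the number is: a probability about the data given the null, not a probability that the result is due to chance.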

Except – that’s not actually true. That is how most people interpret the P-value, but that is not what it says. The reason is that P-values do not consider many other important variables, like prior probability, effect size, confidence intervals, and alternative hypotheses. For example, if we ask – what is the probability of a new fresh set of data replicating the results of a study with a P-value of 0.05, we get a very different answer. Nuzzo reports: