Congressional Committee tweets don’t usually get much attention. But when the House Committee on Science, Space, and Technology sent out a link to a Breitbart story claiming a “plunge” in global temperatures, people took notice. The takedowns flew in, from Slate and Bernie Sanders, from plenty of scientists, and most notably from the Weather Channel, which deemed Breitbart’s use of its meteorologist’s face worthy of a point-by-point debunking video.

There is nothing particularly noteworthy about Breitbart screwing up climate science, but the House Science Committee is among the most important scientific oversight bodies in the country. Since Texas Republican Lamar Smith took over its leadership in 2012, the Committee has spiraled down an increasingly anti-science rabbit hole: absurd hearings aimed at debunking consensus on global warming, outright witch hunts using the Committee’s subpoena power to intimidate scientists, and a Republican membership that includes some of the most anti-science lawmakers in the land.

The GOP’s shenanigans get the headlines, but what about the other side of the aisle? What is it like to be a member of Congress and sit on a science committee that doesn’t seem to understand science? What is it like to be an adult in a room full of toddlers? I asked some of the adults.

“I think it’s completely embarrassing,” said Mark Veasey, who represents Texas’s 33rd district, including parts of Dallas and Fort Worth. “You’re talking about something that 99.9 percent—if not 100 percent—of people in the legitimate science community says is a threat….To quote Breitbart over some of the most brilliant people in the world—and those are American scientists—and how they see climate change, I just think it’s a total embarrassment.”

Paul Tonko, who represents a chunk of upstate New York that includes Albany, has also called it embarrassing. “It is frustrating when you have the majority party of a committee pushing junk science and disproven myths to serve a political agenda,” he said. “It’s not just beneath the dignity of the Science Committee or Congress as a whole, it’s inherently dangerous. Science and research seek the truth—they don’t always fit so neatly with agendas.”

“I think it’s completely embarrassing.”

Suzanne Bonamici, of Oregon’s 1st District, also called it frustrating “to say the least” that the Committee “is spending time questioning climate researchers and ignoring the broad scientific consensus.” California Rep. Eric Swalwell called it the “Science” Committee in an email, and made sure I noted the air quotes. He said that in Obama’s first term, the Committee helped push forward on climate change and a green economy. “For the last four years, however, being on the Committee has meant defending the progress we’ve made.”

Frustration, embarrassment, a sense of Sisyphean hopelessness—this sounds like a grim gig. And Veasey also said that he doesn’t have much hope for a change in the Science Committee’s direction, because that change would have to come from the chairman. Smith has received hundreds of thousands of dollars in campaign support from the oil and gas industry over the years, and somehow finds himself in even greater climate change denial than ExxonMobil.

And of course, it isn’t just the leadership. The League of Conservation Voters maintains a scorecard for every legislator in Congress: for 2015, the most recent year available, the average score of the Democratic members on the Science Committee is 92.75 percent (with 100 being a perfect environment-friendly score). On the GOP side of the aisle, the average is just over three percent.

(I reached out to a smattering of GOP members of the Committee to get their take on its recent direction. None of them responded.)

Bill Foster, who represents a district including some suburbs of Chicago, is the only science PhD in all of Congress (“I very often feel lonely,” he said, before encouraging other scientists to run for office). “Since I made the transition from science into politics not so long ago, I’ve become very cognizant of the difference between scientific facts, and political facts,” he said. “Political facts can be established by repeating over and over something that is demonstrably false, then if it comes to be accepted by enough people it becomes a political fact.” Witness the 52 percent of Republicans who currently believe Trump won the popular vote, and you get the idea.

I’m not sure “climate change isn’t happening” has reached that “political fact” level, though Smith and his ilk have done their damnedest. Recent polls suggest most Americans do understand the issue, and increasingly they believe the government should act aggressively to tackle it.

“Political facts can be established by repeating over and over something that is demonstrably false, then if it comes to be accepted by enough people it becomes a political fact.”

That those in charge of our government disagree so publicly and strongly now has scientists terrified. “This has a high profile,” Foster said, “because if there is any committee in Congress that should operate on the basis of scientific truth, it ought to be the Science, Space, and Technology committee—so when it goes off the rails, then people notice.”

The odds of the train jumping back on the rails over the next four years appear slim. Policies that came from the Obama White House, like the Clean Power Plan, are obviously on thin ice with a Trump administration, and without any sort of check on Smith and company it is hard to say just how pro-fossil fuel, anti-climate the committee could really get.

In the face of all that, what is a sane member of Congress to do? Elizabeth Esty, who represents Connecticut’s 5th district, was among several Committee members to note that in spite of the disagreements on climate, she has managed to work with GOP leadership on other scientific issues. Rep. Swalwell said he will try and focus on bits of common ground, like the jobs that come with an expanding green economy. Rep. Veasey said his best hope is that some strong conservative voices from outside of Congress might start to make themselves heard by the Party’s upper echelons on climate and related issues.

An ugly and dire scenario, then, but the Democrats all seem to carry at least a glimmer of hope. “It’s certainly frustrating and concerning but I’m an optimist,” Esty said. “I wouldn’t run for this job if I weren’t.”

Dave Levitan is a science journalist, and author of the book Not A Scientist: How politicians mistake, misrepresent, and utterly mangle science. Find him on Twitter and at his website.

When people’s political beliefs are challenged, their brains become active in areas that govern personal identity and emotional responses to threats, neuroscientists have found.

The amygdala — the two almond-shaped areas hugging the center of the brain near the front — tends to become active when people dig in their heels about a political belief. (Image courtesy of the Brain and Creativity Institute at USC)

A USC-led study confirms what seems increasingly true in American politics: People become more hard-headed in their political beliefs when provided with contradictory evidence.

Neuroscientists at the Brain and Creativity Institute at USC said the findings from the functional MRI study seem especially relevant to how people responded to political news stories, fake or credible, throughout the election.

“Political beliefs are like religious beliefs in the respect that both are part of who you are and important for the social circle to which you belong,” said lead author Jonas Kaplan, an assistant research professor of psychology at the Brain and Creativity Institute at USC Dornsife College of Letters, Arts and Sciences. “To consider an alternative view, you would have to consider an alternative version of yourself.”

To determine which brain networks respond when someone holds firmly to a belief, the neuroscientists with the Brain and Creativity Institute at USC compared whether and how much people change their minds on nonpolitical and political issues when provided counter-evidence.

They discovered that people were more flexible when asked to consider the strength of their belief in nonpolitical statements — for example, “Albert Einstein was the greatest physicist of the 20th century.”

But when it came to reconsidering their political beliefs, such as whether the United States should reduce funding for the military, they would not budge.

“I was surprised that people would doubt that Einstein was a great physicist, but this study showed that there are certain realms where we retain flexibility in our beliefs,” Kaplan said.

The study was published Dec. 23 in Scientific Reports, a Nature journal. Study co-authors were Sarah Gimbel of the Brain and Creativity Institute and Sam Harris, a neuroscientist for the Los Angeles-based nonprofit Project Reason.

Brain response to belief challenges

For the study, the neuroscientists recruited 40 people who were self-declared liberals. The scientists then examined through functional MRI how their brains responded when their beliefs were challenged.

During their brain imaging sessions, participants were presented with eight political statements that they had said they believe just as strongly as a set of eight nonpolitical statements. They were then shown five counter-claims that challenged each statement.

Participants rated the strength of their belief in the original statement on a scale of 1-7 after reading each counter-claim. The scientists then studied their brain scans to determine which areas became most engaged during these challenges.

Participants did not change their beliefs much, if at all, when provided with evidence that countered political statements such as, “The laws regulating gun ownership in the United States should be made more restrictive.”

But the scientists noticed that the strength of participants’ beliefs weakened by one or two points when they were challenged on nonpolitical topics, such as whether “Thomas Edison invented the light bulb.” The participants were shown counter-claims that prompted some feelings of doubt, such as “Nearly 70 years before Edison, Humphry Davy demonstrated an electric lamp to the Royal Society.”

The study found that people who were most resistant to changing their beliefs had more activity in the amygdalae (a pair of almond-shaped areas near the center of the brain) and the insular cortex, compared with people who were more willing to change their minds.

“The activity in these areas, which are important for emotion and decision-making, may relate to how we feel when we encounter evidence against our beliefs,” said Kaplan, a co-director of the Dornsife Cognitive Neuroimaging Center at USC.

“The amygdala in particular is known to be especially involved in perceiving threat and anxiety,” Kaplan added. “The insular cortex processes feelings from the body, and it is important for detecting the emotional salience of stimuli. That is consistent with the idea that when we feel threatened, anxious or emotional, then we are less likely to change our minds.”

Thoughts that count

He also noted that a system in the brain, the Default Mode Network, surged in activity when participants’ political beliefs were challenged.

“These areas of the brain have been linked to thinking about who we are, and with the kind of rumination or deep thinking that takes us away from the here and now,” Kaplan said.

The researchers said that this latest study, along with one conducted earlier this year, indicates that the Default Mode Network is important for high-level thinking about important personal beliefs or values.

“Understanding when and why people are likely to change their minds is an urgent objective,” said Gimbel, a research scientist at the Brain and Creativity Institute. “Knowing how and which statements may persuade people to change their political beliefs could be key for society’s progress,” she said.

The findings can apply to circumstances outside of politics, including how people respond to fake news stories.

“We should acknowledge that emotion plays a role in cognition and in how we decide what is true and what is not true,” Kaplan said. “We should not expect to be dispassionate computers. We are biological organisms.”

Arguing in a Boston courtroom in 1770, John Adams famously pronounced, “Facts are stubborn things,” which cannot be altered by “our wishes, our inclinations or the dictates of our passion.”

But facts, however stubborn, must pass through the trials of human perception before being acknowledged — or “canonized” — as facts. Given this, some may be forgiven for looking at passionate debates over the color of a dress and wondering if facts are up to the challenge.

Carl Bergstrom believes facts stand a fighting chance, especially if science has their back. A professor of biology at the University of Washington, he has used mathematical modeling to investigate the practice of science, and how science could be shaped by the biases and incentives inherent to human institutions.

“Science is a process of revealing facts through experimentation,” said Bergstrom. “But science is also a human endeavor, built on human institutions. Scientists seek status and respond to incentives just like anyone else does. So it is worth asking — with precise, answerable questions — if, when and how these incentives affect the practice of science.”

In an article published Dec. 20 in the journal eLife, Bergstrom and co-authors present a mathematical model that explores whether “publication bias” — the tendency of journals to publish mostly positive experimental results — influences how scientists canonize facts. Their results offer a warning that sharing positive results comes with the risk that a false claim could be canonized as fact. But their findings also offer hope by suggesting that simple changes to publication practices can minimize the risk of false canonization.

These issues have become particularly relevant over the past decade, as prominent articles have questioned the reproducibility of scientific experiments — a hallmark of validity for discoveries made using the scientific method. But neither Bergstrom nor most of the scientists engaged in these debates are questioning the validity of heavily studied and thoroughly demonstrated scientific truths, such as evolution, anthropogenic climate change or the general safety of vaccination.

“We’re modeling the chances of ‘false canonization’ of facts on lower levels of the scientific method,” said Bergstrom. “Evolution happens, and explains the diversity of life. Climate change is real. But we wanted to model if publication bias increases the risk of false canonization at the lowest levels of fact acquisition.”

Bergstrom cites a historical example of false canonization in science that lies close to our hearts — or specifically, below them. Biologists once postulated that bacteria caused stomach ulcers. But in the 1950s, gastroenterologist E.D. Palmer reported evidence that bacteria could not survive in the human gut.

“These findings, supported by the efficacy of antacids, supported the alternative ‘chemical theory of ulcer development,’ which was subsequently canonized,” said Bergstrom. “The problem was that Palmer was using experimental protocols that would not have detected Helicobacter pylori, the bacteria that we know today causes ulcers. It took about a half century to correct this falsehood.”

While the idea of false canonization itself may cause dyspepsia, Bergstrom and his team — lead author Silas Nissen of the Niels Bohr Institute in Denmark and co-authors Kevin Gross of North Carolina State University and UW undergraduate student Tali Magidson — set out to model the risks of false canonization given the fact that scientists have incentives to publish only their best, positive results. The so-called “negative results,” which show no clear, definitive conclusions or simply do not affirm a hypothesis, are much less likely to be published in peer-reviewed journals.

“The net effect of publication bias is that negative results are less likely to be seen, read and processed by scientific peers,” said Bergstrom. “Is this misleading the canonization process?”

For their model, Bergstrom’s team incorporated variables such as the rates of error in experiments, how much evidence is needed to canonize a claim as fact and the frequency with which negative results are published. Their mathematical model showed that the lower the publication rate is for negative results, the higher the risk for false canonization. And according to their model, one possible solution — raising the bar for canonization — didn’t help alleviate this risk.

“It turns out that requiring more evidence before canonizing a claim as fact did not help,” said Bergstrom. “Instead, our model showed that you need to publish more negative results — at least more than we probably are now.”
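The dynamics the team describes can be illustrated with a toy simulation. This is a hedged sketch of the general idea, not the authors’ actual model: the function names, error rates, and evidence threshold below are illustrative assumptions. A claim gains +1 for each published positive result and −1 for each published negative one, and is canonized once the published evidence crosses a threshold.

```python
import random

def run_claim(claim_true, p_publish_negative, false_pos=0.2, false_neg=0.2,
              threshold=5, max_experiments=2000):
    """Track published evidence for one claim: +1 per published positive
    result, -1 per published negative result. The claim is canonized at
    +threshold and rejected at -threshold."""
    score = 0
    for _ in range(max_experiments):
        # A true claim yields a positive result unless a false negative occurs;
        # a false claim yields a positive result only via a false positive.
        positive = random.random() < ((1 - false_neg) if claim_true else false_pos)
        if positive:
            score += 1                        # positives are always published
        elif random.random() < p_publish_negative:
            score -= 1                        # negatives often go unpublished
        if score >= threshold:
            return "canonized"
        if score <= -threshold:
            return "rejected"
    return "undecided"

def false_canonization_rate(p_publish_negative, trials=500):
    # Fraction of *false* claims that end up canonized as fact.
    wins = sum(run_claim(False, p_publish_negative) == "canonized"
               for _ in range(trials))
    return wins / trials
```

Comparing `false_canonization_rate(0.1)` against `false_canonization_rate(1.0)` reproduces the qualitative finding: the less often negative results are published, the more often a false claim drifts upward into canonization, and raising the threshold alone does not stop a walk with positive drift from eventually crossing it.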

Since most negative results live out their obscurity in the pages of laboratory notebooks, it is difficult to quantify what fraction of them are published. But clinical trials, which must be registered with the U.S. Food and Drug Administration before they begin, offer a window into how often negative results make it into the peer-reviewed literature. A 2008 analysis of 74 clinical trials for antidepressant drugs showed that scarcely more than 10 percent of negative results were published, compared with over 90 percent of positive results.

“Negative results are probably published at different rates in other fields of science,” said Bergstrom. “And new options today, such as self-publishing papers online and the rise of journals that accept some negative results, may affect this. But in general, we need to share negative results more than we are doing today.”

Their model also indicated that negative results had the biggest impact as a claim approached the point of canonization. That finding may offer scientists an easy way to prevent false canonization.

“By more closely scrutinizing claims as they achieve broader acceptance, we could identify false claims and keep them from being canonized,” said Bergstrom.

To Bergstrom, the model raises valid questions about how scientists choose to publish and share their findings — both positive and negative. He hopes that their findings pave the way for more detailed exploration of bias in scientific institutions, including the effects of funding sources and the different effects of incentives on different fields of science. But he believes a cultural shift is needed to avoid the risks of publication bias.

“As a community, we tend to say, ‘Damn it, this didn’t work, and I’m not going to write it up,'” said Bergstrom. “But I’d like scientists to reconsider that tendency, because science is only efficient if we publish a reasonable fraction of our negative findings.”

Researchers analysing several centuries of literature have spotted a strange trend in our language patterns: the words we use tend to fall in and out of favour in a cycle that lasts around 14 years.

Scientists ran computer scripts to track patterns stretching back to the year 1700 through the Google Ngram Viewer database, which monitors language use across more than 4.5 million digitised books. In doing so, they identified a strange oscillation across 5,630 common nouns.

The team says the discovery not only shows how writers and the population at large use words to express themselves – it also affects the topics we choose to discuss.

“It’s very difficult to imagine a random phenomenon that will give you this pattern,” Marcelo Montemurro from the University of Manchester in the UK told Sophia Chen at New Scientist.

“Assuming these patterns reflect some cultural dynamics, I hope this develops into better understanding of why we change the topics we discuss,” he added. “We might learn why writers get tired of the same thing and choose something new.”

The 14-year pattern of words coming into and out of widespread use was surprisingly consistent, although the researchers found that in recent years the cycles have begun to get longer by a year or two. The cycles are also more pronounced when it comes to certain words.
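A periodicity claim like this can be probed with a simple spectral analysis of a word’s yearly frequency series. The sketch below is a hypothetical illustration using synthetic data standing in for an Ngram time series, not the researchers’ actual method or corpus:

```python
import numpy as np

def dominant_period(series, dt=1.0):
    """Return the period (in units of dt) carrying the most spectral power,
    after removing the mean so the zero-frequency bin doesn't dominate."""
    x = np.asarray(series, dtype=float) - np.mean(series)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = 1 + np.argmax(power[1:])  # skip the DC (zero-frequency) bin
    return 1.0 / freqs[k]

# Synthetic yearly usage series with a 14-year oscillation plus noise,
# spanning the same 1700-onward window the study examined.
years = np.arange(1700, 2000)
rng = np.random.default_rng(42)
usage = (1.0 + 0.1 * np.sin(2 * np.pi * years / 14.0)
             + 0.01 * rng.standard_normal(years.size))
```

On this synthetic series, `dominant_period(usage)` recovers a period close to 14 years; on real Ngram counts one would first normalise each year’s count by the total words published that year.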

What’s interesting is how related words seem to rise and fall together in usage. For example, royalty-related words like “king”, “queen”, and “prince” appear to be on the crest of a usage wave, which means they could soon fall out of favour.

By contrast, a number of scientific terms, including “astronomer”, “mathematician”, and “eclipse” could soon be on the rebound, having dropped in usage recently.

According to the analysis, the same phenomenon happens with verbs as well, though not to the same extent as with nouns, and the academics found similar 14-year patterns in French, German, Italian, Russian, and Spanish, so this isn’t exclusive to English.

The study suggests that words get a certain momentum, causing more and more people to use them, before reaching a saturation point, where writers start looking for alternatives.

Montemurro and fellow researcher Damián Zanette from the National Council for Scientific and Technical Research in Argentina aren’t sure what’s causing this, although they’re willing to make some guesses.

“We expect that this behaviour is related to changes in the cultural environment that, in turn, stir the thematic focus of the writers represented in the Google database,” the researchers write in their paper.

“It’s fascinating to look for cultural factors that might affect this, but we also expect certain periodicities from random fluctuations,” biological scientist Mark Pagel, from the University of Reading in the UK, who wasn’t involved in the research, told New Scientist.

“Now and then, a word like ‘apple’ is going to be written more, and its popularity will go up,” he added. “But then it’ll fall back to a long-term average.”

It’s clear that language is constantly evolving over time, but a resource like the Google Ngram Viewer gives scientists unprecedented access to word use and language trends across the centuries, at least as far as the written word goes.