‘Dr. Fraud’ gets a lot of job offers

“It’s our pleasure to add your name as our editor in chief for this journal with no responsibilities.”

That was just one of the wild responses a researcher received after applying to be an editor with several academic journals. Wild, because the researcher wasn’t a real person. Her name was Anna O. Szust, and she was invented by a group of academics from Poland — oszust meaning “a fraud” in Polish.

The academics were tired of being approached by so-called predatory journals, known for publishing nearly anything for a fee. So they designed a sting operation, detailed in this week’s Nature, to find out how many would accept a fictional researcher as an editor, even with a subpar CV.

“The fact that she had absolutely no editorial experience and hadn’t even written a research paper just made her essentially the worst candidate that you can imagine as an editor,” Kasia Pisanski, one of the academics behind the experiment, told us.

They sent the fake CV to 360 publications — a mix of legitimate titles and 120 suspected predatory journals.

The responses began coming back within hours. In the end, the fictional academic had been accepted as editor by 48 titles, 40 of them suspected predatory journals. Many wanted Szust to help them make money by recruiting others to pay for submissions, or even to simply pay them a fee directly.

The authors said they hoped the findings would prevent unwitting academics from falling victim to these journals, and lead to greater scrutiny of publications during consideration of promotions and tenure.

“The idea is to get people talking about it. It really is a call to action,” Pisanski told us.


Canada wins a major victory and (almost) nobody noticed

McGill law professor Richard Gold expected to see big headlines this week following news that Canada won a major trade victory against a U.S. pharmaceutical giant.

But curiously there was almost no media coverage, which is especially odd because the story was covered by major media outlets when the challenge was launched four years ago.

That’s when Eli Lilly accused Canada of violating the North American Free Trade Agreement (NAFTA) by not honouring drug patents. Eli Lilly launched the NAFTA challenge after Canadian judges threw out the patents on two Eli Lilly drugs — Strattera (atomoxetine) for ADHD and the antipsychotic Zyprexa (olanzapine). Those court decisions allowed generic companies to make copies of the drugs.

Gold, who had no official role in the case, said the NAFTA victory supports Canada’s right to create a made-in-Canada patent regime. “The result is the opposite of what Eli Lilly wanted, and the pharmaceutical industry wanted,” he told us, adding the decision suggests that countries have more freedom in developing patent law than perhaps they knew.

‘We won big’ against a U.S. pharmaceutical giant, says Richard Gold. (CBC)

“We won big,” Gold said. Eli Lilly has been ordered to reimburse Canada for $4.4 million, which covers the cost of administering the arbitration and 75 per cent of Canada’s legal costs.

A second look at bias in science

Stanford University researcher Daniele Fanelli started his career as an idealistic PhD student who believed in the scientific process. But as he pursued his studies in evolutionary biology, he said, the reality he witnessed was “frustrating and disappointing.” So he changed direction and began studying bias and scientific misconduct.

In a paper published this week, he had good news for science. He concluded that the problem isn’t as widespread as it seems.

There is bias, he said, but it appears intermittently with certain factors increasing the risk.

“Small study effects were by far the most widespread issue. They were more likely to exaggerate the effect or overestimate the importance of a given phenomenon.”

Fanelli also believes he has falsified the hypothesis that authors who publish often are at higher risk of exaggerating their findings.

“It turns out that researchers who publish a lot are actually less likely to overestimate their results so, in a sense, they are better scientists,” he told us.

Fanelli searched for the fingerprints of bias in the bundles of papers that are grouped together for meta-analysis in different scientific areas. He found stronger bias signals in social science compared to biological and physical science. He said his next project is to study why the risk seems higher in certain fields.

Ironically, Fanelli said, the impression there is widespread bias in science could also be the result of bias — publication bias. In other words, the papers that report evidence of bias are given a higher profile, creating the impression that the problem is worse than it is.

Warning: False health news?

This headline raised eyebrows: “New Alzheimer’s Test Can Predict Age When Disease Will Appear.” It ran this week in the Guardian and other news outlets.

Anyone going to a doctor to ask for that test will be disappointed. That’s because there is no test. The story was based on preliminary research looking at genetic associations with Alzheimer’s disease.

“That’s a loaded headline and people whose families are devastated by this illness, they do grab onto things,” said Dr. Preeti Malani, gerontology specialist and former journalist. “We’re so far away from a test that can be used clinically.”

Even if there were a test, there are ethical issues in screening for a disease with no treatment and no cure. “There’s a reason why doctors don’t just send off genetic tests. What do you do with that information?”

So what does she think about the headline? “It probably made people read it,” she said. And in the era of social media, that can be a problem. “It’s tweets and posts, and they haven’t read the news article much less the actual paper.”

“If the Guardian had simply changed ‘can’ to ‘may,’ the headline would have been a bit more suitable, in my opinion.”

The scientist responsible for the research told us that he believes the headline could prompt people to ask their doctors for the test. They’re even asking him.

“I have had a few individuals emailing me to find out how they can get this test for themselves,” Dr. Rahul Desikan told us. “As a practising physician, my main concern would be improper use and interpretation of our new test. Given the need for additional validation, our test right now is primarily intended for research and clinical trial use.”

Songs to save a life to

What do Sweet Home Alabama, Dancing Queen, and Stayin’ Alive all have in common? They all run at about 100 beats per minute, the recommended rate for chest compressions during CPR.

The New York-Presbyterian Hospital has put together a full playlist of “Songs to do CPR to.” Its 40 selections span musical eras and genres, so there’s something for everyone.