William Hollingworth and his colleagues must have been pleased when they were notified that their manuscript had been accepted for publication in the prestigious (journal impact factor = 18!) Journal of Clinical Oncology. Their study examined whether screening for distress increased cancer patients’ uptake of services and improved their mood. The study also examined a neglected topic: how much did screening cost, and was it cost-effective?

These authors presented their negative findings in a straightforward and transparent fashion: screening didn’t have a significant effect on patient mood. Patients were not particularly interested in specialized psychosocial services. And at a cost of $28 per patient screened, screening neither lowered healthcare costs nor proved cost-effective.

This finding has significant implications for clinical and public policy. But the manuscript risked rejection because it violated screening proponents’ strictly enforced confirmation bias and their obligatory conclusion that screening is cheap and effective.

Whomp!

Hollingworth and colleagues were surely disappointed to discover that their article was accompanied by a negative editorial commentary. They had not been alerted or given an opportunity to offer a rebuttal. Their manuscript had made it through peer review, only to get whomped by a major proponent of screening, Linda Carlson.

After some faint praise, Carlson tried to neutralize the negative finding:

despite several strengths, major study design limitations may explain this result, temper interpretations, and inform further clinical implementation of screening for distress programs.

And if anyone tries to access Hollingworth’s article through Google Scholar or the Journal of Clinical Oncology website, they run smack into a paywall. Yet they can get to Carlson’s commentary without obstruction and download a PDF for free. So it is easier to access the trashing of the article than the article itself. Doubly unfair!

Pigs must wear lipstick to win acceptance

Advocates from professional organizations insist, as conditions for publication, on conclusions supporting screening and on negative findings being dressed up to appear to support their views. Reflecting these pressures, I have described the sandbagging of a paper I had been invited to submit, with reviewers insisting that I not be so critical of the promotion of screening.

Try this experiment: ignore what is said in the abstracts of screening studies and instead check the results sections carefully. You will see that there are actually lots of negative studies out there, but they have been spun into positive studies. This can easily be accomplished by authors ignoring results obtained for primary outcomes at pre-specified follow-up periods. They can hedge their bets by assessing outcomes with a full battery of measures at multiple timepoints and then choosing the findings that make screening look best. Or they can just ignore their actual results when writing abstracts and discussion sections.

Especially in their abstracts, articles report only the strongest results at the particular time point that makes the study look best. They emphasize unplanned subgroup analyses. Thus, they report that breast cancer patients did particularly well at 6 months, and ignore that this was not true at 3- or 12-month follow-up. Clever authors interested in getting published ignore other groups of cancer patients who did not benefit, even when their actual hypothesis had been that all patients would show an improvement and breast cancer patients had not been singled out ahead of time. With lots of opportunities to lump, split, and selectively report the data, such results can be obtained by chance, not fraud, but won’t replicate.

When my colleagues and I undertook a systematic review of the screening literature, we were unable to identify a single study demonstrating that screening improved cancer patient outcomes compared to patients having access to the same discussions and services without having to be screened. But there are four other reviews out there, all done by proponents of screening, that gloss over this lack of evidence. The strong confirmatory bias extends to reviews.

Doing wrong by following recommended guidelines.

Hollingworth and colleagues implemented procedures that followed published guidelines for screening. They trained screening staff with audiovisual aids and role-playing. They developed guides to referral sources. They tracked numbers of discussions with distressed patients and referrals. Mirroring clinical realities, many other screening studies involve similar levels of training and resources. Unless cancer centers have special grants or gifts from donors, they probably cannot afford much more than this. And besides, advocates of screening have always emphasized that it is a no- or low-cost procedure to implement.

The invited commentary.

Carlson’s title suggests a revision of what implementing screening requires, one that may demand more than many settings can afford:

Screening Alone Is Not Enough: The Importance of Appropriate Triage, Referral, and Evidence-Based Treatment of Distress and Common Problems

The 16 references in Carlson’s invited commentary include eight citations of her own work and that of her close colleague Barry Bultz. She is fending off a negative finding and collecting self-citations too.

Like many such commentaries, Carlson’s creates false authority by selective and inaccurate citation. If you take the trouble to actually look at the work that is cited, you will find that much of it does not present original data and that citations are not accurate or relevant, although this is not obvious from the commentary.

Our various studies in the Netherlands find that the proportion of cancer patients seeking specialty mental health services after diagnosis is about the same as the proportion who were getting those services beforehand. We find it takes about 28 hours of screening to produce one referral to specialty psychotherapy. Not very efficient.

The big picture.

Invited commentaries represent one form of privileged-access publishing by which articles come to appear in prestigious, high impact journals with no or only minimal peer review. When they are listed on PubMed or other electronic bibliographic resources, there is typically no indication that commentaries evaded peer review, nor is there usually any indication in the article itself. One has to learn to be skeptical and to look for evidence, like gratuitous inaccurate citations.

Invited commentaries come about when reviewers indicate a wish to comment on an article that seems likely to be accepted. Most typically, there is a certain cronyism in lavishing praise on articles by colleagues doing similar work. Carlson’s commentary is less common in that it is intended to neutralize the impact of a manuscript that was apparently going to be accepted.

We need to better understand such distortions in the process by which “peer review” controls which papers get published and what they are required to say to get published. Articles published in high impact journals are not necessarily the best papers. They do not necessarily represent an adequate sampling of available data.

The Hollingworth study is only one example of a transparently negative study that made it through the editorial process at a high impact journal. But it is also an example of a study successfully defying confirmation bias and getting whomped. It remains to be seen whether this study will subsequently be selectively ignored in citations, like some other negative studies in psycho-oncology.

A black swan, a member of the species Cygnus atratus (from Wikipedia)

We don’t know how many such studies don’t get through, or how many, in order to get through, had to get a makeover with selective reporting, perhaps at the insistence of reviewers. It is thus impossible to quantify the distorting impact of confirmatory bias on the published literature. But sightings of black swans like this one clearly indicate that not all swans are white. We need to be skeptical about whether published studies represent all of the available evidence.

I recommend that skeptical readers look for other commentaries, particularly in the Journal of Clinical Oncology. I have documented that this high impact journal does not have the best and most accurately reported psychosocial studies of cancer patients. It is no coincidence that many of the flawed studies about which I’ve complained were accompanied by laudatory commentaries. Check and you will find that commentators have often published similarly flawed studies with a positive spin.

What’s a reader to do?

Readers can write letters to the editor, but the Journal of Clinical Oncology has a policy of allowing authors to veto publication of letters critical of their work. Letters to the editor are usually an impotent form of protest anyway. They are seldom read by anyone except the authors being criticized. And when authors do agree to be criticized, they get the last word, often simply ignoring what is said in a critical letter to the editor.

But fortunately, there is now the option of continued post-publication peer review through PubMed Commons. Once you register, you can go to PubMed and leave comments about both the Hollingworth study and the unfairness of the commentary by Carlson. And others can express approval of what you write or add their own comments. Look for mine already there, challenging the unfair editorial commentary and expressing concern about the unfair treatment of the paper by Hollingworth and colleagues. You can come and dispute or agree with what I say.

The journals no longer control the post-publication review process. Linda Carlson can get involved in the discussion at PubMed of Hollingworth’s article in JCO, but she cannot have the last word.

About James Coyne PhD

James C. Coyne, PhD is Professor of Health Psychology at University Medical Center, Groningen, the Netherlands where he teaches scientific writing and critical thinking. He is also Visiting Professor, Institute for Health, Health Care Policy & Aging Research, Rutgers, the State University of New Jersey.
Dr. Coyne is Emeritus Professor of Psychology in Psychiatry, where he was also Director of Behavioral Oncology at the Abramson Cancer Center and Senior Fellow at the Leonard Davis Institute of Health Economics. He has served as External Scientific Advisor to a decade of European Commission-funded community-based programs to improve care for depression in the community. He has written over 350 articles and chapters, including systematic reviews of screening for distress and depression in medical settings and classic articles about stress and coping, couples research, and interpersonal aspects of depression. He has been designated by ISI Web of Science as one of the most impactful psychologists and psychiatrists in the world. His books include Screening for Depression in Clinical Settings: An Evidence-Based Review, edited with Alex Mitchell (Oxford University Press; 2009). He also blogs and is a regular contributor to the blog Science Based Medicine and to the PLOS One Blog, Mind the Brain. He is known for giving lively, controversial lectures using scientific evidence to challenge assumptions about the optimal way of providing psychosocial care and care for depression to medical patients.
