Tuesday, 1 June 2010

Orwellian prize for journalistic misrepresentation

I am offering an annual prize for the article in an English-language national newspaper that most inaccurately reports a piece of academic work.

The prize will consist of a certificate and a statuette; I would welcome suggestions for the design of both.

The prize will be awarded in January of each year.

Rules

1. The article must purport to report results of academic research, and judgement will be based on a points scoring system, as follows:

* Factual error in the title: 3 points
* Factual error in a subtitle: 2 points
* Factual error in the body of the article: 1 point

2. Factual errors must be ones that can be judged against publicly available documents – i.e. not just opinions or reports of interviews.

3. Nominations must be posted on this blog. The nomination should contain:

* Web addresses for both the nominated article and the academic source that is misrepresented.
* The name and email contact of the nominator; anonymous nominations are not allowed.
* A scored copy of the article, as illustrated below.

If a nominated article is not available electronically, then the nominator should provide a list of the points used to score the article, and retain a photocopy of the article, which should be provided to the judges on request.

4. If there is more than one plausible candidate for the prize, then additional criteria will be used, such as:

* The seriousness of the error, e.g. could it damage vulnerable groups?
* Relevant undisclosed vested interests of the journalist or their newspaper
* The ratio of accurate to inaccurate content
* The presence of irrelevant but misleading content
* The size of the readership

and mitigating circumstances, such as:

* Whether there was a misleading press release from the academic's institution
* Whether a scientist colluded in 'talking up' the findings and going beyond data

Nominators are encouraged to comment on these points also, but final judgement will be made by a panel of judges.
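For anyone tallying up a candidate article, the points system in Rule 1 reduces to a simple weighted sum. A minimal sketch in Python (the function name and interface are my own, purely for illustration; only the 3/2/1 weights come from the rules above):

```python
# Hypothetical scoring helper -- only the 3/2/1 weights come from Rule 1;
# the function itself is an illustration, not part of the official rules.
def score_article(title_errors, subtitle_errors, body_errors):
    """Return the nomination score: 3 points per factual error in the
    title, 2 per error in a subtitle, 1 per error in the article body."""
    return 3 * title_errors + 2 * subtitle_errors + 1 * body_errors

# e.g. one error in the headline plus three in the body scores 6 points
print(score_article(1, 0, 3))
```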

Illustrative example of nomination:

The following article is a strong contender for the prize in 2010, and illustrates the scoring system.

Boys aged eight to 11 who were given doses [once or twice a day](1) of docosahexaenoic acid, an essential fatty acid known as DHA, showed [big improvements in their performance during tasks involving attention](1).

Dr Robert McNamara, of the University of Cincinnati, who led the team of American researchers, said their findings could help pupils to study more effectively and potentially help to tackle both attention deficit hyperactivity disorder (ADHD) and depression. The study, reported in the American Journal of Clinical Nutrition, is important because a lack of DHA has been implicated in ADHD and other similar conditions, with poor maternal diet sometimes blamed for the child's deficiency.

ADHD affects an estimated 4%-8% of Britons and can seriously impair a child's education because they have trouble concentrating and are often disruptive in class. A lack of DHA has also been associated with bipolar disorder and schizophrenia.

"We found that, if you take DHA, you can enhance the function of those brain regions that are involved in paying attention, so it helps people concentrate," said McNamara. "The benefit is that it may represent an intervention that will help children or adults with attention impairments."

The researchers gave 33 US schoolboys 400mg or 1,200mg doses of DHA or a placebo every day for eight weeks. [Those who had received the high doses did much better in mental tasks involving mathematical challenges](1). Brain scans showed that functional activity in their frontal cortex – which controls memory, attention and the ability to plan – increased significantly.

The results, and fact that many people eat too little fish to get enough DHA through their diet, meant it could help all children to improve their learning, added McNamara. "The primary benefit is to treat ADHD and depression, but it could also help people with their memory, learning and attention," he said.

New nomination by Jon Simons, who has pointed out to me that there is a word limit on Comments, which makes it difficult to post nominations there. If you have a nomination, please save as a text file and send to me (email above) and I can post it here.

Scientists have developed a scan that can [measure the maturity of the brain](1), an advance that someday might be useful for [testing whether children are maturing normally](1) and for [gauging whether teenagers are grown-up enough to be treated as adults](1).

A federally funded study that involved [scanning more than 12,000 connections](1) in the brains of 238 volunteers ages 7 to 30 found that the technique appeared to accurately differentiate between the brains of adults and children and determine roughly where individuals scored in the normal trajectory of brain development.

While much more work is needed to validate and refine the test, the technique could have a host of uses, including [providing another way to make sure children's brains are developing properly](1), [in the same way doctors routinely measure other developmental milestones](1). [The scan could, for example, identify children who might be at risk for autism, schizophrenia and other problems because their brains are not maturing normally](1).

"If you are worried about a kid's development, in five minutes you could do a scan and it would spit out a measurement of their brain maturity level," said Nico Dosenbach, a pediatric neurology resident at St. Louis Children's Hospital who helped develop the technique described in Friday's issue of the journal Science. "That's sort of the future."

The technique developed by Dosenbach and his colleagues uses magnetic resonance imaging, already commonly used to measure activity in the brain by correlating increases and decreases in [blood flow](1) to various brain regions. The scans are considered safe because they do not use radiation.

In this case, the technique was called functional connectivity magnetic resonance imaging, or fcMRI, because it [measured connections](1) in the resting brains of the subjects. The researchers used a computer program to analyze how [connections in the brain changed as the mind matured](1), pinpointing 200 to produce an index of maturity. They found that [close connection weakened while distant connections strengthened as the brain matures](1), until about age 21 or 22.

Dosenbach estimated they were able to distinguish between the brain of children ages 7 to 11 and that of adults ages 25 to 30 with 90 percent accuracy. They were able to differentiate between adolescents and adults with 75 percent accuracy, Dosenbach said in an e-mail.

But Dosenbach warned that it would be premature to start using the technique to measure individual maturity levels.

25 comments:

Great idea - it would be wonderful training for PhD students and post-docs. Perhaps a prize for the nominator of the winning article? Our department offers graduate students a course on science communication - I will set them this challenge.

As a journalist and editor turned psychology student, I just want to say: this is a brilliant idea, and hopefully the existence of the competition will give one or two fellow hacks cause to stop and think before they write.

(The only thing I'd caution against is necessarily placing too much blame on the bylined journalist. Reporters very rarely write their own headlines, remember, and sometimes the stories that end up in print bear scant resemblance to what was originally filed. This is in no way a criticism of the competition or the way it's described. It's just a point I feel is worth flagging up, speaking as someone who has almost certainly -- shamefully -- misrepresented reporters' stories in the past.)

Thanks to all supporters. I will take on board Simon's comment re headlines. Maybe the award should go to the newspaper, not the journalist. In writing science papers, we are always told that the most important thing is the title (read by 100% of those who see your work), then the abstract (read by maybe 5%) and then the article (read by maybe 1%) - so getting the title and abstract right is really important, as they determine who reads on. The idea of letting someone else write them seems crazy, but I guess the rationale is that headline-writers know how to capture attention. Would a journalist complain if the headline-writer makes factual errors in the headline? Or don't they care so long as it means someone reads the article? They should worry, as the world at large will assume they are responsible for the lot.

I am not a scientist, but an arts graduate. I first became aware of the major problem with nonsense when subjected to the drivel of post-modern philosophers. We would be given these incomprehensible texts, full of impenetrable pseudo-scientific waffle, with no explanation as to what we were supposed to get from them.

Obviously there was no real meaning to them, and I am sure the lecturers at my university had no understanding of what they were supposed to say. There just seemed to be an agreement that whatever they meant, it must be important and true - because it's dense and looks "sciency".

I was angry about the prevalence of nonsense being "taught" in arts/humanities at my university, and regretful that I did not have a functioning science literacy with which to properly express my annoyance.

In my art essays I began knowingly referencing the text used in the Sokal hoax, then abandoned any reference to post-modern philosophy and art theory in favour of the popular science that was by that point sparking my interest.

I discovered the writings of Dawkins, Dennett, Feynman and Hawking - and learned more about truth and beauty than I ever could from any post-modern writing.

Then I learned about the massive disparity between the popular reporting of science and reality. And although I am still a long way from being scientifically literate, I am going some way towards understanding the meaning and value of evidence, and towards being able to distinguish the valuable from the worthless.

I will keep a keen eye on the Orwellian prize - it's a great idea - although I fear I will be unable to make any nominations as most reports of academic research are not easy for me to digest! I will however begin work on some designs for the certificate and statuette.

Dave, that is spectacularly bad - why put such a horrific non-linear scale on the diagram? It doesn't actually help to make the point any better. I've mocked up a numerate version of the graph; it's not pretty, but I think you can see that the trends are just as visible - so why did they choose the innumerate version?

sorry, accidentally double-posted that due to my computer caching things in a bizarre manner - feel free to delete the message telling everyone that I deleted that message (and then this one as well)..., or alternatively to leave the record of my incompetence.

"Would a journalist complain if the headline-writer makes factual errors in the headline? Or don't they care so long as it means someone reads the article?"

It's not so much that they wouldn't care, it's more that they wouldn't notice - in my experience a reporter is unlikely to be shown a page proof of their article before printing, unless the editor or news editor wants to ask them something about it. And by the time it comes out they will probably have moved on to other things - they are writing so many stories per day. Also, a reporter may well be younger than/junior to the production editor and may feel, unless someone complains to them about the headline, that the production editor knows best. Not in all cases of course.

Great idea, but there is a snag that you don't mention. When exaggerated claims are made, they often come straight from the press release and (in cases I have looked at) that has been checked with the authors, who must therefore take part of the blame. That may well be the case in http://www.guardian.co.uk/science/blog/2010/aug/12/autism-brain-scan-statistics