Mapping Media Responses to Research

My research is drawing to a close just in time for classes to start. My advisor, Professor Linneman, is helping me to formalize and flesh out a brief paper that I’ve written about my findings. In it, I describe the types of coverage that I found most commonly, which include:

- Redundancy (or, more kindly, mirroring the press release). I describe a problem that these types of articles create here.
- Only describing one finding
- Ignoring findings and criticizing the research for not considering those aspects
- Reporting science incorrectly:
  - Incorrect methods (numerical or incorrectly describing the process)
  - Incorrect findings
  - Attributing ideas not explored in the study as being one of the conclusions of the study
  - Incorrect authors
  - Misleading wording

With the exception of the first and second items on this list, I counted these as mistakes when I found them in an article. Of the 340 articles/videos I studied, 59 included errors. Environment and life science articles had the lowest error rates (10% and 10.6%, respectively), while social science articles had the highest (20%).
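For concreteness, the overall figure above works out as follows. This is a trivial sketch using only the totals quoted in this post; the per-category counts are detailed in the paper:

```python
# Overall totals from the post: 59 of 340 articles/videos contained errors.
articles_total = 340
articles_with_errors = 59

error_rate = articles_with_errors / articles_total
print(f"Overall error rate: {error_rate:.1%}")  # about 17.4%
```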

I mapped out the articles to show how the research traveled, marking articles that contain errors with an asterisk and color-coding them to show where they are in the dissemination process. Here are some examples:

Orange indicates an article one degree away from the original research, blue two degrees, pink three, and green four. My hope is that these maps make it clear where the errors arise. The majority of errors occur two degrees away from the original study, meaning that most errors arise after the research has passed through at least one other news source.
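The bookkeeping behind these maps can be sketched as a tiny graph in which each article points at the source it drew from; an article's "degree" is then its distance from the original study. The outlets below are purely illustrative (they echo the first map's HealthDay/WebMD/University Herald branch), not the study's full dataset:

```python
# Minimal sketch of the dissemination map (illustrative data, not the full
# dataset): each article records its immediate source, and its "degree" is
# its distance from the original study.

links = {  # article -> the source it drew from
    "HealthDay": "original study",
    "WebMD": "HealthDay",
    "University Herald": "HealthDay",
}
has_error = {"HealthDay": True, "WebMD": False, "University Herald": False}

# Color legend used in the maps above.
COLORS = {1: "orange", 2: "blue", 3: "pink", 4: "green"}

def degree(article: str) -> int:
    """Distance from the original study (1 = cited the study directly)."""
    source = links[article]
    return 1 if source == "original study" else 1 + degree(source)

for article in links:
    mark = "*" if has_error[article] else ""  # asterisk flags an error
    print(f"{article}{mark}: degree {degree(article)} ({COLORS[degree(article)]})")
```

With this structure, tallying errors by degree (the claim that most errors sit at two degrees) is just a matter of grouping the flagged articles by their computed degree.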

I detail my methods and findings further in my paper, but I hope this brief explanation has given you some idea of what I spent my summer doing. I look forward to seeing everyone’s projects at the research showcase!

Comments

I find this research fascinating, and quite honestly disheartening. As someone trying to enter the scientific field, I am interested in how to prevent this. From the work you did, did you happen to notice a trend in the types of papers that held more errors? For example, did an article having to do with health (like the HIV article) or other topics (vocal attractiveness) seem to have more errors? Also, what kind of errors did you see most frequently?

I really like your graphics so I thought I’d comment. I think your research is very interesting and I like this visual representation of your data. I’m not surprised so many errors occurred as the chain of contact progressed. I’m baffled, though, as to why social science research is more prone to errors when reported in the mainstream media. Perhaps the technical nature of environmental and life science articles makes errors less common when data is reported, while the wordier, more abstract conclusions of social science research allow results to be twisted in secondhand accounts. I obviously don’t know the answer, but I think it would be interesting to look into. Another thing I noticed is that on some nodes an error-free article will disseminate from another article with an error. For example, in the first map, Web MD and University Herald do not contain errors, but branch off of Health Day, which has an error. Could it be that the twice-removed articles referenced another source to clarify their information before presenting their account of the original research? Maybe they just didn’t include the erroneous portion of their source article in their report. Anyway, I think you’ve done some important research. Most people receive information from journals and mainstream news and not original sources of data. Seeing that this original data is often misreported in subsequent reports, I think there should be some concern. Unfortunately, it’s a difficult problem to solve, and I’ll leave it at that.

This is incredibly interesting, and I must commend your use of graphics to describe a complex idea. Perhaps this information is present elsewhere, but in viewing and taking in this post, it makes me wonder how exactly you determined the source of each new phase of research dissemination. Certainly the easy way is to look at the citations of a particular media outlet, but did you search any of the body text for redundant wording (aka plagiarism)? Certainly that would be a difficult task, but I’m curious!

I’m really impressed with this research idea; it’s a problem that has occurred to me in passing (often when glancing through a copy of Popular Science), but I’d never considered doing such a comprehensive study. As another hopeful contributor to scientific research, this makes me aware of another factor I’ll have to consider as I learn about the studies of others, and something to keep an eye on if I ever discover anything notable myself.