Okay, not really. But seriously, it’s a lot harder to feel like a rock star because someone has read and used your work if, as Malcolm Wright and J. Scott Armstrong suggest, they probably didn’t read it and if they did, they probably read it wrong.

That might be a little strong, but not by much. So what does it mean when a published, peer-reviewed article in a real-life journal kicks off its final, concluding paragraph with this sentence: "Authors should read the papers they cite."

!

This isn’t a library tutorial aimed at fifth-graders writing their first research paper, after all. This is a paper talking about what professional scholars, people responsible for the continued development of knowledge in disciplines, should do. It can’t mean anything good. Here’s the original article:

Nutshell – Dr. Armstrong wrote one of the more impact-heavy articles in his discipline, and the only article that analyzes and explains how to correct for non-response bias in mail surveys (that is, bias caused by people who do not respond to the survey at all). By analyzing (1) how often research based on mail surveys cites this article, and (2) how often later researchers seem to interpret and apply the original article correctly, the authors conclude that many, many researchers are not reading all of the relevant literature. More disturbingly, many, many researchers aren't even reading all of the articles they themselves cite.

Now, on one level this isn’t a shocker – anyone who has read moderately deeply in any body of literature has probably looked at at least one bloated literature review and thought, “this person probably didn’t really read all of these books and articles.” This article suggests that it’s more complex than lit-review padding: scholarly authors also mis-cite and misuse the sources they rely on to support their methods and conclusions.

Working on the assumption that if your research uses a mail survey, you should at least be considering the possibility of nonresponse bias, the authors found that:

…far less than one in a thousand mail surveys consider evidence-based findings related to nonresponse bias. This has occurred even though the paper was published in 1977 and has been available in full text on the Internet for many years.

Working on the further assumption that someone who makes a claim about nonresponse bias – and who reads, understands, and cites an article outlining a particular method for correcting that bias – will follow the method the article describes, the authors conclude that many authors are either not reading or not understanding the articles they cite:

The net result is that whereas evidence-based procedures for dealing with nonresponse bias have been available since 1977, they are properly applied only about once every 50 times that they are mentioned, and they are mentioned in only about one out of every 80 academic mail surveys.

Most of the research that seriously digs into how well researchers use the sources they cite has come out of the sciences, particularly the medical sciences. This is one of the first articles I’ve seen dealing with the social sciences, and I think it’s worth reading more closely because this very rough and brief summary doesn’t really do justice to the issues it raises. But right now I want to turn to the authors’ conclusions, because I think they get at some of the things we’ve been talking about around here: how new technologies and the read/write web might have an impact on scholarship.

The first two outline author responsibilities:

First – read the sources you cite. I think we can take that as a given – a bare-minimum practice, not a best practice.

Second, “authors should use the verification of citations procedure.” Here they’re calling for authors to contact all of the researchers whose work they want to cite, to make sure that they’re citing it correctly. I’m going to come back to this one.

The second two put some of the burden on the journals:

Journals should require authors to attest that they have in fact read the work they cite and that they have performed due diligence to make sure their citations are correct. That seems a sad, largely symbolic, but not unreasonable precaution.

Finally, journals should provide easily accessible webspaces where other people can post additional work and research relevant to what has been published in the journal. I’m going to come back to this one too, because I think it’s related to the one above.

Basically – both of these recommendations suggest that more communication and more transparency would be better for knowledge creation. And what is the read/write web about if not communication and transparency, networking and openness?

Some of the commenters on the IHE article expressed, shall we say, polite skepticism about the idea that an author should be obligated to contact every person they cite before citing them. These concerns were also raised in one of the formal comment pieces attached to the Interfaces article. And I have to say I share them, for a few reasons. Armstrong claims more than once that he does this as an author, with good results, and that the process is not too onerous. But that doesn’t really address the question of how onerous it would be for a prolific or influential author to field all of those requests.

And I’ll also admit to having some author-is-dead reactions to this. What if I contact Author A to say I’m planning to use your work in this way, and they say, “well, I didn’t intend it to be used in that way, so you shouldn’t”? Does that really mean I shouldn’t? Really? It’s hard to see this kind of thing not devolving quickly into something that actually hinders the development of new knowledge, because it hinders new researchers’ ability to push at, and find new connections in, work that has come before.

But let’s not throw everything out with this bathwater – the idea that more and better and faster communication between scholars (more and better and faster than journals and the citation-as-communication tradition can provide) makes for better scholarly conversations and better scholarship is something I think we need to hold on to. Armstrong points out how talking to the researcher who really knows the area described in the thing you’re citing can point you to other, less-cited but more useful resources – how they can expand your knowledge of the field you’re talking about:

We checked with Franke to verify that we cited his work correctly. He directed us to a broader literature, and noted that Franke (1980) provided a longer and more technically sophisticated criticism; this later paper has been cited in the ISI Citation Index just nine times as of August 2006.

This is an area where the transparency, speed, and networking aspects of the emerging web might have a real impact on the quality of scholarship, even if there are no material changes in the practice of producing journal articles. I might not be sure about making this communication a formal part of citation verification, but it should be a part of knowledge creation. And it’s tied as well to the final recommendation – that journals should provide webspaces for some, not all but some, of this communication to happen.

The types of conversations between similarly interested scholars that Armstrong describes are nothing new – the emerging web simply offers some opportunities for those conversations to move off the backchannel. Or maybe it’s still a backchannel, but a backchannel that is visible, which is the interesting part. Whether the journal hosts its own backchannel where errors, additions, omissions, and new ideas can be posted, or whether that backchannel exists on blogs, in online knowledge communities, or in networking spaces, doesn’t matter so much as that it can exist. We certainly have the technology.

And the journal Interfaces itself, I think, provides a suggestion as to why this kind of additional discourse and conversation is valuable. You may have noticed that what looks like a fifteen-page article is really an eight-page article with six pages of response pieces, followed by an authors’ response. The responses challenge parts of the original article and enrich other parts with additional information and examples. They illustrate the collaborative nature of knowledge production in the disciplines in a way that citations alone cannot. I couldn’t find anything on the journal’s website about this practice – whether it’s a regular thing, how responses are solicited, and so on. These responses are a spot of openness in a fairly closed publication.

And that points as well to the last thing to say here, because this is far too long already – I don’t think we have to change everything to fix the problems raised here, and I don’t think that if we did change everything it would fix all of them. There’s that scene in Bull Durham where Eppie Calvin gets his guitar taken away because he won’t get the lyrics right. And that’s the connection between FemaleScienceProfessor and Armstrong and Wright — who can feel like a rock star if they’re singing your songs but getting them wrong?

There will always be Eppie Calvins out there, inside and outside of academia – for them, women are wooly because of the stress. Injecting just some openness, making some communication visible, won’t stop Eppie Calvin, but it might keep the next person from replicating his mistakes. And that’s a good thing.


5 thoughts on “Why we should read it before we cite it — no, really!”

I also think that a question could be asked about what it means “to have read” a source before citing it. As we’ve discussed in a number of conversations, academic reading often involves closely reading select chapters of a book or even sections of an article. Of course, skimming introductory and concluding chapters/sections to ensure that the closely read pieces are properly contextualized is also part of such reading, but in no case are we talking about reading a source from beginning to end.

Shaun – yes. Some of the commenters and responders also made similar points about that. I think we’d all agree that one should read it well enough to get it right (or to have made a reasonable interpretation) but beyond that it becomes problematic.

One important distinction, I think, is between the things you cite to contextualize your study (i.e., the lit review) and the things you cite as evidence for important claims about your method or results. Those would definitely call for different kinds of reading. It is also this factor you bring up that makes me think Wright and Armstrong have a very specific type of citation in mind – the kind they think demands full reading – and that they’re not really thinking of those citations where a critical, strategic reading of the original source is enough.

I also wonder if there are disciplinary distinctions in play – whether I think more things should be cited than they do because of the disciplines in which I was trained. In their response piece, for example, they recommend eliminating extraneous citations – anything that is not providing evidence or a direct quote. I kind of shuddered to think of the resulting prose if we directly quoted everything we cited – maybe that’s a disciplinary distinction.

OK, but does this also apply to books? Weren’t we just saying that we can reassure students in WR 121 that they really do not have to read an entire book in order to use some background facts or some opinion from it? What do you think? And as for mis-reading, that’s part of the goal of WR 121 – reading comprehension – as when we give them Gould’s essay “Women’s Brains” and see if they can be sure what he says relative to what the others he quotes say. I would argue that making an interpretation might or might not constitute mis-reading, depending on how much evidence there is and how it is interpreted. But that’s more literary, perhaps.

Well, I think I’d say of course this applies to books. I don’t think the authors are saying “this person only read the relevant chapter of this book, and even though they read it really well, that’s bad.” They’re talking about situations where authors are saying, essentially, “this person is the authority on this topic, so I’m going to cite them even though I haven’t read their article.” Pretty different, I’d say. And the authors are also not, I think, talking about something that *could* be called misreading but is in fact an interpretation — they’re talking more about “this article says to do X and these authors did Q instead.”

Where the article doesn’t get there for me, actually, is in its very black-and-white view of writing and citing. It reduces the process to something more mechanical and more definable than I’m comfortable with. But where I think this research is really interesting, and really important to the conversations you and I have had about FYC, is in the concept of the conversation – there’s some pretty compelling evidence here that citations very frequently do not represent conversations, and that’s an issue. Basically, these scholars aren’t engaging in dialogue with their sources any more than FYC students do some of the time – and the FYC students have a much better excuse. I think realizing this is important, both for how we talk about the process to students and for our expectations of what students will bring to it. If established scholars aren’t living up to that concept, we can’t expect students to get there unless we deliberately show them, and model for them, what that process looks like.

I like how you point out that what appears to be a conversation among scholars may not actually be one. On the other hand, for our students, as you say, any misreading is accidental. And they are freshmen, not supposed experts. But maybe you think that for some scholars the misrepresentation is due to sloppy or lazy work, or to deliberate misrepresentation? All of which means maybe we need to add something to our ILP revision for next fall?
Also something about the Wikipedia page count research I just posted on?