Friday, August 21, 2015

It seems like forever ago that Sarah H. posted a link to an article on Times Higher Education titled The worst piece of peer review I’ve ever received. The article doesn't seem to be behind a paywall, so it's worth going and having a read either before or after you read this blog post. As I was reading it, my own thoughts about peer review, and about now being a journal editor, surfaced anew. I wish I had taken some notes while I was reading so that this blog post could be richer, but I'll just have to go from memory.

One of the things that stood out to me was this: if your peer reviewers are not happy with your submission, you are doing something right. OK, this is quite paraphrased, but I hope I retained the original meaning. I am not so sure I agree with it. I've done peer review for articles, and when I am not happy (well, "convinced" would be a better word) it's because there are methodological issues, or logical fallacies, or the author hasn't done a good enough review of the literature. In thinking of my role as a peer reviewer, or even a journal editor (still doesn't feel "real"), my main goal isn't to dis someone's work. My goal is geared more toward understanding. For instance, if an article I review has logical fallacies in it, or is hard to follow (even if logical), then what hope is there for the broader journal audience if I have problems with the article? I see the role of the editor and the reviewer NOT as gatekeeper but as counselor: someone who can help you get better "performance" (for lack of a better word).

Now, this article brought up some other general areas as well, which I have made into categories:

Peer Review as quality assurance
This concept to me is completely bunk. It assumes, to some extent, that all knowledge is known and therefore you can have reasonable quality assurance. The truth is that we research and publish because all knowledge isn't known and we are in search of it. This means that what we "know" to be "true" today may be invalidated in the future by other researchers. Peer review is about due diligence and making sure that the logic followed in the article is sound. We try to match articles with subject experts because the "experts" tend to read more about that topic and can act as advisors for researchers who are not always that deep into things (everyone needs to start somewhere, no?).

Peer Reviewers are Experts, or the experts
I guess it depends on how you define expertise. These days I am asked to peer review papers on MOOCs because I am an expert. However, I feel a bit like a fraud at times. Because I've been working on projects with the Rhizo Team, and I've been pursuing my doctorate, my extracurricular reading on MOOCs has drastically declined. I have read a lot on MOOCs, but I still have a drawer full of research articles on MOOCs that I have only read the abstracts of. The question that I have then is this: How current should an expert be? Does the expert need to be on the bleeding edge of research, or can he lag behind by 18 months?

Validity of peer review
Peer review is seen as a way of validating research. I think that this, too, is bunk. Again, unless I am working with the team that did the research, or try to replicate it, I can't validate it. The best I can do is ask questions and try to get clarifications. Most articles are 6,000-9,000 words. That is often a very small window through which we look to see what people have discovered. It encompasses not only the literature review and the methods, but also the findings and the further-research section. That's a lot! I also think that the peer reviewer's axiology plays a crucial role in whether your research is viewed as valid or not. It's funny to read in class about the quant vs. qual "battles". Now that those are over with (to some extent, anyway), the battle rages over what is an appropriate venue for publication, and the venue determines the value of the piece you authored. If your sources are not peer-reviewed articles but rather well-researched blog posts from experts in the field, all that some peer reviewers will see is blog posts, and those are without value to them. To some extent it seems to me that peer reviewers are outsourcing the credibility question. If we see blog posts in the citation list, the work of verifying what people are using as their arguments is thrust upon us (which makes more work for peer reviewers). If something is in a peer-reviewed journal, we can be lazier and assume that the work passes muster (then again, I've seen people claim that I support the concept of digital natives when in fact I was quoting Prensky and setting up an argument against the notion... laziness).

Anonymity in Peer Review
I think anonymity is an issue. Peer review should never be anonymous. I don't think that we can ever reach a point of impartial objectivity, and as such we can never be non-biased. I think that we need to own our biases and work toward having them not influence our decisions. I also think that anonymous peer reviews, instead of encouraging open discussion, are just walls behind which potential bad actors can hide. I think it's the job of editors to weed out those bad actors, and there should be standards for review where both strong and weak aspects of the article can be addressed.

Peer Review as a yay or nay
Peer review systems have basically three decisions: accept with minor revisions, accept with major revisions, or reject. While this may have worked in the print days of journals and research, it doesn't work today - or at least it doesn't work for me. Peer reviewers are stuck with a yay-or-nay decision on articles, and so are journal editors. There are articles that I've spent time giving feedback on to the authors (as a peer reviewer). Since it wasn't a minor revision, I chose "major" revision. Other peer reviewers either selected major revisions or rejection. There have been cases where the major revisions warranted a re-evaluation of the article (IMHO) after the revisions were done, but the articles were rejected by the editors. I don't know if the editors of those journals had more submissions than they knew what to do with, but having peer review as a yay/nay decision seems quite wrong to me. I believe that if the resources exist to re-review an article after updates are made, the journal should re-review it.

Peer Review Systems suck
This was something that was brought up in the THE article as well. My dream peer review system would provide me with something like a Google Docs interface where I could easily highlight areas, add commentary in the margins, and point people to additional readings that could help them. The way systems work now, while I can upload a document, I can't necessarily work easily in a word processor to add comments. What I often get are PDFs, and those aren't easy to annotate. Even if I annotate them, extracting those comments is a pain for the authors. The systems seem built for an approve/deny framework and not for a mentoring and review framework.

Time-to-publication is insane
I hate to bring this up, but I have to, and at the same time I feel guilty as a journal editor. In my ideal world I would accept an article for review, have people review it, and if it passed muster (either right away or eventually) it would go up on a website, ready to be viewed by readers. The reality is that articles come in, and I get to them when I have free time. Getting peer reviewers is also time consuming because not everyone responds right away, so there is some lag there. If there are enough article candidates for an issue of the journal, I get to them sooner. If there are only one or two submissions, I get to them later. I would love to be able to get to them right away, but the semiotics of academic journals favor the volume# issue# structure, which implies that at least x-many articles need to be included in every issue. Given the semiotics of the IT system that publishes our journal, I feel a bit odd putting out an issue with one or two articles at a time.

So, I, and other researchers, will work hard to put together something, only to have it waiting in a review queue for months. This is just wrong. However - at least on my end - it's also a balancing of duties. I do the journal editing on top of the job that pays the bills, so journal editing is not my priority at the moment. I also want to work on my own exploration of ideas with people like the rhizo folks, so that also eats up my time (eats up my time sounds so negative, I actually like working with the rhizo folks - alternative words for this are welcomed in the comments section of this blog post). I would hazard a guess that other journal editors, who do editing for free, also have similar issues. So, do we opt for paid editors or do we re-envision what it means to research and publish academic pieces?

I think I wrote a lot. So, I'll end this post here and ask you: what are your thoughts on this process? How can we fix it?
