This paper has explored what impact means, and the details of both tradmetrics and altmetrics as tools for understanding the impact of scholarly research. Tradmetrics are built upon decades of study. Altmetrics aim to take advantage of a wealth of newly available data that describe impact from an alternative viewpoint. This paper constructs an argument that the value of social learning communities (of practice) is not represented by either approach, and hypothesises that further investigation may show that this omission impedes the ability to provide a multidimensional understanding of impact.

Technical innovations, some of which are core to the availability of the data that altmetric evaluations are based upon, may well be repurposed to explore the hypothesis. The ability to mash up data from multiple sources, and a movement toward semantic nanopublishing (García-Castro et al. 2012; Groth & Gurney 2010; Hori et al. 2003; Corcho 2006), provide the technological underpinning to do this. Other approaches to publishing data openly and semantically (e.g. linked data, the Database Wiki Project) may also be applied to scholarly publishing (TED 2009; Bizer et al. 2009; Buneman et al. 2011).
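As a minimal sketch of what mashing up impact data from multiple sources might look like, the snippet below merges per-DOI records from two imagined services (a citation database and a social-media tracker). The DOIs, field names, and source data are invented for illustration; real services would each have their own APIs and schemas.

```python
# Hypothetical mash-up of impact signals from multiple sources, keyed by DOI.
# All records here are invented; in practice each would come from a different
# service with its own API.
from collections import defaultdict

citations = {"10.1000/example.1": {"citations": 42}}
social = {"10.1000/example.1": {"tweets": 130, "bookmarks": 17}}

def mash_up(*sources):
    """Combine per-DOI records from several sources into one merged view."""
    combined = defaultdict(dict)
    for source in sources:
        for doi, record in source.items():
            combined[doi].update(record)
    return dict(combined)

profile = mash_up(citations, social)
print(profile["10.1000/example.1"])
# → {'citations': 42, 'tweets': 130, 'bookmarks': 17}
```

The merged profile is exactly the kind of multidimensional view that neither tradmetrics nor altmetrics alone provides; a CoP-aware metric would presumably add further fields to it.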

Although this paper is concerned with exploring impact metrics using CoP theory, this exploration is part of a much larger and more complicated picture. The closed peer-review process utilised by the majority of academic journals arguably helps to prolong a preoccupation with citation-based impact factors (Curry 2012, cited in Galligan & Dyas-Correia 2013). A very recent example of how this can be problematic is the story of Reinhart, Rogoff and Herndon. The latter is a graduate student who studied a much-cited paper by the two eminent economists (a paper oft-cited by finance ministers as an argument for austerity). Herndon noted some fundamental errors in the calculations used in the paper (Alexander 2013; Herndon & Pollin 2013), which had been peer-reviewed and published in a well-ranked journal (SCG n.d.).

Although the sentiment is very hard to support empirically, some suggest this kind of event could be the product of a highly politicised publishing community: “maybe if peer review was about methodology instead of politics this wouldn’t keep happening” (Ralph 2013). Assuming this is a politicised process, it may be out of self-interest that some scholars don’t shout louder about the issues. What is empirically supported (and referenced in this paper) are the structural flaws in how impact is calculated and metrics interpreted. Because these practices are entrenched in long-established power and authority structures built atop the current publishing paradigms, altering behaviour is a multilateral issue.

Academic blogging and post-publication peer review are good examples of the benefits of negating or augmenting the traditional peer-review/publication process (Galligan & Dyas-Correia 2013; Groth & Gurney 2010). Evidence also suggests that open-access publications have wider influence (according to tradmetrics) than closed-access publications (Harnad & Brody 2004). Linking these studies to the main argument the paper presents: if CoP theories can be shown to have relevance to academic publishing communities – and to be a valid factor in a holistic impact metric – then the next logical line of enquiry would be to establish whether open review, open-access publishing, and publishing outside of traditional venues can stimulate better CoP than traditional models.

Comments

Paul Ralph – quoted here – has taught me, and is a young academic for whom I have huge respect and who has in some ways inspired my interest in this field. I have not asked for permission to quote him, so I am hopeful that he doesn’t take any sort of offence (the quote is from a public Facebook page).

I wonder how the sentiment embodied in Paul’s words can be translated into an empirical study that might be able to move peer review towards being ‘about methodology’ (as Paul puts it) rather than politics. This might be the best strategy for attacking the entrenchment…

Nanopublishing is an area I haven’t explored at all, although it is certainly related to this area. One example of its utility is that a ‘citable unit’ could be a paragraph, rather than a whole paper. Applying this technology could be a game-changer for scholarly publishing.
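One way a paragraph could become a citable unit is by giving it a stable, content-derived identifier. The sketch below is purely illustrative: the URI scheme and the `para-` fragment convention are invented here, and a real nanopublication system would follow a community standard rather than this ad-hoc scheme.

```python
# Hypothetical sketch: minting a stable identifier for a single paragraph
# so it could be cited on its own. The URI scheme is invented for
# illustration only.
import hashlib

def paragraph_id(article_doi, paragraph_text):
    """Derive a citable fragment identifier from the paragraph's content."""
    digest = hashlib.sha256(paragraph_text.encode("utf-8")).hexdigest()[:12]
    return f"https://doi.org/{article_doi}#para-{digest}"

uri = paragraph_id("10.1000/example.1", "Altmetrics aim to take advantage of newly available data.")
print(uri)
```

Because the identifier is derived from the text itself, any edit to the paragraph yields a new identifier, which is both a feature (citations point to exact wording) and a versioning challenge.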

Nanopublishing could easily be built using linked data and the ideals of the semantic web. The TED talk and the Bizer et al. paper explain how the semantic web ‘vision’ was developing (as of 2009).
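To make the linked-data connection concrete, a nanopublication is conventionally split into three named graphs: the assertion itself, its provenance, and publication info. The sketch below hand-serialises invented triples to Turtle-style statements using only the standard library; a real implementation would use an RDF library and the actual nanopublication vocabulary, and all the `ex:` URIs here are made up.

```python
# Hypothetical nanopublication expressed as RDF-style triples and serialised
# by hand to Turtle-like statements. All ex: identifiers are invented;
# prov: and dct: stand in for the PROV-O and Dublin Core vocabularies.
ASSERTION = [("ex:paper1", "ex:claims", '"open access widens influence"')]
PROVENANCE = [("ex:assertion1", "prov:wasDerivedFrom", "ex:study1")]
PUBINFO = [("ex:nanopub1", "dct:creator", "ex:author1")]

def to_turtle(triples):
    """Render (subject, predicate, object) tuples as Turtle statements."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

# The three graphs together form one nanopublication.
nanopub = "\n\n".join(to_turtle(part) for part in (ASSERTION, PROVENANCE, PUBINFO))
print(nanopub)
```

The appeal for impact measurement is that the provenance and publication-info graphs travel with the claim, so a metric could weigh who asserted what, and on what basis, at the level of a single statement.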

These are all big ideas, and could have a profound effect on ‘impact’. They are also long-term projects, but deserve due consideration now.

I refer to a paper my brother contributed to for the ‘Database Wiki Project’. Am I citing the paper because I’d like to help my brother enhance his citation counts, or because it is relevant? I believe the answer is both. Some of the ideals of the Database Wiki Project could be applied to the problem of curating scholarly literature.

The story of Thomas Herndon is current. It may also, I think, be somewhat exaggerated by the media. Nonetheless it does show that a paper that had some impact, and was peer-reviewed, still contained very basic errors. Presumably a more junior scholar would have had their workings scrutinised more than Reinhart and Rogoff did.

It also highlights the fact that although this was a paper based on numbers and data, the actual data wasn’t included in the paper (or in the review process). How thorough can a review process be when the actual data isn’t even viewable?

It is a testament to Reinhart and Rogoff that they shared their data with Herndon; there was obviously no harm intended by either of them.

Could an alt-review system be constructed so that it both mitigated this kind of risk and facilitated communities of practice?

Here I’m trying to point out that if CoP are relevant to how much impact a paper carries, and if openness (of review, access, etc.) is relevant to these CoP, then potentially a fantastic strategy for cultivating (and measuring) impact would be to move towards much more open publishing (in terms of review and access).

Hey Joe, interesting paper which summarises the failings (and uses) of traditional metrics in classifying impact. I am assuming that the paper (by mentioning alternative forms of impact) attempts to draw out these alt-methods by mentioning CoPs in academia. I miss a more comprehensive overview of emergent impact metrics, although places like Mendeley are mentioned. While PhD students understand text well, there is a lack of images/illustrations/summary tables. Rock on.

I really like the idea of more ‘living’ documents. For this specific text, however, I think a paragraph about the possibilities and perspectives for this new kind of text – in academia, popular media, and elsewhere – would be a worthwhile addition.

This all seems to be more part of the literature analysis and not yet of the discussion. Moving the altmetrics definition, limitations, and advantages much further forward would be useful, to link it up more closely with the discussion on impact and to then focus on new metrics and CoP. Also, I think your discussion on the definition of impact might be made shorter by focusing on the main point that we need some way of “measuring impact”; the trad methods are one old way… bring them closer to the impact section. I start to repeat myself.

I agree with your comment above, as this paragraph appears to be a key statement for your paper, and I would recommend using it as an introduction to the discussion of new metrics in combination with the material on CoPs from earlier…

It might be worth doing the same thing here, using it at the beginning when talking about the h-index etc. You might have done that – I beg your pardon if you did, since my memory exhibits holes on occasion.