
learning technology journal rankings

With journal rankings, I find the development of the body mass index (BMI) and its use by insurers a relevant reference: it was commercially useful to have a measure that insurers could apply, and that drove its adoption more than its value to health research or to helping people improve their health.

Issues and points of interest with journal ranking include:

the validity of the h-index across disciplines, where citations, and the number of them, from older or more established journals may be more academically ‘useful’ than others in terms of potential career enhancement for an academic, or simply in contributing something new, interesting and useful to a discipline1

since citations can be positive or negative, the number of citations is not necessarily an indicator of quality in itself, e.g. if citations from books, chapters or conference proceedings are excluded2

the differences between e.g. the ISI journal citation reports and the impact factor, databases such as Web of Science, Web of Knowledge, Scopus, the SCImago rankings, Google Scholar and, to some extent, websites like ResearchGate and Academia.edu3

Web of Science and Google Scholar do not properly handle articles which contain non-English characters or diacritics4

open access is still seen as inferior to subscription models, even though open-access journals charge authors fees for publication, based on e.g. academics submitting flawed papers which were nevertheless published without sufficient peer review5

open-access versus subscription journals being included in or excluded from indices such as ISI, and general versus specialist journals, e.g. an ICT for Development (ICT4D) study found:

Positive: For many disciplinary journals, at least for academic authors, there is greater
kudos in tenure and promotional terms from publishing in one of the disciplinary journals,
as these appear in academic ranking tables, whereas ICT4D journals tend not to.

Neutral: Much eroded since the move from paper publication to online search
accessibility, it is still likely that different journals reach somewhat different audiences.

Negative: It is likely that rejection rates are higher in disciplinary journals. Even if the
paper is accepted, the time and effort required to achieve publication will be higher than
in ICT4D journals, despite refereeing…This means that (notwithstanding access
programmes such as INASP’s PERii) they are less accessible to audiences outside
industrialized country academia; yet such an audience is potentially a prime one for
ICT4D writing, particularly for those seeking to impact policy and practice. Very much
related to this, the non-citation-based impact of items published in open access journals is
likely to be higher than for subscription journal publication.6

An analysis of statisticians found variability in their perceptions of journals according to geographical origin, research interests and employment type7

An analysis of economics journals found differences in impact and influence when looking beyond the impact factor, comparing economics as a discipline with economics more broadly within the social sciences8
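For concreteness, the two metrics discussed above, the h-index and the two-year impact factor, are simple to compute; the disputes are about what the inputs mean. A minimal Python sketch (all citation counts hypothetical) illustrating how the same score can arise from very different underlying data:

```python
# Minimal sketches of the two citation metrics discussed above.
# All citation counts are hypothetical.

def h_index(citations):
    """h-index: the largest h such that h of the author's papers
    have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return cites_to_prev_two_years / citable_items_prev_two_years

# Two authors in different disciplines can share an h-index despite
# very different citation profiles:
print(h_index([25, 8, 5, 3]))   # 3
print(h_index([3, 3, 3]))       # 3

# The same journal scores differently depending on which database's
# citation coverage is used (e.g. whether books, chapters and
# proceedings are counted):
print(impact_factor(260, 130))  # 2.0
print(impact_factor(390, 130))  # 3.0
```

The point of the sketch is that neither number carries its context: the h-index flattens citation profiles shaped by disciplinary norms, and the impact factor depends entirely on which citations the indexing database chooses to count.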

An interesting perspective comes through a Marxist lens. It also reflects what is being measured and how it is valued, as with the BMI example:

…the ranking list forces the intellectual laborer into the role of the “maximized worker,” i.e., the list represents the model to which the worker must conform to in order for the institution to insure a baseline margin of profitability…If the researcher is required to publish her work in a limited number of outlets, her work must conform to, and to some degree be shaped and limited by, the concerns of those outlets…while the exposed ideology of ranking lists runs counter to the professed motivations of science, it functions correctly as a means through which capitalist society successfully reproduces labor power and perpetuates the status quo (Althusser, 2001)9

So, with existing pressures on academic staff to publish research, will using rankings and indices be more beneficial for additional status (staff, department, institution) or for additional value to learning technology as a field? Except that learning technology spans many fields, as per the spreadsheet: psychology, computing, education. How could statisticians and economists who want alternative ranking methods adopted influence ‘knowledge’ and its usefulness, positively or negatively?

For staff looking to collaborate with others in developing countries, how much additional time and resource will be needed to publish either open access or in a subscription-based journal, particularly to influence policy or practice in each country? For example, collaborators are not simply researching in a developing country but also learning how their research study is perceived and evaluated there as ‘useful’…