Changing the way we think about universities

Month: July 2018

Comments: A critique of the aims and achievements of OER in the context of debates about openness, inclusion, and equity. The author argues that OER still forms part of, and replicates, neoliberal education systems and is not yet disruptive. Disruption may become possible if OER can be developed at the local level, involving students, educators, and those who do not currently have an opportunity to produce knowledge for educational purposes.

Abstract: As a phenomenon and a quandary, openness has provoked conversations about inequities within higher education systems, particularly with regard to information access, social inclusion, and pedagogical practice. But whether or not open education can address these inequities, and to what effect, depends on what we mean by “open” and specifically, whether openness reflexively acknowledges the fraught political, economic, and ethical dimensions of higher education and of knowledge production processes. This essay explores the ideological and rhetorical underpinnings of the open educational resource (OER) movement in the context of the neoliberal university. This essay also addresses the conflation of value and values in higher education – particularly how OER production processes and scholarship labor are valued. Lastly, this essay explores whether OER initiatives provide an opportunity to reimagine pedagogical practices, to reconsider authority paradigms, and potentially, to dismantle and redress exclusionary educational practices in and outside of the classroom. Through a critique of neoliberalism as critically limiting, an exploration of autonomy, and a refutation of the precept that OER can magically solve social inequalities in higher education, the author ultimately advocates for a reconsideration of OER in context and argues that educators should prioritize conversations about what openness means within their local educational communities.


JCLIS is open access in publication, politics, and philosophy. In a world where paywalls are the norm for access to scholarly research, the Journal recognizes that removal of barriers to accessing information is key to the production and sharing of knowledge.

Authors retain intellectual property and copyright of manuscripts published in JCLIS, and JCLIS applies a Creative Commons (Attribution-NonCommercial) license to published articles. If an article is republished after initial publication in JCLIS, the republished article should indicate that it was first published by JCLIS.

Comments: A detailed article reviewing OA and impact metrics, and discussing common misconceptions and misunderstandings of both. A review of OA mandates and policies is also provided. Other interesting discussions include those on Altmetrics, Eigenfactor, SNIP, and JOI. An extensive list of potentially useful references is given.

Abstract: Access to research results is imperative in today’s robust digital age, yet access is often prevented by publisher paywalls. Open Access (OA) is the simple idea that all research should be free for all to access, use, and build upon. This paper will focus on three critical areas of the OA landscape: its impact on scholarship and the public, the obstacles to be overcome, and its advancements. The impact of OA actions and initiatives has been difficult to quantify, but the growing number of studies on OA have shown mostly overwhelmingly positive results. Cultural norms within academia, such as the reliance on the journal Impact Factor (IF) to assess the quality of individual research articles, have impeded the progress of OA. Conversely, federal mandates and institutional policies have supported the OA movement by requiring that scholarly publications be deposited into institutional or subject repositories immediately following publication. As information professionals, library and information science (LIS) professionals have a responsibility as practitioners, authors, and editors to support OA and encourage other academics to do the same.

Comments: This article is a small-sample case study comparing metadata from Google Scholar and Scopus. Although the study covers only 36 articles in 12 journals (and the resulting ~7000 citations), it proposes some interesting methodologies. In particular, the methods for dealing with match-merging, citation duplicates, and indexing speed may be of interest.

Abstract: A new methodology is proposed for comparing Google Scholar (GS) with other citation indexes. It focuses on the coverage and citation impact of sources, indexing speed, and data quality, including the effect of duplicate citation counts. The method compares GS with Elsevier’s Scopus, and is applied to a limited set of articles published in 12 journals from six subject fields, so that its findings cannot be generalized to all journals or fields. The study is exploratory, and hypothesis generating rather than hypothesis-testing. It confirms findings on source coverage and citation impact obtained in earlier studies. The ratio of GS over Scopus citation varies across subject fields between 1.0 and 4.0, while Open Access journals in the sample show higher ratios than their non-OA counterparts. The linear correlation between GS and Scopus citation counts at the article level is high: Pearson’s R is in the range of 0.8–0.9. A median Scopus indexing delay of two months compared to GS is largely though not exclusively due to missing cited references in articles in press in Scopus. The effect of double citation counts in GS due to multiple citations with identical or substantially similar meta-data occurs in less than 2% of cases. Pros and cons of article-based and what is termed as concept-based citation indexes are discussed.
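The abstract mentions double citation counts caused by multiple citations with identical or substantially similar metadata, but does not spell out the matching procedure here. As a rough illustration only, not the paper's actual method, the sketch below flags potential duplicates by grouping citation records on a normalized (title, year) key; all field and function names are hypothetical:

```python
import re
from collections import defaultdict

def normalize(title):
    """Lowercase, drop punctuation, and collapse whitespace so that
    trivially different renderings of the same title compare equal."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def find_duplicates(citations):
    """Group citation records by (normalized title, year); any group
    larger than one would inflate a citation count if counted twice."""
    groups = defaultdict(list)
    for c in citations:
        groups[(normalize(c["title"]), c["year"])].append(c)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

# Illustrative records (invented for this example):
citations = [
    {"title": "Open Access and Citation Impact", "year": 2017},
    {"title": "Open access and citation impact.", "year": 2017},
    {"title": "University Rankings Reconsidered", "year": 2016},
]
dupes = find_duplicates(citations)
```

Here the first two records collapse to the same key and are flagged as one duplicate pair; a real implementation would also need fuzzy matching for author lists and near-miss titles, which this sketch deliberately omits.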

Comments: This short article gives a quick literature review and summarizes criticisms of impact factors and university rankings.

Abstract: In this essay we explore parallels in the birth, evolution and final ‘banning’ of journal impact factors (IFs) and university rankings (URs). IFs and what has become popularized as global URs (GURs) were born in 1975 and 2003, respectively, and the obsession with both ‘tools’ has gone global. They have become important instruments for a diverse range of academic and higher education issues (IFs: e.g. for hiring and promoting faculty, giving and denying faculty tenure, distributing research funding, or administering institutional evaluations; URs: e.g. for reforming university/department curricula, faculty recruitment, promotion and wages, funding, student admissions and tuition fees). As a result, both IFs and GURs are being heavily advertised—IFs in publishers’ webpages and GURs in the media as soon as they are released. However, both IFs and GURs have been heavily criticized by the scientific community in recent years. As a result, IFs (which, while originally intended to evaluate journals, were later misapplied in the evaluation of scientific performance) were recently ‘banned’ by different academic stakeholders for use in ‘evaluations’ of individual scientists, individual articles, hiring/promotion and funding proposals. Similarly, URs and GURs have also led to many boycotts throughout the world, probably the most recent being the boycott of the German ‘Centrum fuer Hochschulentwicklung’ (CHE) rankings by German sociologists. Maybe (and hopefully), the recent banning of IFs and URs/GURs are the first steps in a process of academic self-reflection leading to the insight that higher education must urgently take control of its own metrics.