One of the most thoughtful writings on the topic that I have read is a conversational series of articles initiated by Kevin Kumashiro, called Thinking Collaboratively about the Peer-Review Process for Journal-Article Publication and published in the Harvard Educational Review. It is an excellent piece of writing, and even though it was published in 2005 it is as relevant today as ever. For example, here’s a sample from one of my favorite authors, William Pinar, that appears in this paper:

For professors of education, working pedagogically should structure all that we do, not just what happens in our classrooms or in our offices. Working pedagogically should structure our research as we labor to teach our students and our colleagues what we have understood from study and inquiry. It must also structure our professional relations with each other, especially during those moments of anonymity when we are called upon to critique research and inquiry that is under consideration for publication in our field’s scholarly journals. When we are anonymous, we are called upon to perform that pedagogy of care and concern to which we claim to be committed. The ethical conduct of our professional practice demands no less.

Peer review will continue to receive attention and interest as higher education faces formidable technological and socio-cultural pressures. We wrote about this issue in one of our past papers (pp. 770–771), and I am going to quote it at length here because of its relevance:

“Peer review is the first example of how seemingly non-negotiable scholarly artifacts are currently being questioned: while peer review is an indispensable tool intended to evaluate scholarly contributions, empirical evidence questions the value and contributions of peer review (Cole, Cole, & Simon, 1981; Rothwell & Martyn, 2000), while its historical roots suggest that it has served functions other than quality control (Fitzpatrick, 2011). On the one hand, Neylon and Wu (2009, p. 1) eloquently point out that “the intentions of traditional peer review are certainly noble: to ensure methodological integrity and to comment on potential significance of experimental studies through examination by a panel of objective, expert colleagues”, while Scardamalia and Bereiter (2008, p. 9) recognize that “like democracy, it [peer-review] is recognized to have many faults but is judged to be better than the alternatives”. Yet, peer review’s harshest critics consider it anathema. Casadevall and Fang (2009), for instance, question whether peer review is in fact a subtle cousin of censorship that relies heavily upon linguistic negotiation or grammatical “courtship rituals” to determine value, instead of scientific validity or value to the field, while Boshier (2009) argues that the current, widespread acceptance of peer review as a valid litmus test for scholarly value is a “faith-” rather than “science-based” approach to scholarship, citing studies in which peer review was found to fail in identifying shoddy work and to succeed in censoring originality… The challenge for scholarly practice is to devise review frameworks that are not just better than the status quo, but systems that take into consideration the cultural norms of scholarly activity, for if they don’t, they might be doomed from their inception.
A recent experiment with public peer review online at Nature, for example, revealed that scholars exhibited minimal interest in online commenting and informal discussions, with findings suggesting that scholars “are too busy, and lack sufficient career incentive, to venture onto a venue such as Nature’s website and post public, critical assessments of their peers’ work” (Nature, 2006, ¶ 9). Shakespeare Quarterly, a peer-reviewed scholarly journal founded in 1950, conducted a similar experiment in 2010 (Rowe, 2010). While the trial elicited more interest than the one in Nature, with more than 40 individuals contributing who, along with the authors, posted more than 300 comments, the experiment further illuminated the fact that tenure considerations impact scholarly contributions. Cohen (2010) reported that “the first question that Alan Galey, a junior faculty member at the University of Toronto, asked when deciding to participate in The Shakespeare Quarterly’s experiment was whether his essay would ultimately count toward tenure”. Considering the reevaluation of such an entrenched and centripetal structure of scholarly practice as peer review, along with calls for recognizing the value of diverse scholarly activities (Pellino et al., 1984), such as faculty engagement in K–12 education (Foster et al., 2010), we find that the internal values of the scholarly community are shifting in a direction that may be completely incompatible with some of the seemingly non-negotiable elements of 20th century scholarship.”

This seminar will bring together some of my current and past research. A lot of my work in the past examined learners’ experiences with conversational and (semi)intelligent agents. In that research, we discovered that the experience of interacting with intelligent technologies was engrossing (pdf). Yet, learners often verbally abused the pedagogical agents (pdf). We also discovered that appearance (pdf) may be a significant mediating factor in learning. Importantly, this research indicated that “learners both humanized the agents and expected them to abide by social norms, but also identified the agents as programmed tools, resisting and rejecting their lifelike behaviors.”

A lot of my current work examines experiences with open online courses and online social networks, but what exactly do pedagogical agents and MOOCs have to do with each other? Ideas associated with Artificial Intelligence are present in both the emergence of xMOOCs (EdX, Udacity, and Coursera emanated from AI labs) and certain practices associated with them – e.g., see Balfour (2013) on automated essay scoring. Audrey Watters highlighted these issues in the past. While I haven’t yet seen discussions on the integration of lifelike characters and pedagogical agents in MOOCs, the use of lifelike robots for education and the role of the faculty member in MOOCs are areas of debate and investigation in both the popular press and the scholarly literature. The quest to automate instruction has a long history, and lives within the sociocultural context of particular time periods. For example, the Second World War found US soldiers and civilians unprepared for the war effort, and audiovisual devices were extensively used to efficiently train individuals at a massive scale. Nowadays, similar efforts at achieving scale and efficiencies reflect problems, issues, and cultural beliefs of our time.

I’m working on my presentation, but if you have any questions or thoughts to share, I’d love to hear them!

I spent part of last week in Dallas at the annual Emerging Technologies for Online Learning conference, organized by SLOAN-C. I describe my presentation at the conference in this post, but the sessions below were all relevant to my work:

Jim Groom’s keynote. Jim’s Domain of one’s own work resonates with me. Providing students with digital tools that will enable them to learn the ways of the web is significant, but the idea also resonates with me in the context of digital scholarship, which is one of my research strands. In particular, I see Jim’s project being applicable for PhD students who should be equipped with the tools, skills, and experiences to understand networked, open, and digital scholarship. I’ve met Jim briefly in the past, but we never had a chance to chat much, so it was great to be able to spend some more time together.

Amy Collier’s and Jen Ross’ plenary. The session focused on giving insightful descriptions of the messy and compromised realities of learning in contrast to the narratives of efficiency and ease suggested by numerous educational technology providers.

Rolin Moe organized a number of fantastic panels on issues pertaining to the field and I was excited to participate in the one focused on academics in educational technology, along with Jen Ross, Amy Collier, Jill Leafstedt, Jesse Stommel, and Sean Michael Morris. We had a wonderful conversation, but 50 minutes is never enough to cover this topic. The Sloan-C organizing committee should consider making this session a longer (free-to-attend) workshop.

I took the following two pictures on two recent trips of mine. Similarities and differences abound, but one difference (other than the language) stands out for me. And that difference reminds me of an unfortunate state of affairs in the learning technologies field.

Look at the photo below. It’s from a menu that I came across in Dublin.

And the next one: It’s from a menu that I came across in Stockholm.

Other than the differences in the language, do you notice anything else? (Hint: Look at the typography.) Wouldn’t it be amazing if instructional/learning designers paid that much attention to the details as well? Yes, beauty and aesthetics are probably the least of our problems (so say the critics), but they count, and they count more and more in a world where beauty (constructed as it may be) surrounds us.

This is another one of those mini posts related to the changing nature of the work that academics do; specifically, publishing. I wrote this after being directed to the Public Library of Science site from Tony Hirst’s tweet:

If you visit the website mentioned (here), you will see that the Public Library of Science will be making available a number of metrics intended to evaluate the reach of published articles (I played with a similar concept here). These metrics (which will accompany each article) include reader notes and comments, ratings, social bookmarks, citations in the academic literature, and so on. Not only is this a step toward transparently assessing the value of a publication, it also provides another impetus for academics to seriously consider engaging with and participating in social media spheres. In an age where ongoing debate, collaboration, interaction, participation, and engagement are daily buzzwords when envisioning improved education, shouldn’t the same ideas apply to our publications? If you are interested in these issues, you may like to look at this cloudwork (and especially the comments made by Giota on credibility, resistance, legitimacy, and power structures). It’s an interesting conversation.

[This posting is divided into 2 parts. This is part 2 and it provides an exercise in popularity metrics for online open access journals. The first part of this posting, providing an editable spreadsheet of online open access journals, is available here.]

In this post I demonstrate several points that I have been playing with over the years. On the one hand, the post takes a simple concept (the popularity of academic journals) and attempts to rethink it in the context of the digital, interconnected space. On the other hand, it demonstrates the power of the “cloud” and the opportunities provided by posting information in online spaces that are accessible via standardized formats (such as XML). The posting also serves as an example of what kinds of opportunities mashups can provide to universities/education. And finally, I just wanted to learn how to remix data via online services.

As you may have seen in my previous posting, we collected a list of all the open access online journals that we could find that are focused on publishing educational technology research. While having the list online in an open spreadsheet format allows anyone interested to update it, it also allows us to manipulate and remix the data. As a simple example, consider the issue of journal rankings. I’ve seen it debated on ITForum, on twitter, at the University of Minnesota where I did my PhD, and at the University of Manchester where I currently work. The issue is that “top tier” journals are good for tenure, but there are debates on what constitutes “top tier.” Is it readership? Rejection rates? Quality? Citations? All the above? I could link to a few different resources here, but the only one I will refer interested readers to is the European Science Foundation ERIH listings that I personally use as a guide.

My intention in this post is to rank the online open access journals according to “popularity.” As I see the rolling eyes through the tubes of the internet, let me say that popularity in this case refers to the number of sites that link to a particular page. Higher numbers denote more inbound links (= higher popularity). If you want to see the popularity metrics without reading the details of how this was done, the end result (that is generated every time you click on the link) is available on this page. At the time of writing, the least linked-to journal had 0 inbound links and the most linked-to journal had 31,534 links.

To be fair (or, “a word of caution”): the popularity index is not without its faults. Popularity doesn’t mean quality or even readership. The number of inbound links can be easily manipulated. The measure leaves out RSS subscriptions and the number of individuals receiving TOC alerts. Also, inbound links carry equal weight regardless of where they come from. Another issue relates to journals changing URLs. For example, the Journal of Computer-Mediated Communication used to be hosted at Indiana University but is now part of the Wiley InterScience group (and is still open access). Also, the URL we used to link to a journal might not be the most appropriate one. To fully understand the problems with this method, one has to dive under the hood of the whole process, and that’s what I do next.

The implementation in detail

The journal URLs are posted in a Google Spreadsheet, which allows the data to exist online in a variety of formats (e.g., CSV and HTML files). Those files can then be read into Yahoo Pipes (essentially, a drag-and-drop mashup tool). Once Yahoo Pipes has a list of journal URLs, those URLs are sent through the Yahoo Site Explorer API, which generates “information about the pages linking to a particular page or pages within a domain.” That information includes the magic numbers used in this exercise (i.e., the number of pages linking to a particular journal via its URL). Once the numbers are generated, Yahoo Pipes exports them as an RSS feed. That feed can then be imported back into a Google Spreadsheet. And that’s it. Whenever a journal URL is added to the spreadsheet, the pipe generates a popularity number for it without anyone needing to do anything. A new journal appears? No problem: just add the URL and its inbound links will be counted automatically. If you want the full details, feel free to grab the actual Yahoo Pipe that does all the work and clone it (at this point I should thank Mat Morisson and Tony Hirst, whose postings on Yahoo Pipes and online data manipulation helped me rethink how I was doing this). If you don’t have a Yahoo account and are interested in how the implementation looks, the image at the top of this post is the actual pipe.
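The same flow — fetch the published spreadsheet as CSV, look up each journal’s inbound-link count, sort by popularity — can be sketched in a few lines of Python. This is a minimal sketch, not the actual pipe: the spreadsheet URL is a placeholder, and since the Yahoo Site Explorer API that powered the original lookup no longer exists, the `inbound_link_count` function is left as a stub to be wired to whichever link-data service you have access to.

```python
import csv
import io
import urllib.request

# Placeholder for the published-to-the-web CSV URL of the journal spreadsheet.
SHEET_CSV_URL = "https://spreadsheets.example.com/journals.csv"

def fetch_journal_urls(csv_url):
    """Read the published spreadsheet CSV and return the journal URLs."""
    with urllib.request.urlopen(csv_url) as resp:
        text = resp.read().decode("utf-8")
    return [row["url"] for row in csv.DictReader(io.StringIO(text)) if row.get("url")]

def inbound_link_count(url):
    """Stub for a link-popularity lookup. The original post used the
    Yahoo Site Explorer API, which has since been retired; plug in
    whatever backlink service is available to you."""
    raise NotImplementedError

def rank_journals(urls, count_fn=inbound_link_count):
    """Pair each journal URL with its inbound-link count, most-linked first."""
    return sorted(((count_fn(u), u) for u in urls), reverse=True)
```

Because the lookup is passed in as `count_fn`, the ranking logic can be exercised with any source of counts (or a cached table) without touching the fetch step.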

A final word of caution

This is not a valid method for deciding where to send your next paper :). Yet, as I see more and more conversations online about open access (e.g., BJET published an editorial on the topic on Aug 12, 2009) and alternative ways to evaluate one’s contribution to their chosen field, this simple example may ignite ideas for evaluating journal contributions (in the UK, at least, the issue of journal impact is currently being debated as we await the transformation of the Research Assessment Exercise). Also, the ranking itself is less interesting to me than the implications of our ability to remix available data to think about journal “impact”. Finally, if you are managing an online open access journal and feel that the URL used is not representative of where users link to, please feel free to correct it by visiting the original listing. If we used an erroneous link, we apologize and thank you for helping us correct it.

This past week, my colleague and I had the pleasure of hosting a group of 25 faculty members from the Kingdom of Saudi Arabia. In cooperation with the National Center For e-learning and Distance Learning, we held a two-week workshop/training session for them on e-learning, digital technologies, and education. Our conversations over these days touched upon multiple aspects of online and distance learning, ranging from cultural issues to techno-social affordances, LMS evaluation, quality assurance, creativity, and pedagogical transformation. While I had a curriculum designed for my workshop days, I followed about half of it; the rest was revised on the spot according to what we felt we needed to cover and the needs that arose. In reality, the workshop wouldn’t have been successful had the curriculum been set in stone, but, if you are reading this far, I am probably preaching to the wrong choir.

Below is a list of items/ideas surrounding workshop issues. Other than being helpful to me, they might also be helpful to you if you are planning to lead a workshop/training session:

People seem to like lists. I don’t know why, but they do. I think it was Curt Bonk who wrote that people like lists and acronyms (probably because they are memorable), but the last item that I gave to my colleagues before they left today was a list of 10 things to keep in mind when using technology in education.

This group was especially interested in learning from our experience with e-learning. Frequent questions were: How does the University of Manchester do e-learning? How do you train instructors/professors in using technology in education? What is your e-learning agenda? How do you convince instructors to adopt technology? What went wrong and what did you learn?

Pedagogy and technology-enhanced pedagogy should be central and this should be made explicit from the very beginning. By George (!) enough with pedagogy-enhanced technology!

University networks are just plain weird. On the one hand, my computer (which is registered on the network by its MAC address, a unique identifier) would not connect to the network via Ethernet. On the other hand, more than one person can log on to the lab machines using the same username and password. Why the first is a problem while the second is fine baffles me.

Practical activities and discussion trump theory.

People also seem to like to explore the courses that others have created and investigate specific design ideas or specific things that worked well or didn’t. I had my own courses to showcase and a few other open courses, but I wasn’t able to invite others to talk about their own experiences/courses. Perhaps the next time.

Every university is different and it’s always difficult to give specific input on what might work in a specific situation. Recipes for success are generally recipes for disaster. For example, in some of these universities, the university’s budget is a non-issue. Yes, you read this right. In this economic climate. This was something new for me. To be more specific, it doesn’t matter if Blackboard costs money and Moodle doesn’t.

Studying your learners helps. Did you know that online learning and distance education are pressing matters in Saudi Arabia because 38% of the country’s population is between the ages of 10 and 14, and the country needs to provide higher education to these people? It’s an exciting time for our field in this part of the world.

Respectfulness, politeness, openness, appreciation, and kindness (along with a desire to improve education) go a long way.

I will end by posting a link to a twitpic posting that occurred during class time when we were trying to explore how the college of applied arts could promote student work online. And, in the spirit of the cross-cultural learning that transpired during the sessions, I look forward to visiting my newfound colleagues in the near future in Saudi Arabia. Inshallah (which, incidentally, is a common Cypriot expression and is not derived from a specific religion)… oh, the things that this blog’s visitors learn are never-ending.