Badges, learning analytics, and big data

Digital Badges in Education: Trends, Issues, and Cases is a book that’s heavily US-focused. This isn’t anything new, and is particularly the case with the Open Badges ecosystem. It does, however, mean that for those of us in other countries and continents, some of the discussion can seem a little… insular.

Chapter 10 is entitled ‘Digital Badges, Learning at Scale, and Big Data’. As often happens with edited collections of articles, there’s a lot of overlap between this and previous chapters. The authors make similar points to others around the value of badges over grades, as well as breaking down courses into more granular chunks. Importantly, they also talk about the value of e-portfolios as a place to store the evidence behind badges:

While universities often provide support for e-portfolio creation, little is done at scale to help support the generation of rich, meaningful content that fully encapsulates the totality of the college experience. Yes, you might have a platform to share your experiences, but what exactly should you say? Where should you begin? Badges can act as a starting point from which to share your experiences. If a badge is the proof, the e-portfolio can be the context that connects the dots between validated experiences. A university commitment to badges creates a foundation from which all students can easily develop a rich, evidence-based portfolio, supporting content creation in the same way that we support technology.

Noting that MOOCs (Massive Open Online Courses) are “the dominant form of learning at a large scale”, the authors explain that traditional success measures and metrics do not work in the same way as at institutions: “of the thousands (or tens of thousands) of students who enroll in a MOOC, many enter the course without necessarily intending to complete the course in its entirety.” The big problem with using badges with MOOCs, contend the authors, lies in “the accurate assessment of knowledge on a massive scale”. They suggest the ‘credibility index approach’ and the ‘human-computing approach’ as ways around this.

Credibility index approach: peer-review scores from students who themselves score higher on multiple-choice quizzes are weighted more heavily than others.

Human-computing approach: machine-graded student-submitted essays are assigned both a score and a confidence interval, and are then passed to a number of peers who also review them.
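To make the two approaches concrete, here’s a rough Python sketch of how they might work. Everything here – the function names, the weighting scheme, and the confidence threshold – is my own invention for illustration, not something taken from the chapter:

```python
# Hypothetical sketch of the chapter's two assessment-at-scale ideas.
# The weighting scheme and threshold are assumptions, not the authors'.

def credibility_weighted_score(reviews, quiz_scores):
    """Credibility index: weight each peer's review score by that
    peer's own multiple-choice quiz result (a value in [0, 1])."""
    total_weight = sum(quiz_scores[peer] for peer, _ in reviews)
    if total_weight == 0:
        return None  # no credible reviewers to draw on
    return sum(quiz_scores[peer] * score for peer, score in reviews) / total_weight

def route_essay(machine_score, confidence, threshold=0.8):
    """Human-computing: accept the machine grade when its confidence
    is high enough; otherwise send the essay on to peer reviewers."""
    if confidence >= threshold:
        return machine_score
    return "needs peer review"

reviews = [("alice", 80), ("bob", 60), ("carol", 90)]
quiz_scores = {"alice": 0.9, "bob": 0.3, "carol": 0.8}
print(credibility_weighted_score(reviews, quiz_scores))  # 81.0
print(route_essay(72, confidence=0.5))  # needs peer review
```

In this toy example, Bob’s generous-but-uninformed score counts for much less than Alice’s and Carol’s, which is the whole point of the credibility index.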

While I’m a big fan of peer learning and assessment, this kind of approach is very institutional and conservative. I’d like to see better and deeper forms of assessment, and a different understanding of ‘rigour’. The problem seems to be perceptions of what employers and university admissions offices will deem ‘valid and reliable’, whereas I think we can see badges as being on a different spectrum to existing qualifications and credentials.

A badge platform will be a repository of big data related to badge earners and creators. The platform will capture the metadata for everything related to digital badges, as well as how users navigate through the platform, browsing and earning badges. Hence, it is important to design effective strategies to manage and utilize these data. Fields such as learning analytics provide a starting point for exploring these data.

Learning analytics in a badges context would mean “measuring, collecting, analyzing, and reporting the metadata created by badge earners” – something that is explicitly not done by the Mozilla badge backpack, in order to respect user agency and privacy.

However, in an opt-in way, learning analytics could make badges much more discoverable than they are at the moment:

A platform might begin to recommend a collection of badges based on data from previous badge earners. These badges might be intentionally related to a single collection (for instance, all the badges on a specific topic are created by the same source), or the platform analytics engine might identify collections over time as users cluster around subsets of badges.
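The clustering idea the authors gesture at could be as simple as co-occurrence: recommend badges that other earners who share a badge with you also hold. A minimal sketch, assuming made-up badge names and a toy list of earners (none of this is from the book):

```python
# Hypothetical co-occurrence recommender: badges frequently earned
# alongside ones you already hold get suggested first.
from collections import Counter

def recommend(user_badges, all_earners, top_n=3):
    """user_badges: set of badges this user holds;
    all_earners: list of badge sets, one per previous earner."""
    counts = Counter()
    user_set = set(user_badges)
    for earned in all_earners:
        if user_set & set(earned):  # this earner overlaps with the user
            counts.update(set(earned) - user_set)
    return [badge for badge, _ in counts.most_common(top_n)]

earners = [{"git", "python", "testing"},
           {"git", "python"},
           {"html", "css"}]
print(recommend({"python"}, earners))  # ['git', 'testing']
```

A real platform engine would presumably do something far richer (collaborative filtering over the badge metadata the quote describes), but even this crude version shows how collections can emerge from earner behaviour rather than being curated up front.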

Playing a form of buzzword bingo, the authors also manage to discuss ‘adaptive learning’ which seems to be the notion of ‘personalised learning’ but with added data. “Adaptive learning might use aspects of the data to create a personalized learning guideline for each individual, presenting the most efficient path to each learner’s goals”.

There are, of course, concerns about the quality of badges. Popularity is no indication of utility, and adaptive environments still require human judgement. The authors hand-wave towards this, saying that “many questions still exist about how badges, learning at scale, and big data will impact education”. As a result, this chapter is much less useful than it could be.