EFQUEL is currently running a twelve-week series of blog posts on MOOCs.[1] I am due to write a post in a couple of weeks' time; this blog post is a draft, comments welcome! This post argues that the current discourse around the concepts of xMOOCs (primarily based around interaction with content and essentially adopting a behaviourist learning approach) and cMOOCs (which focus on harnessing the power of social media and interaction with peers, adopting a connectivist learning approach) is an inadequate way of describing the variety of MOOCs and the ways in which learners engage with them. It will introduce an alternative means of categorising MOOCs, based on their key characteristics, and describe how this can be used as part of the 7Cs of Learning Design framework (Conole 2013) to design more pedagogically informed MOOCs, which enhance the learner experience and ensure quality.

There are a number of national agencies concerned with quality in teaching and learning. Specifically in relation to quality and e-learning, EFQUEL[2] is Europe's professional body for quality in e-learning. EFQUEL's mission is 'to promote excellence and innovation in education in order to achieve qualitative learning opportunities in Europe and beyond'.[3]

A fundamental aspect of ensuring a good learner experience is the quality of the course. It is important to distinguish between three main aspects of quality: quality audit, quality assurance and quality enhancement.

In general, quality can be defined as 'the standard of something as measured against other things of a similar kind; the degree of excellence of something: quality of life'.[4] Therefore, arguably, quality in e-learning is the degree to which it measures up to 'good learning' (although that might be construed as a somewhat contentious statement). It certainly points to the notion of excellence and worth.

Quality assurance mechanisms are now requirements in most formal educational institutions and indeed many countries have a requirement for institutions to undergo externally reviewed quality audits on a regular basis. Institutional quality audit aims ‘to contribute, in conjunction with other mechanisms, to the promotion and enhancement of high-quality in teaching and learning’.[5]

The Quality Assurance Agency in the UK describes quality assurance as ‘the means through which an institution ensures and confirms that the conditions are in place for students to achieve the standards set by it or by another awarding body’ (QAA 2004), and quality enhancement as ‘the process of taking deliberate steps at institutional level to improve the quality of learning opportunities…. Quality enhancement is therefore seen as an aspect of institutional quality management that is designed to secure, in the context of the constraints within which individual institutions operate, steady, reliable and demonstrable improvements in the quality of learning opportunities’ (QAA 2006).

Ehlers et al. (2013) argue that quality is very much the condition that determines how effectively and successfully learning can take place.

They go on to pose the following questions in relation to the quality of MOOCs:

What are MOOCs actually aiming at?

Can the quality of MOOCs be assessed in the same way as any defined university course with traditional degree awarding processes? Or do we have to take into account a different type of objective with MOOC learners?

Are learners mostly interested in only small sequences of learning, tailored to their own individual purposes, who then sign off and move on to other MOOCs because their own learning objectives have been fulfilled?

Discussing MOOCs and quality, Downes argues that:

When we are evaluating a tool, we evaluate it against its design specifications; mathematics and deduction tell us from there that it will produce its intended outcome. It is only when we evaluate the use of a tool that we evaluate against the actual outcome. So measuring drop-out rates, counting test scores, and adding up student satisfaction scores will not tell us whether a MOOC was successful, only whether this particular application of this particular MOOC was successful in this particular instance (Downes 2013).

Therefore quality is a fundamental facet that needs to be considered in relation to both the design and delivery of MOOCs. We need to develop better metrics to understand the way in which learners are interacting with MOOCs and hence their experience of them.

Whilst mechanisms to ensure quality are well established in formal educational institutions, such mechanisms are not in place, certainly not in any formal sense, for MOOCs. Arguably this is a key issue that needs to be addressed if MOOCs are to offer valuable and viable learning experiences and be sustainable in the longer term.

As mentioned earlier, to date, MOOCs have been classified as either xMOOCs or cMOOCs. I want to argue that such a classification is too simplistic and in this section put forward an alternative mechanism for describing the nature of MOOCs.

I want to suggest that a better classification of MOOCs is in terms of a set of twelve dimensions:

1. The degree of openness
2. The scale of participation (massification)
3. The amount of use of multimedia
4. The amount of communication
5. The extent to which collaboration is included
6. The type of learner pathway (from learner-centred to teacher-centred and highly structured)
7. The level of quality assurance
8. The extent to which reflection is encouraged
9. The level of assessment
10. How informal or formal it is
11. Autonomy
12. Diversity

The last two are taken from Downes (2010). MOOCs can then be measured against these twelve dimensions (Table 1). The following MOOCs are used to illustrate how different MOOCs map to these twelve dimensions:

1. Connectivism and Connective Knowledge 2011 (CCK).[6] The course took place over twelve weeks and used a variety of technologies, for example blogs, Second Life, RSS readers and UStream. Course resources were provided using gRSShopper and online seminars were delivered using Elluminate. Participants were encouraged to use a variety of social media and to connect with peer learners, creating their own Personal Learning Environment and network of co-learners.

2. Introduction to Artificial Intelligence (AI) 2011 (CS221).[7] The course ran over three months and included feedback and a statement of accomplishment. A small percentage of the participants enrolled were registered on the campus-based Stanford course. The course was primarily based around interactive multimedia resources and is now hosted on the Udacity platform.

3. OLDS (Learning Design) 2013.[8] The course ran over eight weeks, with a ninth reflection week. It was delivered using Google Apps, with the main course site built in Google Drive; Google forums and Hangouts were also used. Cloudworks[9] was used as a space for participants to share and discuss their course artefacts and to claim badges against course achievements.

4. Openness and innovation in elearning (H817).[10] The course is part of the Masters in Open and Distance Education offered by the Open University UK. H817 runs between February and October 2013; however, the MOOC component of the course consists of 100 learning hours spread over seven weeks from March 2013 and is open to a wider audience than those registered on the OU course. The course adopts an 'activity-based' pedagogy, with an emphasis on communication through blog postings and the forum. Participants have the opportunity to acquire badges for accomplishments.

5. Introduction to Openness in Education (OE).[11] The course tutor advocates that "learning occurs through construction, annotation and maintenance of learning artifacts," which is the philosophy underpinning the design of the course. Participants could acquire badges for various accomplishments.

Table 1 demonstrates that, across the twelve dimensions, the five MOOCs exhibit low, medium and high degrees of each. I would argue that, at a glance, this classification framework gives a far better indication of the nature of each MOOC than the simple xMOOC/cMOOC distinction.

The MOOC criteria described in this blog fit under the Conceptualise C of the 7Cs of Learning Design framework. They can be used to plan the design of a MOOC against these twelve criteria. Table 2 shows how these criteria can be used to characterise a Continuing Professional Development (CPD) course for medics. The course is informal and is aimed at medics in a local authority in the UK.

Table 2: Example of using the MOOC criteria in the design of a course

Dimension

Degree of evidence

Open

High – The course is built using open source tools and participants are encouraged to share their learning outputs under a Creative Commons licence.

Massive

Low – The course is designed as Continuing Professional Development for medics in a single local authority.

Use of multimedia

High – The course uses a range of multimedia and interactive media, along with an extensive range of medical OER.

Degree of communication

Medium – Participants are encouraged to contribute to a number of key debates on the discussion forum, as well as keeping a reflective blog on how the course relates to their professional practice.

Degree of collaboration

Low – The course is designed for busy working professionals, so collaboration is kept to a minimum.

Learning pathway

Medium – There are two structured routes through the course – an advanced and a lite version.

Quality Assurance

Medium – The course is peer-reviewed prior to delivery.

Amount of reflection

High – Participants are asked to reflect continually during the course; their personal blogs are particularly important in this respect.

Certification

Medium – Participants can obtain a number of badges on completion of different aspects of the course and receive a certificate of attendance.

Formal learning

Low – The course is informal and optional.

Autonomy

High – Participants are expected to work individually and take control of their learning; there is little in the way of tutor support.

Diversity

Low – The course is specialised for UK medics in one local authority.
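As a rough illustration (my own sketch, not part of the 7Cs framework's tooling), a profile like Table 2 can be captured as a simple mapping from dimension to rating, which makes it easy to compare two MOOC designs side by side. The dimension names and ratings below follow Table 2; the `compare` helper is an illustrative assumption.

```python
# Ratings on the twelve dimensions, ordered low < medium < high.
LEVELS = {"low": 0, "medium": 1, "high": 2}

# Profile of the CPD course for medics, transcribed from Table 2.
cpd_medics = {
    "open": "high",
    "massive": "low",
    "use of multimedia": "high",
    "degree of communication": "medium",
    "degree of collaboration": "low",
    "learning pathway": "medium",
    "quality assurance": "medium",
    "amount of reflection": "high",
    "certification": "medium",
    "formal learning": "low",
    "autonomy": "high",
    "diversity": "low",
}

def compare(profile_a, profile_b):
    """Return the dimensions on which two MOOC profiles differ,
    mapped to the size of the gap (1 or 2 rating steps)."""
    return {
        dim: abs(LEVELS[profile_a[dim]] - LEVELS[profile_b[dim]])
        for dim in profile_a
        if profile_a[dim] != profile_b[dim]
    }
```

For example, comparing this profile against a hypothetical massified variant of the same course would flag only the "massive" dimension, with a gap of two rating steps.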

The 7Cs framework can be used both to design and to evaluate MOOCs. The tools and resources associated with each of the Cs enable the designer to make more informed design decisions. The evaluation rubric under the Consolidate C enables them to ensure that the design is fit for purpose, hence ensuring the quality of the MOOC and, ultimately, the learner experience.

It is evident that there are a number of drivers impacting on education. Firstly, universities are increasingly looking to expand their online offerings and make more effective use of technologies. Secondly, there is increasing demand from higher student numbers and greater diversity. Thirdly, there is a need to shift from knowledge recall to the development of skills to find and use information effectively. In this respect, there is a need to enable learners to develop 21st-century digital literacy skills (Jenkins 2009) to equip them for an increasingly complex and changing societal context. Finally, given the proliferation of new competitors, there is a need for traditional institutions to tackle new competitive niches and business models.[13] MOOCs represent a sign of the times; they instantiate an example of how technologies can disrupt the status quo of education and are a forewarning of further changes to come. Whether or not MOOCs will live up to the current hype is a moot point; what is clear is that we need to take them seriously. More importantly, for both MOOCs and traditional educational offerings, we need to make more informed design decisions that are pedagogically effective, leading to an enhanced learner experience and ensuring quality.

Finally, the key value of MOOCs for me is that they are challenging traditional educational institutions, making them think about what they are offering, how it is distinctive and what the unique learner experience will be at their institution. As Cormier states:

When we use the MOOC as a lens to examine Higher Education, some interesting things come to light. The question of the 'reason' for education comes into focus (Cormier 2013).

Furthermore, UNESCO estimates that more than 100 million children cannot afford formal education;[14] MOOCs could provide them with a real lifeline to get above the poverty line. More broadly, MOOCs provide access to education for millions. As Creelman notes:

Whatever you think of them they are opening up new learning opportunities for millions of people and that is really the main point of it all (Creelman 2013).

So, for me, the value of MOOCs in promoting social inclusion, coupled with the way they make traditional institutions look harder at what they are providing their students, signifies their importance as a disruptive technology. Therefore, whether they survive or not, if MOOCs result in an opening up of education and a better quality of learner experience, that has to be a good thing.

References

Conole, G. (2013). Current thinking on the 7Cs of Learning Design. e4innovation.com.

This entry was posted on Saturday, May 25th, 2013 at 2:44 pm and is filed under General.

8 Responses to “A new classification for MOOCs”

Grainne, this is a really useful analytical framework and I can easily apply it to the Brookes MOOC #FSLT13, which is currently ongoing, thank you. Great for my PhD too.
Jenny Mackness, George Roberts, Liz Lovegrove and I have had a paper accepted for the JOLT special edition on MOOCs. I am just doing the revisions now, but it would be really useful to cite your framework in our paper as it adds greatly to current understanding of MOOCs.

Great post; the twelve dimensions you are proposing here could work as a helpful grid for evaluating different MOOCs. Do you see those dimensions as of equal importance, or do you consider some to be more important than others? My feeling is that, depending on the MOOC, some of those dimensions may be more critical than others. But then again, mapping one's MOOC against all these criteria would be a useful exercise, as its designers would have to explicitly state their choices - as in the example you have provided - and its evaluators would have a useful report to base their evaluation on.

Also, have you devised this ‘MOOC 12 dimension grid’ primarily for informal, non-assessed MOOCs, that may provide some sort of proof of completion such as badges, but not credit? My point is that, while for informal, non-credit bearing MOOCs the grid would be perfect, when it comes to credit-bearing MOOCs, assessment and feedback would probably need to be two additional dimensions in the grid.
But even for informal learning, I was thinking that perhaps assessment and feedback could be added as separate dimensions from ‘certification’ and could refer to any type of assessment employed - self diagnostic, formative, summative - via quizzes, short essays/blogposts, digital artefacts etc. and feedback, including automated feedback and peer feedback.

Hi Timost, thanks for these comments! I imagined the use of this for both formal and informal MOOCs, but I agree I may need to think more about assessment. In terms of weighting, I agree that some dimensions are more important than others; not sure how to represent this though….

Exciting to see MOOCs being discussed in a rational way that will allow them to be known in a way that can be measured and tested. My worry is the needs of institutions restricting what MOOCs could be if they were somehow unaffiliated with the general agreement on what constitutes education, which seems to be the limiting factor here.

This is likely unhelpful but could we be neutral enough to determine whether MOOCs (connectivist type for now) result in something–make a noticeable mark on the environment before we decide what that something is and how to teach it?

I would ask: how are we measuring how open a MOOC is? I agree with how the sample MOOCs in this blog post are mapped on the grid, but I’d like to make explicit what our openness measures are. I’ll have a go:

Whether there are barriers to enrol
Whether there are technological barriers to access the materials (can I get them on a smartphone for example)
Whether the materials are licensed CC for reuse
Whether the platform will allow reuse of the materials (by anyone)
Whether the goings-on of the course can be observed by those not enrolled