MOOCs and ‘facilitation’

What are your thoughts on this – moderation and/or facilitation of MOOCs?

Considering the time, effort, and cost of developing these free courses (more information is available here or here or here, among other sources), what are your thoughts on how we manage the course, the comments and discussion during the run, and the subsequent comments and discussion during re-runs?

Do you have support from people with technical and/or academic backgrounds monitoring the course to keep comments on track and answer pertinent questions? Are these paid positions, or part of an existing role? Do you actively check the comments? If so, what for, why, and what do you do?

Do you design-in an element of real-time collaboration on the course (facilitation of discussion, round-up videos, Google Hangouts, etc.), and if so are these sustainable over multiple runs of the course? If you’ve done these before, but then designed them out of the course for re-runs, why?

All comments and feedback welcome – I’m trying to understand how we move MOOCs forward and maintain institutional ‘control’ where there is little (financial) reward.


Hi David, first of all, my comments will be based on some of my findings, not really on FutureLearn realities, as I am only looking at specific learning interactions inside some FutureLearn courses. But since you are looking for any feedback, here are my two cents. I am increasingly hearing about the use of automated tools to read assignments (a bit here on robo-readers http://ignatiawebs.blogspot.be/2015/01/robo-readers-towards-automated-mooc.html , as well as the teacherbot approach http://ignatiawebs.blogspot.be/2015/05/sian-bayne-keynote-on-teacherbot.html), and there seems to be an increased use of teaching assistants (we all know why that is: lower wages). I remember a talk on this during the last eMOOCs conference, from the University of Louvain-la-Neuve I think, but I cannot find it.

Then on another note, it cannot be a coincidence that peer review is used as much as it is (and with a sound research basis: peer reviews tend to provide more accurate evaluations than facilitator grading – there is research on this by the Sorbonne… sorry, not sure where I hid it in my readings… ah, wait, here: http://www.slideshare.net/bachelet/emoocs2015-does-peer-grading-work-r-bachelet ).

From my research I do see that learners tend to learn individually in MOOCs at first (at first, meaning based on personal character – liking to do things for themselves, without the help of others). Once those learners who also have a social streak get stuck, they open up to other learners (either within or outside the course), which tends to provide them with new answers to unsolved questions. Most learners only turn to facilitators for technical or course help, not so much for content help (only very few do). And having said that, there are teachers among the MOOC learners anyway, so there is a bit of teacher support, whether formal or not: http://er.educause.edu/articles/2015/2/enrollment-in-mitx-moocs-are-we-educating-educators

Hope this makes some sense – now back to tackling my chapter on data analysis and findings, which seems to be never-ending… please send over some of your wonderful and quick brain to help me get my chapter done.

Thanks for this, Inge. There is quite a push in MOOC circles, not just at FL from my understanding, to run courses more often. This possibly comes from different angles: not just the ‘keep the University of XYZ on the course list page’ attitude, but also a wish to maximise both the development effort and the number of learners taking each course.

Of course, running a course more often means a greater workload to keep the contents fresh, up to date, and accurate, as well as to meet learners’ expectations of an institution’s (live) involvement.

I have also seen growth in the number of courses that use more learner-centric tools like peer review to ‘grade’ (for want of a better word) other learners’ involvement or understanding. Is this how we can design out an institution’s involvement in, and responsibility for, the day-to-day comments?

All interesting thoughts… I wonder where the MOOC model will go if, once a course is designed, we end our active involvement there?

Tamsyn Smith

Hi David! We have a variety of support in place when a MOOC is live. Most MOOCs here are ‘managed’ by a central team, but the one that I’ve worked on has a dedicated team of PhD students who are paid a small amount to help with facilitation. These students also contribute to course materials and write blog posts. Academics involved in the MOOC do not receive any additional payment, but spend some time on facilitation.

We originally ran Google Hangouts, but these were stressful and not as successful as we’d hoped, as very few people watched them live. We have continued with a 15–25 minute video each week answering learners’ questions. The viewing figures for these have been significantly higher than for the Hangouts, and they still make the course feel current.

Thanks Tamsyn. Is this level of moderation having an impact on how often you’re planning courses and re-runs?

Tamsyn Smith

Not especially – that decision mostly comes down to the lead academic and how it fits with their workload (or recruitment dates for particular courses).

Terese Bird

Hi David. I don’t work on MOOCs, so I cannot directly answer your question about first or second runs and the associated discussion. But I thought you might find it helpful to consider the comments on the facilitator’s involvement in the DS106 connectivist MOOC (http://geoffcain.com/blog/ds106/ds106-open-pedagogy-or-personality-cult/). It doesn’t follow the FutureLearn model of MOOC, but I would think the comments there will inform your inquiry nonetheless.