McDonaldisation? Or using scale to improve learning…

Some things do not scale well. I know quite a few small restaurants that serve Great Food in a lovely setting (try this one), but it seems like all large-scale chains resemble McDonalds. Or think of clothes – I don’t think Dries Van Noten could scale to H&M dimensions without degradation?

Much of the critique against technology in learning in general, and MOOCs in particular, assumes that scaling up education or learning will have the same effect: that the lovely professor lecture with intensive student interaction will be replaced by a US-dominated, one-size-fits-all, superficial talking head. The McDonaldisation of higher education…

An obvious counter-argument is that the ‘lovely professor lecture with intensive student interaction’ is mostly a fiction anyway: many lectures are formal, going-through-the-motions events in front of already large groups of students who rarely interact beyond the occasional ‘do we need to know this for the exam?’.

But there is another point which I find much more intriguing. Some things actually improve as they scale up. A search engine like Google would probably not work very well if there were only a few hundred documents on the Web. An encyclopedia like Wikipedia would not be similar in quality to Encyclopaedia Britannica if only four of us contributed to it. The Long Tail would not be very long without scale. Amazon can recommend books because of its scale. Etc. (I don’t think we really understand very well when scale helps, how, and why – Barabási is one researcher who studies this… Fascinating!)

So, I’m really interested to hear about efforts that try to leverage scale in MOOCs, that use scale as an opportunity rather than working around it as a problem… Maybe peer and self-assessment work better at massive scale (see this recent paper)? No doubt recommending content or activities will work better in large-scale deployments? Visualisation of learning analytics in dashboard applications for learners or teachers will also make more sense (help users make more sense) at scale? I would love to hear about other ways people try to leverage scale for learning…
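Why would recommendation work better at scale? One toy way to see it: a co-occurrence recommender simply counts which resources appear together in learners’ sessions, and those counts only become reliable once many learners have contributed. A minimal sketch (my own illustration, not any particular MOOC platform’s algorithm; the session data and item names are made up):

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_cooccurrence(sessions):
    """Count how often two items appear in the same learner session.
    With only a handful of sessions the counts are noise; with many
    thousands they start to reflect real affinities between resources."""
    co = defaultdict(Counter)
    for items in sessions:
        for a, b in combinations(set(items), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(co, item, k=3):
    """Recommend the k items most often seen alongside `item`."""
    return [other for other, _ in co[item].most_common(k)]

# Hypothetical session logs: lists of resources each learner touched.
sessions = [
    ["intro-video", "quiz-1"],
    ["intro-video", "quiz-1"],
    ["intro-video", "reading-2"],
]
co = build_cooccurrence(sessions)
print(recommend(co, "intro-video", 1))  # ['quiz-1']
```

The same code run on three sessions or three million is identical; only the data changes – which is exactly the sense in which scale is the feature, not the problem.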

2 Comments

As always, this made me think, so let me speak up for once. One element of scale is quantity, but another element is quality. In the latter sense a MOOC has the scale of a course, and I wonder whether that is the scale at which we can find the most meaningful data. People (and I believe, with somebody like Gigerenzer, that people are naturally good at statistics) do not identify with courses. They don’t say: “I was in the same course as such and such”, or, “People who followed such and such a course have shown great insight in that.” As far as I know, they will say such things about their school.

So, why not talk about, or create, Massively Open On-Line Schools? It would generate as much data (well, more), but the data would in my view be more relevant, precisely because they tie into a context which we know to be significant. I don’t think this is as true for MOOCs. People relate to those much as they relate to McDonalds: if you’re hungry and don’t have time, or don’t want a fuss, you take the easy path to eating (certainly if you have brainwashed, sugar-addicted children 😉).

I’d see MOOCs going this way too, and that’s not bad; no worse than cheaper clothes anyway. That brings me to the lovely professor lecture: it’s an intrinsically expensive thing to attend. There is value in paying lovely professors more while at the same time charging promising students less, and still allowing for quality interaction. Not many are able to sail the seven seas for a lecture with the lovely professor; hopefully MOOCs will solve this. Most of us would save a lifetime to get (the kid) to the best possible school; we should get technology to leverage motivation at that scale better.

To me, this is a very nice example of the benefits that MOOCs might deliver. The abstract explains it all:
“Large online courses often assign problems that are easy to grade because they have a fixed set of solutions (such as multiple choice), but grading and guiding students is more difficult in problem domains that have an unbounded number of correct answers. One such domain is derivations: sequences of logical steps commonly used in assignments for technical, mathematical and scientific subjects. We present DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain. DeduceIt supports assignments in any logical formalism, provides students with incremental feedback, and aggregates student paths through each proof to produce instructor analytics. DeduceIt benefits from checking thousands of derivations on the web: it introduces a proof cache, a novel data structure which leverages a crowd of students to decrease the cost of checking derivations and providing real-time, constructive feedback. We evaluate DeduceIt with 990 students in an online compilers course, finding students take advantage of its incremental feedback and instructors benefit from its structured insights into course topics. Our work suggests that automated reasoning can extend online assignments and large-scale education to many new domains.”
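The proof cache the abstract mentions can be thought of as a memoisation layer over an expensive derivation checker: because thousands of students produce overlapping derivation steps, each distinct step only needs to be verified once. A minimal sketch of that idea (my own illustration, not DeduceIt’s actual code; `check_step` stands in for whatever logic checker the formal domain provides):

```python
import hashlib

class ProofCache:
    """Memoise the results of checking individual derivation steps,
    so steps shared by many students are verified only once."""

    def __init__(self, check_step):
        self._check_step = check_step  # expensive verifier, called on cache misses
        self._cache = {}               # step fingerprint -> bool
        self.misses = 0

    def _key(self, premises, rule, conclusion):
        # Canonical fingerprint of one derivation step; sorting the
        # premises makes the key independent of the order students list them.
        text = "|".join(sorted(premises)) + "#" + rule + "#" + conclusion
        return hashlib.sha256(text.encode()).hexdigest()

    def check(self, premises, rule, conclusion):
        key = self._key(premises, rule, conclusion)
        if key not in self._cache:
            self.misses += 1
            self._cache[key] = self._check_step(premises, rule, conclusion)
        return self._cache[key]
```

With 990 students submitting largely similar proofs, the second and every later occurrence of a step is a cache hit, which is how a crowd of students can decrease the cost of checking while still getting real-time feedback.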