Un-Fathom-able: The Hidden History of Ed-Tech #CETIS14


Audrey Watters

18 Jun 2014

Here are the notes and the slides from my keynote today at CETIS 2014 (which was just an amazingly great event).

A couple of years ago, a friend sent me an exasperated email on the heels of an exclusive technology event he’d attended in Northern California — not in Silicon Valley, but close enough, one with powerful people in the tech industry. Investors. Engineers. Entrepreneurs. Several CEOs of prominent ed-tech startups had been invited to speak there about the state of education — past, present, and future — and their talks, my friend reported, tended to condemn education’s utter failure to adopt or to integrate computing technologies. The personal computing revolution had passed schools by entirely, they argued, and it wasn’t until the last decade that schools had started to even consider the existence of the Internet. The first online class, insisted one co-founder of a company that’s raised tens of millions of dollars in venture capital since then, was in 2001 at MIT.

And okay, in fairness, these folks are not historians. They’re computer scientists, artificial intelligence experts, software engineers. They’re entrepreneurs. But their lack of knowledge about the history of education and the history of education technology matters.

It matters because it supports a prevailing narrative about innovation — where innovation comes from (according to this narrative, it comes from private industry, that is, not from public institutions; from Silicon Valley, that is, not from elsewhere in the world) and when it comes (there’s this fiercely myopic fixation on the future).

The lack of knowledge about history matters too because it reflects and even enables a powerful strain in American ideology and in the ideology of the technology industry: that the past is irrelevant, that the past is a monolithic block of brokenness — unchanged and unchanging until it’s disrupted by technological innovation, or by the promise of technological innovation, by the future itself.

This ideology shapes the story that many ed-tech entrepreneurs tell about education and about their role in transforming it.

One of my favorite examples of this comes from Sal Khan of Khan Academy fame in a video on “The History of Education” he made with Forbes writer Michael Noer back in 2012.

It’s the history of education “from 1680 to 2050” told in 11 minutes, so needless to say it’s a rather abbreviated version of events. It’s not titled “The History of Education in the United States,” although that would be much better because contributions to education from the rest of the world are entirely absent.

Well, except for the Prussians. Americans involved in education reform and education technology love to talk about the Prussians.

Our current model of education, says Khan, originated at the turn of the nineteenth century: “age-based cohorts” that move through an “assembly line” with “information being delivered at every point.”

“This is the Prussian model,” the Forbes writer Noer adds, “and it’s about as inflexible as a Prussian can be.” But Khan notes that there were benefits to this, as “it was the first time people said, ‘No, we want everyone to get an education for free.’”

Then “Horace Mann comes along about 1840” and introduces this concept of free education for everyone to the United States. By 1870, says Khan, public education is pretty common “but even at that point it wasn’t uniform” with different standards and curriculum in different states and cities. So in 1892, “something that tends to get lost in history,” a committee of ten — “somewhat Orwellian” quips Noer — meet to determine what twelve years of compulsory public education should look like.

“It was forward looking for 120 years ago,” says Noer, “but what’s interesting is that we’ve basically been stuck there for 120 years.” Education has been “static to the present day,” agrees Khan.

And from 1892, the story they tell jumps ahead, straight to the invention of the Internet — “the mid-late Nineties,” says Khan as he plots it on his timeline. “The big thing here,” says Noer as the two skip over one hundred years or so of history, “is what you’ve done” with Khan Academy. “One person with one computer can reach millions.” This revolutionizes lectures, Noer argues; it revolutionizes homework. “Class time is liberated,” adds Khan. This changes everything — Khan Academy (founded in 2006) changes everything — that has been stagnant and static since the nineteenth century.

See, this isn’t simply a matter of forgetting history — the history of technology or the history of education or the history of ed-tech. It’s not simply a matter of ignoring it. It’s a rewriting of history, whether you see it as activist or accidental.

To contend, as my friend overheard at that tech event or as Khan implies in his history of education, that schools haven’t been involved in the development or deployment of computers or the Internet, for example, is laughably incorrect — it’s an inaccurate, incomplete history of computing technology, not simply an inaccurate history of ed-tech.

Take the ILLIAC I, the first von Neumann architecture computer owned by an American university, built in 1952 at the University of Illinois. (The US was beaten by several years by universities here in the UK, I should point out — the University of Manchester, I believe.)

Or take PLATO, a computer-based education system — sometimes credited as the first piece of educational computing software — built on the University of Illinois ILLIAC machine in 1960.

Or take the work of Marc Andreessen — now a powerful figure in Silicon Valley, a not-quite-but-almost-billionaire, a venture capitalist with several major investments in ed-tech — who took the work he’d done on the Mosaic Web browser as a student at the University of Illinois in order to start his own company, Mosaic Communications Corporation, which eventually became Netscape Communications Corporation, launching the Netscape Navigator web browser and successfully IPOing in 1995.

The history of education technology is long. The history of education technology is rich. And while it certainly predates Netscape or the von Neumann architecture, the history of education technology is deeply intertwined with the history of computing — and vice versa.

And I could probably stop right there with my keynote. This is really the crux of my message: there’s a fascinating and important history of education technology that is largely forgotten, that is largely hidden — it’s overlooked for a number of reasons, some of which is wrapped up in the ideologies I’ve already alluded to.

All this means, if we’re going to talk about “Building the Digital Institution: Technological Innovation in Universities and Colleges” — the theme of this conference — we probably should know a bit about the history of universities and colleges and technological innovation and build from there.

Despite all the problems that these institutions have — and good grief, they do have problems — universities and colleges have been the sites of technological innovation. They are the sites of technological innovation. Or they can be. In pockets, to be sure. In spurts, to be sure. Certain developments in certain times in certain places, yes. Certain disciplines make breakthroughs; certain disciplines get the credit. Certain universities get the credit for innovating — even when, dare I say, they aren’t actually doing anything that new or transformative.

It’s not surprising, perhaps, that the ed-tech startup co-founder in my opening anecdote would credit MIT with offering the first online course. It’s one of those universities that consistently gets the credit for innovating. Perhaps he was thinking of MIT OpenCourseWare, which launched in 2002 as an effort to put the university’s course materials online in a free and openly licensed format.

(Sidenote number 1: that putting course materials online could be confused with offering a course online speaks volumes about this co-founder’s startup. Sidenote number 2: this particular ed-tech co-founder attended MIT. Sidenote number 3: Sal Khan, again of Khan Academy fame, is also an MIT graduate, and I think his vision for teaching and learning via a site like Khan Academy draws heavily on that MIT academic culture — where class attendance isn’t as important as working through course materials at your own pace with your smartest peers; as long as you can pass the assessments at the end of the course, that’s what matters.)

It’s unlikely, when touting who put classes online first, that this ed-tech co-founder, again from my opening anecdote, was thinking of Fathom, the Columbia University-led online learning initiative founded roughly around the date he ascribed to the first “online course.” It’s unlikely he was thinking of AllLearn, the Stanford, Yale, and Oxford Universities-led online learning initiative of about the same period.

Possibly because it’s like the movie Fight Club. The first rule of the history of online education: we don’t talk about Fathom. We don’t talk about AllLearn.

And this particular ed-tech startup co-founder certainly wasn’t talking about UK e-University, because as with the development of early computers, we (we Americans, I should qualify here) seem to have forgotten that much has happened outside of the US, let alone outside of Silicon Valley.

Ah, ed-tech of the late 1990s and early 2000s. “The Internet!,” as Sal Khan notes excitedly.

Yet we don’t talk much about that period. We don’t talk much about the heady days of the first Dot Com bubble. Have we really forgotten?

It could be that we’re reluctant to talk about the first Dot Com bubble because some of us don’t want to admit we might just be in the midst of another one. Startups — ed-tech and otherwise — overhyped, overfunded, with little to show in terms of profit (or educational outcomes) as a result.

What’s implied by our silence about the Dot Com era perhaps: we know better now than we did then. Or at least the tech is better. Or at least we’re not spending as much money to launch startups these days. Or we care more about learning now. More about learners. Or something.

And yes, some of us simply don’t want to talk about the tech and ed-tech failures of the Dot Com era — the failures of Fathom and AllLearn and UKeU and the like — because of the shame of failure. It’s not just Silicon Valley entrepreneurs who are at fault here. I think industry and institutions (particularly elite Ivy League institutions) have buried those failures — a pity since there’s much to learn.

I realize that most of the folks here know these stories, but I’m going to repeat them anyway:

Fathom: opened in 2000 and closed in 2003

AllLearn: opened in 2001 and closed in 2006

UKeU: opened in 2003 and closed in 2004

Fathom: $30 million invested into the initiative by Columbia University

AllLearn: $12 million invested from various schools and foundations

UKeU: £62 million earmarked for and £50 million spent by the British government

By comparison:

edX: launched in 2012 with an initial $60 million investment from Harvard and (yes) MIT

Coursera: launched in 2012 with a total venture capital investment of $85 million

Udacity: launched in 2012 with a total (disclosed) venture capital investment of $20 million

(So this notion that it’s easier and cheaper to launch a startup in the 2010s — that thanks to open source technologies and the cloud and the like we needn’t funnel so much money into ed-tech startups. Well…)

A little Wayback Machine magic:

[See Slides 8-14]

Here’s what the Fathom website looked like, circa 2001.

Here’s what the AllLearn website looked like, circa 2001.

Here’s what Coursera’s website looks like today.

Here’s what edX’s website looks like today.

Here’s what Udacity’s website looks like today.

Here’s what FutureLearn’s website looks like today.

And here’s what happens when you Google “UK e-University.”

Certainly you can see some changes — improvements no doubt — in Web design. But what’s changed in the decade or so between the Dot Com-era online courses and today’s versions? What’s changed in terms of institutional involvement, what’s changed in terms of branding, what’s changed in terms of course content, and what’s changed in terms of the “ed-tech” under the hood? And what hasn’t changed? What’s the same?

The course content for Fathom and AllLearn was similar to what we see being offered online today — not a surprise, such is the makeup of the typical college course catalog. A broad swath of classes in science, technology, humanities, professional development, business, law. Some 2000 courses offered via Fathom. 110 on AllLearn. 25 on UK e-University. (Is that correct?!) Over 500 courses via Coursera.

The marketing pitch to students hasn’t changed much: “Online courses from the world’s best universities” — that’s the tagline on the edX site. The “world’s best courses” — that’s what Coursera promises. “Enjoy free online courses from leading UK and international universities” — that’s FutureLearn’s promise. The “world’s most trusted sources of knowledge” — that was Fathom’s. The focus — then and now — is on the prestige of the institutions involved. And they are some of the same institutions. Stanford. Yale. Columbia.

AllLearn, short for the Alliance for Lifelong Learning, stressed that its classes were just that: an opportunity for continuing education and lifelong learning. Udacity stresses something different today: it’s about “advancing your career.” It’s about “dream jobs.”

There’s been plenty of hype about these new online platforms displacing or replacing face-to-face education, and part of that does connect to another powerful (political) narrative — that universities do not adequately equip students with “21st century skills” that employers will increasingly demand. But by most accounts, those who sign up for these courses still fall into the “lifelong learner” category. That is, the majority have a college degree already.

The question remains unresolved — a decade later — as to whether or not people will pay for these online courses (or for certification after successful completion) to an extent that these online initiatives can become financially sustainable, let alone profitable. And that’s even accounting for the massive increase since the early 2000s in the cost of higher education (in the US and now elsewhere) alongside the push for college credentials.

From a 2002 New York Times article about universities’ efforts to move online — “Lessons Learned at Dot Com U”: “college campuses and dot-coms had looked at the numbers and anticipated a rising tide of enrollment based on baby boomers and their children as both traditional students and those seeking continuing education. In short, the colleges essentially assumed that if they built it, students would come.”

“We hope it’s enough money to get us to profitability,” Coursera co-founder Daphne Koller told The New York Times in the summer of 2013 when her company announced it had raised another $43 million. “We haven’t really focused yet on when that might be.” Echoing the Field of Dreams reference from a decade earlier — that’s a baseball movie reference, a terrible analogy to invoke in a keynote in the UK, I realize: if you build it, they will come. Indeed, Koller has admitted that her investors have told her that if you do the “right thing” in education, in ed-tech, the profits will follow.

Perhaps they will.

We can see already the pressures for Coursera to find a path to profitability — it has raised $85 million in venture capital after all, not in university endowment or in foundation funding. In recent months, Coursera has shuffled its executive team quite a bit, adding a venture capitalist from fabled investment firm Kleiner Perkins Caufield & Byers as President and adding a former Yale President as CEO. Co-founder Andrew Ng has stepped away from day-to-day work at the company, although he remains Chairman of the Board.

The new CEO of Coursera, Richard Levin, as it just so happens, was at the helm at Yale in the AllLearn era. (He was the chair of AllLearn in fact.) One could assume then, I suppose, that he must have a significant amount of expertise and much wisdom gleaned from the university’s Dot Com era ed-tech ventures. Levin, an economist by training, must know a bit about the history of education and the history of technology and the history of ed-tech. Or at least he should know a bit about the history of the economics of ed-tech. Right?

In an interview with The New York Times this spring, Levin offered this explanation as to why AllLearn did not succeed:

“It was too early. Bandwidth wasn’t adequate to support the video. But we gained a lot of experience of how to create courses, and then we used it starting in 2007 to create very high quality videos, now supported by adequate bandwidth in many parts of the world, with the Open Yale courses. We’ve released over 40 of them, and they gained a wide audience.”

AllLearn failed, he argues, because of bandwidth.

Bandwidth.

“The Internet bandwidth in most homes was inadequate for properly sharing course material,” Levin contends. Actually, AllLearn offered its materials via CD-ROM as well, and like many sites in that period, AllLearn recognized that streaming video content might be challenging for some users. It allowed them to turn off some of the high-bandwidth features and download rather than watch video online.

Remember too, AllLearn was marketed as a “lifelong learning” site. Its pitch was to alumni of the universities involved as well as to the general public. The former would pay about $200 per course; the latter about $250. (One creative writing class charged $800 in tuition.) So are we to believe that those groups — alumni and keen lifelong learners — were unable to access AllLearn due to bandwidth issues? That they’d balk at having good Internet but not balk at the AllLearn fees? It’s an assertion that my colleague Mike Caulfield (among others) has questioned:

“All-Learn folded in 2006, when broadband was at a meager 20% adoption. Today, it’s different, supposedly. It’s at 28%. Are we to really believe that somewhere in that 8% of the population is the difference between success and failure?” asks Caulfield.

Caulfield also questions what Levin learned from OpenYale, the ed-tech venture that followed the demise of AllLearn. By Caulfield’s calculations, those courses were created using “$4 million dollars of Hewlett money. And the videos are basically recordings of class lectures. Four million dollars for forty filmed courses, or, if you prefer, $100,000 a course for video lectures.”

That’s close to the figure you hear bandied about today among professors who’ve created Coursera classes, for what it’s worth.

It’s this discrepancy between costs and revenue, this inability to find a sustainable business model, that plagued the Dot Com-era online initiatives. From a 2003 article in the Columbia student newspaper:

“Fathom spent money at an unsustainable rate. In 2001, Fathom burned through almost $15 million, and generated revenues of only $700,000.”

And this is what plagues Coursera today.

This is (in part) why history matters. Well, history and a bit of humility, I’d add. It’s not easy to reflect on our failures — the failures of Dot Com era ed-tech in this case — and move forward; but that’s how we make progress.

But it's important too to recognize the successes of the Dot Com era and to remember that, despite the failures of initiatives like AllLearn and Fathom, there were many online education programs founded in roughly the same period that didn't fold, that went on to be sustainable, that continue today.

I’d argue, however, that (sadly) one of the most significant successes of the Dot Com era — financial successes, that is — is one that has left an indelible mark on ed-tech: and that’s the success of the learning management system — the technology, the industry.

While learning management system software predates the Internet, it was the Internet that became its big selling point. From The Washington Post in 1999: “Blackboard Chalks Up a Breakthrough; Its Educational Software Lets Colleges Put Classes on the Internet.” (Several years, I’d like to point out, prior to the date in my opening anecdote when MIT supposedly offered the first course online.)

The LMS — the VLE, I should say while here in the UK — has profoundly shaped how schools interact with the Internet. The LMS, the VLE, is a piece of administrative software — there’s that word “management” in there that sort of gives it away, for us in the US at least — software that purports to address questions about teaching and learning but that often circumscribes pedagogical possibilities. You can see its Dot Com roots too in the VLE’s functionality and in its interface. I mean, some VLEs still look like software from the year 2000! The VLE acts as an Internet portal to the student information system, and much like the old portals of the Dot Com era, much like AOL for example, it cautions you when you try to venture outside of it. You can access the VLE through your web browser, but it is not really “of” the web.

The learning management system is a silo, a technological silo, by design. This isn’t because the technology isn’t available to do otherwise. Rather, it’s a reflection of the institution of education. The LMS silo works because we tend to view each classroom as a closed entity, because we view each subject or discipline as atomistic and distinct. Closed. Centralized. Control in the hands of administrators, teachers, and IT but rarely in the hands of learners.

If you look at the much-hyped online courses of today — those offered on the Coursera or the edX platforms, for example — you can see the influence of the LMS. Each course you enroll in is separate, siloed. At the end of the term, your access to your course disappears. There’s a tab on the LMS so you can navigate to the syllabus and a tab for assignments and one for assessments, and there is, of course — thanks, early Internet technology! — a discussion forum. A message board. It isn’t an accident — and it certainly isn’t an innovation — that our online classes look this way.

It doesn’t have to look this way, of course. There are other stories we could tell about education technology’s past; there are other paths forward. Again, there’s this hidden history of ed-tech (and of computer tech as well), and it’s worth considering why so much has been forgotten or overlooked or dismissed. Ted Nelson. Douglas Engelbart.

Or the person I always point to: Seymour Papert.

Computers, argued Papert, should unlock children’s “powerful ideas.” That’s the subtitle to his 1980 book Mindstorms — a book that I insist people in ed-tech read (although admittedly Papert’s work is geared towards younger children rather than adult learners). Mindstorms addresses “how computers can be carriers of powerful ideas and of the seeds of cultural change, how they can help people form new relationships with knowledge that cut across the traditional lines separating humanities from sciences and knowledge of the self from both of these. It is about using computers to challenge current beliefs about who can understand what and at what age. It is about using computers to question standard assumptions in developmental psychology and in the psychology of aptitudes and attitudes. It is about whether personal computers and the cultures in which they are used will continue to be the creatures of ‘engineers’ alone or whether we can construct intellectual environments in which people who today think of themselves as ‘humanists’ will feel part of, not alienated from, the process of constructing computational cultures.”

Computers, Papert insisted, will help children gain “a sense of mastery over a piece of the most modern and powerful technology and establish an intimate contact with some of the deepest ideas from science, from mathematics, and from the art of intellectual model building.”

But as we see with the LMS, ed-tech has come to mean something else. As Papert notes in his 1993 book The Children’s Machine: “Progressive teachers knew very well how to use the computer for their own ends as an instrument of change; School knew very well how to nip this subversion in the bud.”

“Computer-aided inspiration,” as Papert encouraged, has been mostly trumped by “computer-aided instruction.”

And we come full circle now to a technology I mentioned in passing at the beginning of my talk: PLATO, Programmed Logic for Automatic Teaching Operations — a computer system developed at the University of Illinois in the 1960s on its ILLIAC machine.

Early versions of the PLATO system had a student terminal attached to a mainframe. The software offered mostly “drill and kill” and tutorial lessons. But as the PLATO system developed, new, more sophisticated software was added — more problem-based lessons, for example. A new programming language called TUTOR enabled “anyone” to create their own PLATO lessons without having to be a programmer. The mainframe now supported multiple, networked computers. Students could communicate with one another, in addition to the instructor. Fairly groundbreaking stuff, as this was all pre-Internet, pre-Web.

This networked system made PLATO a site for the development of a number of very important innovations in computing technology — not to mention in ed-tech. Forums, message boards, chat rooms, instant messaging, screen sharing, multiplayer games, and emoticons. PLATO was, as author Brian Dear argues in his forthcoming book The Friendly Orange Glow, “the dawn of cyberculture.”

But as with so much ed-tech history, PLATO’s contribution to cyberculture is mostly forgotten. Yet clearly we can see remnants of PLATO in many of the features in ed-tech today, including of course, the learning management system. And if the learning management system has trapped us in a moment of Dot Com era tech — the Internet portal — it may be that ed-tech's roots in PLATO have trapped us in an old “mainframe” mindset as well.

See, there are numerous legacies here. One of the features PLATO boasted: tracking every keystroke that a student made, data on every answer submitted — right or wrong. Sound familiar? PLATO offered more efficient computer-based testing. Sound familiar? It offered the broadcast of computer-based lessons to multiple locations, where students could work at their own pace. Sound familiar? Indeed, by the mid-Seventies, PLATO was serving students in over 150 locations — not just across the University of Illinois campus, but in elementary schools, high schools, and on military bases.

Sensing a huge business opportunity — this notion of tapping into the giant “education market” is not new — the Control Data Corporation, the company that built the University of Illinois mainframe, announced that it was going to go to market with PLATO, spinning it out from a university project into a corporate one.

CDC charged $50 an hour for access to its mainframe, for starters. Each student unit cost about $1,900; the mainframe itself $2.5 million — on the low end — according to some estimates. CDC charged $300,000 to develop each piece of courseware. (So okay, I guess it is getting a little cheaper to develop courseware.)

Needless to say, PLATO as a commercialized computer-aided instruction system product was largely a failure. The main success that CDC had with it: selling an online testing system to the National Association of Securities Dealers, a regulatory group that licenses stockbrokers.

Yet like the learning management system, the idea of computer-assisted instruction has retained an incredibly powerful hold over ed-tech. Indeed, as the history of PLATO shows us, the two are interconnected. Computer-based instruction. Computer-based management.

As we move forward, “building the digital institution,” I think we must retrace and unwind some of these connections.

Why are we building learning management systems? Why are we building computer-assisted instructional tech? Current computing technologies demand neither. Open practices don’t either. Rather, it’s a certain institutional culture and a certain set of business interests that do.

What alternatives can we build on? What can we imagine? A future of learner agency, of human capacity, of equity, of civic responsibility, of openness, for example.

I called this talk “Un-Fathom-able,” thumbing my nose, I confess, at the failures of Fathom and at what I think we may soon see as the failure of Coursera. I called this talk “Un-Fathom-able” too because I fear that there’s much in ed-tech that we’ve failed to explore — partly, I would argue, because we have failed to learn from and to reflect on the history of ed-tech. It’s easy to blame technologists, I suppose. But I think all this runs deeper than that. There’s been a failure of imagination, a failure to do something bold and different, something that, to borrow Papert’s phrasing, unlocks “powerful ideas” in learners rather than simply reinscribing powerful institutional mandates.

But we can’t move forward, I’d argue, until we reckon with where we’ve been.