What would happen if the United States decided to cancel all student debt? A Bard College economics research team (Scott Fullwiler, Stephanie Kelton, Catherine Ruetschlin, and Marshall Steinbaum) decided to explore what such a bold near-term future could look like.

The computer scientist Bret Victor gave a keynote back in 2013 that I return to again and again. (See? Keynotes need not be a waste of time and energy!) In “The Future of Programming,” he offers a history of programming – or more accurately, a history of programming developments that were never widely adopted. That is to say, not the future of programming.
The conceit of Victor’s talk: he delivers it as if it’s 1973, using an overhead projector in lieu of PowerPoint slides, and the future he repeatedly points to is our present-day. With hindsight, we know that the computer languages and frameworks he talks about haven’t been embraced, that this future hasn’t come to pass. But as Victor repeats again and again, it would be such a shame if the inventions he recounts were ignored; it would be a shame if in forty years, we were still coding in procedures in text files in a sequential programming model, for example. “That would suggest we didn’t learn anything from this really fertile period in computer science. So that would kind of be a tragedy. Even more of a tragedy than these ideas not being used would be if these ideas were forgotten.” But the biggest tragedy, says Victor, would be if people forgot that you could have new ideas and different ideas about programming in the first place, if a new generation was never introduced to these old ideas and therefore believed there is only one model of programming, one accepted and acceptable way of thinking about and thinking with computers. That these new generations “grow up with dogma.”
Victor mentions an incredibly important piece of education technology history in passing in his talk: PLATO (Programmed Logic for Automatic Teaching Operations), built on the ILLIAC I at the University of Illinois. PLATO, which operated out of the university’s Computer-based Education Research Laboratory (CERL) from 1960 to 1993, does represent in some ways a path that education technology (and computing technology more broadly) did not take. But if and when its innovations were adopted (and, yes, many of them were), PLATO remained largely uncredited for its contributions.
PLATO serves in Victor’s talk as an example, along with Douglas Engelbart’s NLS, of the development in the 1960s of interactive, real-time computing. In forty years’ time, Victor tells his imagined 1970s audience, our user interfaces will never have any delay or lag because of Moore’s Law and because “these guys have proven how important it is to have an immediately responsive UI” – a quip that anyone who’s spent time waiting for operating systems or software programs to respond can understand and chuckle remorsefully about.
This idea that computers could even attempt to offer immediate feedback – typing a letter on a keyboard and immediately seeing it rendered on a screen – was certainly new in the 1960s, as processing was slow, memory was minute, and data had to move from an input device back to a central computer and then back again to some sort of display. But the “fast round trip” between terminal and mainframe was hardly the only innovation associated with PLATO, as Brian Dear chronicles in his book The Friendly Orange Glow. That very glow was another one – the flat-panel plasma touchscreen invented by the PLATO team in 1967. There were many other advances too: the creation of time-sharing, discussion boards, instant messaging, a learning management system of sorts, and multi-user game-play, to name just a few.
The subtitle of Dear’s book – “The Untold Story of the PLATO System and the Dawn of Cyberculture” – speaks directly to his larger project: making sure the pioneering contributions of PLATO are not forgotten.
If and when PLATO is remembered (in education technology circles at least), it is as an early example of computer-assisted instruction – and often, it’s denigrated as such. Perhaps that should be no surprise – education technology is fiercely dogmatic. And it was already fiercely dogmatic by the 1960s, when PLATO was first under development. The field had, in the decades prior, developed a certain set of precepts and convictions – even if, as Victor contends in his talk at least, computing at the time had (mostly) not.
Dear begins his book where many histories of education technology do: with the story of how Harvard psychology professor B. F. Skinner had, in the late 1950s, visited his daughter’s fourth grade classroom, been struck by its inefficiencies, and argued that teaching machines would ameliorate this. The first mechanisms that Skinner built were not computerized; they were boxes with levers and knobs. But they were designed to offer students immediate feedback – positive reinforcement when students gave the correct answer, a key element to Skinner’s behaviorist theories. Skinner largely failed to commercialize his ideas, but his influence on the design of instructional machines was significant nonetheless, as behaviorism had already become a cornerstone of the nascent field of educational psychology and a widely accepted theory as to how people learn.
At its outset, the Computer-based Education Research Laboratory at the University of Illinois did not hire instructional technologists to develop PLATO. The lab was not governed by educational psychologists – behaviorists or otherwise. The programming language that was developed so that “anyone” could create a lesson module on the system — TUTOR — did not demand an allegiance to any particular learning theory. As one education professor told Brian Dear, CERL did not operate “under any kind of psychological banner. They just didn’t seem to be driven by psychological underpinnings. They were driven by a more pragmatic approach: you work with students, you work with content, you work with the technology, you put it together in a way that feels good and it will work. Whether it’s consistent with somebody’s psychology is a quickly irrelevant question.”
But it seems more likely, if we examine the history of PLATO (and perhaps even the histories of education technology and of computing technologies), that this is not really an irrelevant question at all – not in the long run at least. Certainly, the open-ended-ness of the PLATO system, as well as the PLATO culture at UI, fostered the myriad of technological innovations that Dear chronicles in The Friendly Orange Glow. But the influence of psychology on the direction of education technology – and to be clear, this was not just behaviorism, of course, but cognitive psychology – has been profound. It shaped the expectations for what instructional technology should do. It shaped the expectations for what PLATO should be. (I’d add too that psychological theories have been quite influential on the direction of computing technology itself, although I think this has been rather unexamined.)
The Friendly Orange Glow is a history of PLATO – one that has long deserved to be told and that Dear tells with meticulous care and detail. (The book was some three decades in the making.) But it’s also a history of why, following Sputnik, the US government came to fund educational computing. It’s also – in between the lines, if you will – a history of why the locus of computing, and educational computing specifically, shifted to places like MIT, Xerox PARC, and Stanford. The answer is not “because the technology was better” – not entirely. The answer has to do in part with funding – what changed when these educational computing efforts were no longer backed by federal money and part of Cold War era research but by venture capital. (Spoiler alert: it changes the timeline. It changes the culture. It changes the mission. It changes the technology.) And the answer has everything to do with power and ideology – with dogma.
Bret Victor credits the message and content of his keynote to computer scientist Alan Kay, who once famously said that “the best way to predict the future is to build it.” (Kay, of course, appears several times in The Friendly Orange Glow because of his own contributions to computing, not to mention the competition between CERL and PARC where Kay worked and their very different visions of the future). But to be perfectly frank, the act of building alone is hardly sufficient. The best way to predict the future may instead be to be among those who mythologize what’s built, who tell certain stories, who craft and uphold the dogma about what is built and how it’s used.
To a certain extent, the version of “personal computing” espoused by Kay and by PARC has been triumphant. That is, PLATO’s model – networked terminals that tied back to a central machine – was not. Perhaps it’s worth considering how dogmatic computing has become about “personal” and “personalization” – what its implications might be for the shape of programming and for education technology, sure, but also what it means for the kinds of values and communities that are built without any sort of “friendly glow.”

The Song Room is a national not-for-profit organisation that brightens the futures of Australia’s most disadvantaged children with tailored, high-quality music and arts programs, delivered in partnership with schools across the country.

Their work happens both through in-school educational programs that bring artists into schools and through professional development for teachers. They reach tens of thousands of students each week and have a large body of research to back up their efforts. They have also been active in developing a large collection of online resources and programs, ARTS:LIVE.

It’s rather impressive to see this amount of effort and support to promote arts programs in schools, especially knowing how thoroughly such programs have been gutted in the US.

This was a riff on the time Dean Shareski asked me if there were DS106 assignments that would work for science or math. My answer was, “not off the shelf” but it just takes a little bit of imagination for a teacher to take one and recast it to their needs.

I have examples there of how the Four Icon / One Story assignment was recast by three elementary school teachers, and how I did it myself for a session with 2nd grade students.

My idea then was to create an annotated list of digital storytelling activities organized by some larger buckets of activity types. The examples were mostly from DS106, some other Daily ____ sites, and a few from my other projects. So it’s an Alan-centric view. But it was a useful exercise to think about the DS106 assignments in a different grouping than by media type.

After this we had an hour more of discussion about teaching and running programs online.

I really appreciated the energy and interest of this group of maybe 15+, and even more after hearing they were getting ready for an event the next day. Some 400+ kids from 8 schools who participate in the Song Room were performing at Melbourne Town Hall. Since that is just around the corner from where I am staying, I had to go see it.

Over the last couple of weeks, I’ve got back into the speaking thing, firstly at an OU TEL show’n’tell event, then at a Parliamentary Digital Service show’n’tell.

In each case, the presentation was based around some of the things you can do with notebooks, one of which was using the RISE extension to run a notebook as an interactive slideshow: cells map on to slides or slide elements, and code cells can be executed live within the presentation, with any generated cell outputs being displayed in the slide.
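A quick aside on how that mapping works: RISE, like nbconvert’s slides exporter, reads a “slideshow” entry in each cell’s metadata to decide whether a cell starts a new slide, a subslide, or a fragment. As a minimal sketch (the filename talk.ipynb is just a placeholder), you can set that metadata programmatically with nbformat:

```python
import nbformat

# Tag every cell in a notebook as its own slide via the "slideshow"
# cell metadata that RISE (and nbconvert's slides exporter) read.
nb = nbformat.read("talk.ipynb", as_version=4)  # "talk.ipynb" is a placeholder
for cell in nb.cells:
    # Other slide_type values include "subslide", "fragment", "skip", "notes"
    cell.metadata["slideshow"] = {"slide_type": "slide"}
nbformat.write(nb, "talk.ipynb")
```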

Which brings me to Binderhub. Originally known as MyBinder, Binderhub takes the MyBinder idea of building a Docker image based on the build specification and content files contained in a public Github repository, and launching a Docker container from that image. Binderhub has recently moved into the Jupyter ecosystem, with the result that there are several handy spin-off command line components; for example, jupyter-repo2docker lets you build, and optionally push and/or launch, a local image from a Github repository or a local repository.
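As a minimal sketch of the build-only workflow (the image name here is made up, and the --no-run / --image-name flags are as per recent repo2docker releases, so check your own version):

```python
import subprocess

# Build (but don't run) a Docker image from a public GitHub repo,
# equivalent to: jupyter-repo2docker --no-run --image-name showntell-demo <url>
subprocess.run(
    [
        "jupyter-repo2docker",
        "--no-run",                        # build the image only
        "--image-name", "showntell-demo",  # made-up local image tag
        "https://github.com/psychemedia/showntell",
    ],
    check=True,
)
```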

To follow on from my OU show’n’tell, I started putting together a set of branches on a single repository (psychemedia/showntell) that will eventually(?!) contain working demos of how to use Jupyter notebooks as part of a “generative document” workflow in particular topic areas. For example, for authoring texts containing rich media assets in a maths subject area, or music. (The environment I used for the show’n’tell was my own build (checks to make sure I turned that cloud machine off so I’m not still paying for it!), and I haven’t got working Binderhub environments for all the subject demos yet. If anyone would like to contribute to setting up the builds, or adding to subject specific demos, please get in touch…)

I also prepped for the PDS event by putting together a Binderhub build file in my psychemedia/parlihacks repo so (most of) the demo code would work on Binderhub. I think the only thing that doesn’t work at the moment is the Shiny app demo? The build also includes an RStudio environment, launched from the Jupyter notebook’s New menu. (For an example, see the binder-examples/dockerfile-rstudio demo.)

So – long and short of that – you can create multiple demo environments in a single Github repo using a different branch for each demo, and then launch them separately using Binderhub.
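Generating the corresponding launch links is then trivial – a sketch, assuming the /v2/gh/&lt;owner&gt;/&lt;repo&gt;/&lt;ref&gt; URL scheme of the public mybinder.org deployment, with made-up branch names:

```python
# Print one Binder launch URL per demo branch of a single repo.
BINDER = "https://mybinder.org/v2/gh"  # public BinderHub deployment
REPO = "psychemedia/showntell"

for branch in ["maths", "music"]:      # made-up demo branch names
    print("{}/{}/{}".format(BINDER, REPO, branch))
```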

What else…?

Oh yes, a new extension gives you a Shiny-like workflow for creating simple apps from a Jupyter notebook: appmode. This seems to complement the Jupyter dashboards approach, by providing an “app view” of a notebook that displays the content of markdown cells and code cell outputs, but hides the code cell contents. So if you’ve been looking for a Jupyter notebook equivalent to R/Shiny app development, this may get you some of the way there… (One of the nice things about the app view is that you can easily “View Source” – and modify that source…)
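To give a flavour of what an app view presents well – this is my own minimal sketch rather than an official appmode example, and it assumes a recent ipywidgets is installed – a cell like the following renders as a tiny interactive app once the code itself is hidden:

```python
import ipywidgets as widgets
from IPython.display import display

# In an "app view" this code would be hidden; only the rendered
# widgets and their output are shown to the user.
slider = widgets.IntSlider(description="n", min=1, max=10, value=3)

def show_squares(n):
    print([i * i for i in range(1, n + 1)])

out = widgets.interactive_output(show_squares, {"n": slider})
display(slider, out)
```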

Possibly related to the appmode way of doing things, one thing I showed in the PDS show’n’tell was how notebooks can be used to define simple API services using the jupyter/kernel_gateway (example). These seem to run okay – locally at least – inside Binderhub, although I didn’t try calling a Jupyter API service from outside the container. (Maybe they can be made publicly available via the jupyterhub/nbserverproxy?) Why’s this relevant to appmode? My thinking is that, architecturally, you could separate out concerns, having one or more notebooks running an API that is consumed from the appmode notebook.
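For reference, here’s a minimal sketch of the sort of annotated cell that kernel_gateway’s notebook-http mode turns into an endpoint. The /hello route is a made-up example; REQUEST is a JSON string the gateway injects at call time, so the fallback below is only there to let the cell also run standalone:

```python
# GET /hello
# (in notebook-http mode, the annotation comment above marks this cell
# as the handler; kernel_gateway injects REQUEST, a JSON string, per call)
import json

REQUEST = globals().get("REQUEST", json.dumps({"args": {}}))  # standalone fallback
req = json.loads(REQUEST)
name = req.get("args", {}).get("name", ["world"])[0]  # ?name=... query arg
print(json.dumps({"message": "hello, " + name}))      # stdout becomes the response
```

A gateway seeded with that notebook – something along the lines of jupyter kernelgateway --KernelGatewayApp.api=kernel_gateway.notebook_http --KernelGatewayApp.seed_uri=api.ipynb – should then serve GET /hello.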

Another recent announcement came from Google in the form of Colaboratory, a “research project created to help disseminate machine learning education and research”. The environment is “a Jupyter notebook environment that requires no setup to use”, although it does require registration to run notebook cells, and there appears to be a waiting list. The most interesting thing, perhaps, is the ability to collaboratively work on notebooks shared with other people across Google Drive. I think this is separate from the jupyterlab-google-drive initiative, which is looking to offer a similar sort of shared working, again through Google Drive?

There are other hosted notebook servers relevant to education too: CoCalc (previously SageMathCloud) offers a free way in, as does gryd.us if you have a .edu email address. pythonanywhere.com/ offers notebooks to anyone on a paid plan.

It also seems like there are services starting to appear that offer free notebooks as well as compute power for research/scientific computing on a model similar to CoCalc (free tier in, then buy credits for additional services). For example, Kogence.

For sharing notebooks, I also just spotted Anaconda Cloud, which looks like it could be an interesting place to browse every so often…

Imagine your classrooms, campus, students not even sure if they have water, electricity, food, much less internet, just because the place they live was twice in the path of hurricanes. Imagine you are a citizen of the United States of America and hear its leader blame you for the problem while delivering a fraction of the aid seen in Texas and Florida.

Imagine your syllabus when, as a university professor, you are hoping just to contact students and figure out how to continue classes.

Imagine.

Like several of my colleagues, I have been very worried about our friend and colleague Antonio Vantaggiato (@avunque), and relieved when he was finally able to message us that he was okay.

Frustrated. Friends fleeing. The power authority says nothing about when service will be reestablished (three weeks without it now), and LTE from AT&T is snail-slow.

As a very small thing I thought we could do now, I suggested we start a campaign of mailing postcards to him and his students just to say that, unlike our President… we care.

Send a postcard to Antonio and his students at Universidad de Sagrado Corazon

It’s really simple. Do you know what the postage is to mail to Puerto Rico? Easy, the same as mailing something to me in Arizona or some clown in Washington DC. You see, Puerto Rico is in the United States of America (someone mail that to the clown).

We are thinking about doing a podcast / storytelling project about teaching and learning under these conditions.

These are small things, but as Antonio and his university develop their continuation plans, we will wait until he can share some more specific needs of his students.

In my month in Puerto Rico, I experienced an overwhelming amount of friendliness, generosity, and spirit despite what were challenging conditions before the Hurricane. And I am horrified by the tone and lack of empathy our President is sending out; he does not speak for me.

I’d like to think a pile of postcards might let our friends and fellow US citizens in Puerto Rico know that others feel like me.

As I was scrambling to find a first card to send, I only had a few left from some old movie postcards I bought last year. But the one I did pull was for the movie El Puente de Waterloo (The Waterloo Bridge) — so with some editing via Sharpie pen, I want to declare it as part of a Bridge of Care to Puerto Rico.

I recently heard of the Copenhagen Letter (again: if you work in edtech and have not read this, you probably should) and decided to annotate it in class with my students and invite others to participate as well. There’s an interesting conversation going on in the annotations, too.

Annotating Privacy Policies #DigCiz

Earlier this summer, the team doing #DigCiz proposed to annotate the privacy policy of Slack. I think this is a really useful exercise, and a way to help us all think critically about the terms of service and privacy policies of different tools we use – sometimes looking at how others have annotated a privacy policy will point us to things we had not noticed on our own (these documents are often inhospitable in terms of jargon and length, but can seem less daunting when you see how others are responding to them).

Are there interesting recent/upcoming annotathons that you know of? Tell us in the comments.

Web design trends change all the time. By the time you find a designer you can work with, decide on a look everyone can agree with, get the work done, and finally go live, what’s cool and fresh and trendy has changed. Enter brutalist websites.

What’s that, you ask? It’s when you buck the trends and purposefully design your website to be ugly. And if it’s not straight-up ugly, it’s at least free of frivolous/superfluous design elements.

Brutalist websites are a relatively new thing among purposeful designs, but a lot of them draw inspiration from the early days of the internet. The days when Angelfire and Geocities were the pinnacle of awesomeness. The days when the only HTML you needed to know was a href and img src.

The web has moved on since then, but good (or maybe good-bad?) design hasn’t. On one hand, brutalism is about function over form. If it works, let it work. On the other, it’s about being punk af and making the user work for whatever they get out of your site.

No matter which way you wanna go with it, I’m sure you can find a way to brutalize your next design project after you’ve checked out these glorious monstrosities.

1. craigslist.org

Because it works. That’s what is important. Part of brutalism is UX, and CL has that down. You can find what you need to buy or sell without any fuss or muss or extraneous moving parts.

2. Konsept83.com

Like Craigslist, Konsept83 looks to the early days of the internet for inspiration. Grey background, typewriter font (a brutalist website staple), and all the links that spell out precisely what you’re clicking into.

The interior pages are just as spare – they just have images embedded.

3. pictureshow.tv

Pictureshow.tv only works in landscape mode on mobile (or on desktops). It’s a video series designed to look and feel like an old VHS tape. Even the other pages, when you click into them, act like VCR setup menus.

But you know what? The site works, you know what to do, and you’re not lost for a moment.

But man. It’s ugly. And beautiful at the same time. And the best part is that you have no question about how to use it.

4. chris.bolin.co

Chris Bolin designed this site to be as brutal to web users as possible: you gotta disconnect to connect. Go offline and, automagically, the content appears – a simple message about simplicity, presented simply.

5. wolftonechambers.com

Talk about brutal. There is content here. Lots of it. Good info that’s well written. But it’s all but hidden from us by a sea of ASCII.

You can’t say this is visually appealing, nor is it terribly functional. But it’s memorable. It’s strangely clean despite all the clutter, and the weirdest part is that after you scroll a bit, you do see how it works and how to find info with no directions. Which means that, again weirdly, this is how UX should work.

6. Allan Yu

Allan Yu uses brutalism to show the revisions he makes to his site as though they’re physical revisions in a notebook. It makes the overall experience less clean, but the messiness adds to the user experience: you can watch him grow not only in style, but in ability and creativity.

Not a design you’d find on a major news outlet (wouldn’t it just be fun to see editorial marks on CNN?), but for a designer or agency whose site updates relatively infrequently, this style would probably work well.

7. Brutalist Redesigns

Maintaining form and function while stripping away everything superfluous is hard to do all at once. So Pierre Buttin took some of the cleanest, most popular apps (like Facebook and Instagram and Snapchat) and redesigned them as examples of how you can keep what works while removing flash that adds nothing to the experience.

9. wekflatm.kr

It’s ecommerce at its most fundamental and perfect: Here’s a chair. Buy it. Want a sofa? Okay. You can buy them over there.

No wish lists, no searches, no variations or user reviews. Just item, price, buy.

It ain’t pretty, but it ain’t ugly, either. It’s simple. And it has personality. It’s everything you need in a website and nothing you don’t.

10. keyaar.in

I bet the first thing you’d do on this site is click a shape. Because shapes. Then you’ll read the labels – which don’t really tell you much.

Thing is, though, you know pretty much what each one is when you click it. Work is a gallery of, you guessed it, work. NSFW is a blog, but even before that, you know you’re getting into something personal, if not intimate. And KL11? The design agency themselves. All the info you need is there.

What more do you people want out of a website?

Brutal, shmutal

You may not want a suite of brutalist websites in your portfolio. That’s fine. This design scheme is absolutely not for everyone, and some clients are sure to balk when you pitch an ugly site.

But there are lots of design lessons you can learn from these sites.

I do challenge you, though, to try to make a brutalist site. Boiling the whole thing all the way down to the essentials will make you confront your preconceptions, as well as your strengths and weaknesses. Sure, it’ll be brutal, but you’ll come out of it a stronger designer.

And maybe a stronger person, too.

What do you think about brutalist websites? Let us know your thoughts on this trend in the comments.

How To Participate In Digipo (September 2017 version)

Every time I say I can’t make it easier to participate in Digipo, I find a way to make it easier.

The current process involves no skills greater than knowing how to work a word processor, and (more importantly) allows students to participate anonymously if they wish, without having to sign up for Google accounts or have edits tracked under pseudonyms. We accomplish this through a Microsoft Word template and by releasing the files into the public domain.

You can of course use a more complex process, sign your name to the article, and use Google Docs as your central tool. Depending on your needs and skill level you may want to do that. It’s just not required anymore.

Pick a question to investigate from our list of 300+ questions, or make up your own.

Have your students download this Microsoft Word template that guides them through an investigation of a question. Apply the skills from the book.

Do whatever sort of grading, assessment, or feedback you want.

Take student reports where the students have agreed to release them into the public domain, and zip up the Word documents. Mail them to michael.caulfield@wsu.edu. Make sure you introduce who you are, what the class is about, and a bit about your experience, as I do not open zip files from random people. Also give me a blurb about how your class would like to be identified on the site (students have the option of remaining anonymous too). For verification purposes, send it from your university account. I may email back to verify.

I’ll put them on the Digipo site in a subdirectory with a bit about your class and give you a password that allows them to edit online going forward.

At a later point we’ll assemble a small panel of professors who will go through the student work and choose ones to “promote” to the main directory based on quality. The key question reviewers will ask is whether the document provides better information than at least one of the top ten Google results for the question.

This talk was delivered at MIT for Justin Reich’s Comparative Media Studies class “Learning, Media, and Technology.” The full slide deck is available here.

Thank you for inviting me to speak to your class today. I’m really honored to be here at the beginning of the semester, as I’m not-so-secretly hoping this gives me a great deal of power and influence to sow some seeds of skepticism about the promises you all often hear – perhaps not so much in this class, to be fair, as in your other classes, in the media, in the world at large – about education technology.

Those promises can be pretty amazing, no doubt: that schools haven’t changed in hundreds if not thousands of years and that education technology is now poised to “revolutionize” and “disrupt”; that today, thanks to the ubiquity of computers and the Internet (that there is “ubiquity” is rarely interrogated) we can “democratize,” “unbundle,” and/or “streamline” the system; that learning will as a result be better, cheaper, faster.

Those have always been the promises. Promises largely unfulfilled.

It’s important – crucial even – that this class is starting with history. I’ve long argued that ignorance of this history is part of the problem with education technology today: that its promises of revolution and innovation come with little to no understanding of the past – not just the history of what technologies have been adopted (or have failed to be adopted) in the classroom before, but the history of how education itself has changed in many ways and in some, quite dramatically, with or without technological interventions. (I’d add too that this is a problem with tech more broadly – an astounding and even self-congratulatory ignorance of the history of the industries, institutions, practices folks claim they’re disrupting.)

I should confess something here at the outset of my talk that’s perhaps a bit blasphemous. I recognize that this class is called “Learning, Media, and Technology.” But I’m really not interested in “learning” per se. There are lots of folks – your professor, for starters – who investigate technology and learning, who research technology’s effect on cognition and memory, who measure and monitor how mental processes respond to tech, and so on. That’s not what I do. That’s not what my work is about.

It’s not that I believe “learning” doesn’t matter. And it’s not that I think “learning” doesn’t happen when using a lot of the ed-tech that gets hyped – or wait, maybe I do think that.

Rather, I approach “learning” as a scholar of culture, of society. I see “learning” as a highly contested concept – a lot more contested than some researchers and academic disciplines (and entrepreneurs and journalists and politicians) might have you believe. What we know about knowing is not settled. It never has been. And neither neuroscience nor brain scans, for example, move us any closer to that. After all, “learning” isn’t simply about an individual’s brain or even body. “Learning” – or maybe more accurately “learnedness” – is a signal; it’s a symbol; it’s a performance. As such, it’s judged by and through and with all sorts of cultural values and expectations, not only those that we claim to be able to measure. What do you know? How do you know? Who do you know? Do you have the social capital and authority to wield what you know or to claim expertise?

My work looks at the broader socio-political and socio-cultural aspects of ed-tech. I want us to recognize ed-tech as ideological, as a site of contested values rather than a tool that somehow “progress” demands. Indeed, that’s ideology at work right there – the idea of “progress” itself, a belief in a linear improvement, one that’s intertwined with stories of scientific and technological advancement as well as the advancement of certain enlightenment values.

I’m interested not so much in how ed-tech (and tech more broadly) might change cognition or learning, but in how it will change culture and power and knowledge – systems and practices of knowing. I’m interested in how ed-tech (and tech more broadly) will change how we imagine education – as a process, as a practice, as an institution – and change how we value knowledge and expertise and even school itself.

I don’t believe we live in a world in which technology is changing faster than it’s ever changed before. I don’t believe we live in a world where people adopt new technologies more rapidly than they’ve done so in the past. (That is an argument for another talk, for another time.) But I do believe we live in an age where technology companies are some of the most powerful corporations in the world, where they are a major influence – and not necessarily in a positive way – on democracy and democratic institutions. (School is one of those institutions. Ideally.) These companies, along with the PR that supports them, sell us products for the future and just as importantly weave stories about the future.

These products and stories are, to borrow a phrase from sociologist Neil Selwyn, “ideologically-freighted.” In particular, Selwyn argues that education technologies (and again, computing technologies more broadly) are entwined with the ideologies of libertarianism, neoliberalism, and new forms of capitalism – all part of what I often refer to as the “Silicon Valley narrative” (although that phrase, geographically, probably lets you folks here at MIT off the hook for your institutional and ideological complicity in all this). Collaboration. Personalization. Problem-solving. STEM. Self-directed learning. The “maker movement.” These are all examples of how ideologies are embedded in ed-tech trends and technologies – in their development and their marketing. And despite all the talk of “disruption”, these mightn’t be counter-hegemonic at all, but rather serve the dominant ideology and further one of the 21st century’s dominant industries.

I want to talk a little bit today about technology and education technology in the 20th century – because like I said, history matters. And one of the ideological “isms” that I think we sometimes overlook in computing technologies is militarism. And I don’t just mean the role of Alan Turing and codebreakers in World War II or the role of the Defense Department’s Advanced Research Projects Agency in the development of the Internet (although both of those examples – cryptography and the Internet – do underscore what I mean when I say infrastructure is ideological). C3I – command, control, communications, and intelligence. Militarism, as an ideology, privileges hierarchy, obedience, compliance, authoritarianism – it has shaped how our schools are structured; it shapes how our technologies are designed.

The US military is the largest military in the world. That also makes it one of the largest educational organizations in the world – “learning at scale,” to borrow a phrase from this course. The military is responsible for training – basic training and ongoing training – of some 1.2 million active duty soldiers and some 800,000 reserve soldiers. That training has always been technological, because soldiers have had to learn to use a variety of machines. The military has also led the development and adoption of educational technologies.

Take the flight simulator, for example.

One of the earliest flight simulators – and yes, this predates the Microsoft software program by over fifty years, but postdates the Wright Brothers by only about twenty – was developed by Edwin Link. He received the patent for his device in 1931, a machine that replicated the cockpit and its instruments. The trainer would pitch and roll and dive and climb, powered by a motor and organ bellows. (Link’s family owned an organ factory.)

Although Link’s first customers were amusement parks – the patent was titled a “Combination training device for student aviators and entertainment apparatus” – the military bought six in June of 1934, after a series of plane crashes earlier that year immediately following the US Army Air Corps’ takeover of US Air Mail service. Those accidents had revealed the pilots’ lack of training, particularly under night-time or inclement weather conditions. By the end of World War II, some 500,000 pilots had used the “Link Trainer,” and flight simulators have since become an integral part of pilot (and subsequently, astronaut) training.

(There’s a good term paper to be written – you are writing a term paper, right? – about the history of virtual reality and the promises and presumptions it makes about simulation and learning and experiences and bodies. But mostly, I’d argue if I were writing it, that much of VR in classrooms today does not have its origins in the Link Trainer as much as in the educational films that you read about in Larry Cuban’s Teachers and Machines. But I digress.)

The military works along a different principle for organizing and disseminating knowledge than does, say, the university or the library. The military is largely interested in teaching “skills.” Or perhaps more accurately, this is how military training is largely imagined and discussed: “skills training.” (Officer training, to be fair, is slightly different.) The military is invested in those skills – and in the teaching of those skills – being standardized. All this shapes the kinds of educational software and hardware that gets developed and adopted.

One of the challenges the military has faced, particularly in the twentieth century, is helping veterans to translate their skills into language that schools and civilian hiring managers understand. This is, of course, the origin of the GED test, which was developed during WWII as a way to assess whether those soldiers who’d dropped out of high school in order to enlist had attained high-school level skills – to demonstrate “competency” rather than rely on “seat time,” to put this in terms familiar to educational debates today. There has also been the challenge of translating skills within the military itself – say, from branch to branch – and within and across other federal agencies. New technologies, to a certain extent, have complicated things by introducing often incompatible software systems in which instruction occurs. And at the end of the day, the military demands regimentation, standardization – culturally, technologically.

I just want to lay out an abbreviated timeline here to help situate some of my following remarks:

I’m not suggesting here that the Web marks the origins of ed-tech. Again, you’ve read Larry Cuban’s work; you know that there’s a much longer history of teaching machines. But in the 1990s, we did witness a real explosion in not just educational software, but in educational software that functioned online.

In January of 1999, President Clinton signed Executive Order 13111 – “Using Technology To Improve Training Opportunities for Federal Government Employees.” Here’s the opening paragraph, which I’m going to read – apologies – simply because it sounds as though it could be written today:

Advances in technology and increased skills needs are changing the workplace at an ever increasing rate. These advances can make Federal employees more productive and provide improved service to our customers, the American taxpayers. We need to ensure that we continue to train Federal employees to take full advantage of these technological advances and to acquire the skills and learning needed to succeed in a changing workplace. A coordinated Federal effort is needed to provide flexible training opportunities to employees and to explore how Federal training programs, initiatives, and policies can better support lifelong learning through the use of learning technology.

One of the mandates of the Executive Order was to:

in consultation with the Department of Defense and the National Institute of Standards and Technology, recommend standards for training software and associated services purchased by Federal agencies and contractors. These standards should be consistent with voluntary industry consensus-based commercial standards. Agencies, where appropriate, should use these standards in procurements to promote reusable training component software and thereby reduce duplication in the development of courseware.

This call for standards – and yes, the whole idea of “standards” is deeply ideological – eventually became SCORM, the Sharable Content Object Reference Model (and one of the many acronyms that, if you work with education technology, will make people groan – and groan almost as much as a related acronym does: the LMS, the learning management system).

Indeed, SCORM and the LMS – their purposes, their histories – are somewhat inseparable. (And I want you to consider the implications of that: that the demands of the federal government and the US military for a standardized “elearning” experience has profoundly shaped one of the foundational pieces of ed-tech that is used today by almost all colleges and increasingly even K–12 schools.)

The SCORM standard was designed, in part, to make it possible to easily move educational content from one learning management system to another. Among the goals: reusability, interoperability, and durability of content and courses. (I’m not going to go into too much technical detail here, but I do want to recognize that this did require addressing some significant technical challenges.) SCORM had three components: content packaging, runtime communications, and course metadata. The content packaging refers to the bundling of all the resources needed to deliver a course into a single ZIP file. The runtime communications include the runtime commands for communicating student information to and from the LMS, as well as the metadata for storing information on individual students. And the course metadata, obviously, includes things like course title, description, keywords, and so on. SCORM, as its full name implies, served to identify “sharable content objects” – that is, the smallest units in a course that contain meaningful learning content by themselves – content objects that might be extracted and reused in another course. The third version of SCORM, SCORM 2004, also introduced sequencing, identifying the order in which these content objects should be presented.
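To make the “content packaging” piece concrete, here is a minimal, deliberately oversimplified sketch – not a conformant SCORM package, just the shape of the thing: course resources plus an imsmanifest.xml describing them, bundled into one ZIP file.

```python
import textwrap
import zipfile

# A toy stand-in for a SCORM content package: real manifests declare
# schema versions, organizations, items, and resource dependencies,
# and real packages must pass conformance testing.
manifest = textwrap.dedent("""\
    <?xml version="1.0"?>
    <manifest identifier="com.example.course">
      <organizations>
        <organization><item identifierref="sco1"/></organization>
      </organizations>
      <resources>
        <resource identifier="sco1" href="lesson1.html"/>
      </resources>
    </manifest>
    """)

with zipfile.ZipFile("course.zip", "w") as pkg:
    pkg.writestr("imsmanifest.xml", manifest)          # required manifest name
    pkg.writestr("lesson1.html", "<h1>Lesson 1</h1>")  # one "sharable content object"
```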

The implications of all this are fairly significant, particularly if we think about the SCORM initiative as something that’s helped, almost a decade ago, to establish and refine what’s become the infrastructure of the learning management system and other instructional software, as something that’s influenced the development as well of some of the theories of modern instructional design. (Theory is, of course, ideology. But, again, so is infrastructure.) The infrastructure of learning software shapes how we think about “content” and how we think about “skills” and how we think about “learning.” (And “we” here, to be clear, includes a broad swath of employers, schools, software makers, and the federal government – so that’s a pretty substantial “we.”)

I will spare you the details of decades worth of debates about learning objects. It’s important to note, however, that there are decades of debate and many, many critics of the concept – Paulo Freire, for example, and his critique of the “banking model of education.” There are the critics too who argue for “authentic,” “real-world” learning, something that almost by definition learning objects – designed to move readily from software system to software system, from course to course, from content module to content module, from context to context – can never offer. I’d be remiss if I did not mention the work of open education pioneer David Wiley and what he has called the “reusability paradox,” which, to summarize, states that if a learning object is pedagogically useful in a specific context, it will not be useful in a different context. Conversely, the most decontextualized learning objects are reusable in many contexts, but those are not pedagogically useful.

But like I said at the outset, in my own line of inquiry I’m less interested in what’s “pedagogically useful” than I am in what gets proposed by industry and what becomes predominant – the predominant tech, the predominant practice, the predominant narrative, and so on.

Learning objects have been blasted by theorists and practitioners, but they refuse to go away. Why?

The predominant narratives today about the future of learning are all becoming deeply intertwined with artificial intelligence. We should recognize that these narratives have been influenced by decades of thinking in a certain way about information and knowledge and learning (in humans and in machines): as atomized learning objects and as atomized, standardized skills.

There’s a long history of criticism of the idea of “intelligence” – its origins in eugenics; its use as a mechanism for race- and gender-based exclusion and sorting. It’s a history that educational psychology, deeply intertwined with the development of measurements and assessments, has not always been forthright about. Education technology, with its origins in educational psychology, is implicated in this. And now we port this history of “intelligence” – one steeped in racism and bias – onto machines.

But we’re also porting a history of “skills” onto machines as well. This is, of course, the marketing used for Amazon’s Alexa. Developers “build” skills. They “teach” skills to the device. And it’s certainly debatable whether many of these are useful at all. But that’s not the only way to think about teaching machines; whether or not something is “pedagogically useful,” there are reasons why the stories about it stick. The narrative about AI and skills is something to pay attention to – particularly alongside larger discussions about the so-called “skills gap.”