Driverless Ed-Tech: The History of the Future of Automation in Education

Audrey Watters

30 Mar 2017

This talk was presented at The University of Edinburgh’s Moray House School of Education.

Let me begin with a story. In December 2012 – we all remember 2012 right? “The Year of the MOOC” – I was summoned to Palo Alto, California for a small gathering to discuss the future of teaching, learning, and technology. I use the verb “summoned” deliberately.

The event was organized by Sebastian Thrun, who at the beginning of the year had announced that he was resigning his full-time professor position at Stanford in order to launch Udacity, his online education startup. It was held at Stanford in its artificial intelligence lab, which was a slightly awkward venue, as Thrun’s office – he still had an office on campus, of course – was right next to those of Daphne Koller and Andrew Ng, his fellow Stanford AI professors who’d announced in April that they were launching a competitor company, Coursera.

When Thrun first invited us all to this event – about ten of us – he promised that at the end of the weekend, we would take a ride in a zeppelin over San Francisco. And I thought “like hell I will.” I’ve seen A View to a Kill. I know what happened to the dissenters who got into a zeppelin in that movie. But as it turned out, the zeppelin company had gone out of business – I imagine that many people, like myself, could only think about Christopher Walken and Grace Jones’ characters and opted not to go.

So instead of a zeppelin, we got to ride in one of Google’s self-driving cars, which was of course the project that Thrun had been working on when he gave his famous TED Talk in 2011 – and that, in turn, was where he heard Salman Khan give his famous TED Talk. It was when and where Thrun decided that he needed to rethink his work as a Stanford professor in order to “scale” education.

Thrun “drove.” He steered the car onto I-280 and then let the car take over, and I have to say – and I say this as a professional skeptic of technology – it was this strange combination of the utterly banal and the utterly impressive. (It was 2012, I should reiterate, so it was right at the beginning of all this hype about a future of autonomous vehicles.)

The car was covered in cameras and sensors, inside and out – even a QR code on the driver’s side glove compartment that you were supposed to scan to sign Google’s Terms of Service before riding. Seemingly the most dangerous element of our little jaunt was that other drivers swerved and slowed down as they stared at the car, with its giant camera on top and Google logo on the sides. There was Thrun with his hands off the wheel, feet off the pedals, eyes not on the road, sometimes turning around entirely to face the passengers in the back seat, explaining how the car (and Google, of course) collected massive amounts of data in order to map the road and move efficiently along it.

Efficiency. That’s the goal of the self-driving car. (You’re free to insert here some invented statistic about the percentage of space and energy that are wasted by human-driven traffic and human driving patterns and that will be corrected by roads full of autonomous vehicles. I vaguely recall Thrun doing so at least.)

It was then and there on that trip that I had a revelation about how many entrepreneurs and engineers in Silicon Valley conceive of education and the role of technology in reshaping it: that is, if you collect enough data – lots and lots and lots of data – you can build a map. This is their conceptual framework for visualizing how “learners” (and that word is used to describe various, imagined students, workers, and consumers) get from here to there, whether it’s through a course or through a degree program or towards a job. With enough data and some machine learning, you can identify – statistically – the most common obstacles. You can plot the most frequently traveled path and the one that folks traverse most quickly. You can optimize. And once you’ve trained those algorithms, you can apply them everywhere. You can scale.
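To make the metaphor concrete: the path-optimization logic I’m describing can be sketched in a few lines. (This is an invented toy – the module names and the data are hypothetical, not anyone’s actual system; real versions wrap the same counting in far more statistical machinery.)

```python
from collections import Counter

# Hypothetical event logs: each tuple is one learner's path through course modules.
learner_paths = [
    ("intro", "loops", "functions", "project"),
    ("intro", "functions", "loops", "project"),
    ("intro", "loops", "functions", "project"),
    ("intro", "loops", "project"),
]

def most_traveled_path(paths):
    """Return the path the most learners followed - the 'map' the metaphor implies."""
    return Counter(paths).most_common(1)[0][0]

print(most_traveled_path(learner_paths))
```

Everything else – the "obstacles," the "optimization," the "scale" – is elaboration on this basic move: count traversals, then route everyone down the most-traveled road.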

We can debate this model (we should debate this model) – how it works or doesn’t work when applied to education. (Is learning “like a map”? Is learning an engineering problem? Is the absence of “data” or algorithms really a problem?) But one of the most important things to remember is that this is (largely) a computer scientist’s model. It’s the model of human learning by someone who claims expertise in machine learning, a field of study which has aspired to model if not surpass the human mind. And that makes it a model in turn that rests on a lot of assumptions about “learning” – both how humans “learn” and how machines “learn” to conceptualize and navigate their worlds.

It’s a model. It’s a metaphor.

It’s an aspiration – a human aspiration, to be clear. This isn’t what machines “want.” (Machines have no wants.)

I think many of us quickly recognized back in 2012 that, despite the AI expertise in the executive offices of these MOOC companies, there wasn’t much “artificial intelligence” beyond the subject matter of a few of their course offerings; there wasn’t much “intelligence” in their assessments or in their course recommendation engines. What these MOOCs were, nonetheless – and still are – is a set of massive online honeypots into which we’ve been lured, registering and watching and clicking in order to generate massive education datasets.

Perhaps with this data, the MOOC providers can build a map of professional if not cognitive pathways. Perhaps. Someday. Maybe. In the meantime, these companies continue to collect a lot of “driving” data.

Who controls the mapping data and who controls the driving data and who controls the autonomous vehicle patents are, of course, a small part of the legal and financial battles that are brewing over the future of autonomous vehicles. Google versus Uber. Google versus Didi (the Chinese ride-hailing company). We can speculate, I suppose, about what the analogous battles might be in education – which corporation will sue which corporation, claiming they “own” learning data and learning roadmaps and learning algorithms and learning software IP.

(Spoiler alert: it won’t actually be learners – just like it’s not actually drivers – even though that’s where the interesting data comes from: not from mapping the roads, but from monitoring the traffic.)

As we were driving on the freeways around Palo Alto in the Google autonomous vehicle, someone asked Sebastian Thrun what happens if there’s an unexpected occurrence while the car is in self-driving mode. Now, the car is constantly making small adjustments – to its speed, to its distance from other vehicles. “But what would happen if, say, a tree suddenly came crashing down in the road right in front of it?” the passenger asked Thrun.

“The car would stop,” he said. The human driver would be prompted to take over. Hopefully the human driver is paying attention. Hopefully there’s a human driver.

Of course, the “unexpected” occurs all the time – on the road and in the classroom.

Recently the “ride-sharing” company Uber flouted California state regulations in order to start offering an autonomous vehicle ride-sharing service in San Francisco. The company admitted that it hadn’t addressed at least one flaw in its programming: its cars would make a right-hand turn through a bicycle lane (the equivalent of a left-hand turn here in the UK). Uber didn’t have a model for recognizing the existence of “bike lanes” (and, as such, of “cyclists”). It’s not that the car didn’t see something “unexpected”; that particular “unexpected” was not fully modeled, and the self-driving car didn’t slow, and it didn’t stop.

In this testing phase of Uber’s self-driving cars, it did still have a driver sitting behind the wheel. Documents recently obtained by the tech publication Recode revealed that Uber’s autonomous vehicles drove, on average, less than a mile without requiring human intervention.

The technology simply isn’t that good yet.

At the conclusion of our ride, Thrun steered the Google self-driving car back to his house, where he summoned a car service to take us back to our hotel. Giddy from the experience, one professor boasted to the driver about what we’d just done. The driver frowned. “Oh,” he said. “So, you just put me out of a job?”

“Put me out of a job.” “Put you out of a job.” “Put us all out of work.” We hear that a lot, with varying levels of glee and callousness and concern. “Robots are coming for your job.”

We hear it all the time. To be fair, of course, we have heard it, with varying frequency and urgency, for about 100 years now. “Robots are coming for your job.” And this time – this time – it’s for real.

I want to suggest – and not just because there are flaws with Uber’s autonomous vehicles (and there was just a crash of a test vehicle in Arizona last Friday) – that this is not entirely a technological proclamation. Robots don’t do anything they’re not programmed to do. They don’t have autonomy or agency or aspirations. Robots don’t just roll into the human resources department of their own accord, ready to outperform others. Robots don’t apply for jobs. Robots don’t “come for jobs.” Rather, business owners opt to automate rather than employ people. In other words, this refrain that “robots are coming for your job” is not so much a reflection of some tremendous breakthrough (or potential breakthrough) in automation, let alone artificial intelligence. Rather, it’s a proclamation about profits and politics. It’s a proclamation about labor and capital.

And this is as true in education as it is in driving.

As Recode wrote in that recent article,

Successfully creating self-driving technology has become a crucial factor to Uber’s profitability. It would allow Uber to generate higher sales per ride since it would keep all of the fare. Uber has currently suffered losses in some markets partly because of having to offer subsidies to attract drivers. Computers are cheaper in the long run.

“Computers are cheaper in the long run.” Cheaper for whom? Cheaper how?

Well, robots don’t take sick days. They don’t demand retirement or health insurance benefits. You tell them the rules, and they obey the rules. They don’t ask questions. They don’t unionize. They don’t strike.

A couple of years ago, there was a popular article in circulation in the US that claimed that the most common occupation in every state is “truck driver.” The data is a little iffy – the US is a service economy, not a shipping economy – but its explanation for why “truck driver” tops the list is still fairly revealing: unlike other occupations, the work of “truck driver” has not been affected by globalization, the article claimed, and it has not (yet) been affected by automation. (The CEO of Otto, a self-driving trucking company now owned by Uber, just predicted this week that AI will reshape the industry within the next ten years.)

Truck driving is also a profession – an industry – that’s been subject to decades of regulation and deregulation.

That regulatory framework is just one of the objects of derision – of “disruption” and dismantling – for the ride-sharing company Uber. Founded in 2009 – ostensibly after CEO Travis Kalanick was unable to hail a cab in Paris – the company has become synonymous with the so-called “sharing” or “freelance” economy, Silicon Valley’s latest rebranding of technologically-enhanced economic precarity and job insecurity.

“Anyone” can drive for Uber, no special training or certification required. Well, anyone who’s 21 or older and has three years of driving experience and a clean driving record. Anyone with car insurance. Anyone whose car has at least four doors and is a 2001 model or newer – Uber will also help you finance a new car, even if you have a terrible credit score. Your loan payments are simply deducted from your Uber earnings each week.

All along, Uber has been quite clear that, despite wooing drivers to its platform, using “independent contractors” is only temporary. The company plans to replace drivers with driverless cars.

Since its launch, Uber has become infamous for its opposition to regulations and to unions. (Uber has recently been using podcasts broadcast from its app in order to dissuade drivers in Seattle from unionizing, for example.)

And I’ll note here, in case this sounds too much like a talk on autonomous vehicles and not enough like one on automated education, that I am purposefully putting these two “disruptions” side by side. After all, education is heavily regulated as well – accreditation, for example, dictates who gets to offer “real” degrees. There are rules about who gets to run a “real” school. (Trump University: not a real school.) And there are rules as to who gets to be in the classroom, rules about who can teach. But any semblance of job protections – at both the K–12 and higher education levels in the US – is under attack. (Again, this isn’t simply about replacing teachers with computers because computers have become so powerful. But it is about replacing teachers nonetheless.) You no longer need a teaching degree (or any teacher training) to teach in Utah. And while certification demands might still be in place in colleges and universities, they’ve been moving towards a precarious teaching labor force for some time now. More than three-quarters of the instructional staff in US higher education are adjuncts – short-term employees with no job security and often no benefits. “Independent contractors.” Uber, for its part, encourages educators to earn a little cash on the side as drivers.

Like I said, I’m not sure I believe that the most prevalent job in the US is “truck driver.” But I do know this to be true: the largest union in the United States is the National Education Association. The other teachers’ union, the American Federation of Teachers, is the sixth largest. Many others who work in public education are represented by the second largest union in the US, the Service Employees International Union.

Silicon Valley hates unions. It loathes organized labor just as it loathes regulations (until it benefits from regulations, of course).

Now, for its part, Uber has also been accused of violating “regulations” like the Americans with Disabilities Act for refusing to pick up riders with service dogs or with wheelchairs. A fierce proponent of laissez-faire capitalism, Uber has received a fair amount of negative press for its price gouging practices – it uses what it calls “surge pricing” during peak demand, increasing the amount a ride will cost in order, Uber says, to lure more drivers out onto the road. It’s implemented surge pricing not just on holidays like New Year’s Eve but during several weather-related emergencies. The company has also actively sabotaged its rivals – attacking other ride service companies as well as journalists.

None of this makes the phrase “Uber for Education” particularly appealing. But that’s how Sebastian Thrun described his company Udacity in a series of interviews in 2015.

“At Udacity, we built an Uber-like platform,” he told the MIT Technology Review. “With Uber any normal person with a car can become a driver, and with Udacity now every person with a computer can become a global code reviewer. … Just like Uber, we’ve made the financials line up. The best-earning global code reviewer makes more than 17,000 bucks a month. I compare this to the typical part-time teacher in the U.S. who teaches at a college – they make about $2,000 a month.”

“We want to be the Uber of education,” Thrun told The Financial Times, which added that, “Mr Thrun knows what he doesn’t want for his company: professors in tenure, which he claims limits the ability to react to market demands.”

In other words: “disrupt” job protections by employing a cheap, precarious labor force to do piecemeal work until the algorithms are sophisticated enough to perform those tasks. Universities have already taken plenty of steps towards this end, without the help of algorithms or for-profit software providers. But universities are still bound by accreditation (and by tradition). “Anyone can teach” is not a stance on labor and credentialing that many universities are ready to take.

Udacity is hardly the only company that invokes the “Uber for Education” slogan. There’s PeerUp, whose founder describes the company as “Uber for tutors.” There’s ProfHire and Adjunct Professor Link, Uber for contingent faculty. There’s The Graide Network, Uber for teaching assistants and exam markers. There’s Parachute Teachers, which describes itself as “Uber for substitute teachers.”

Again, what we see here with these services are companies that market “on demand” labor as “disruption.” These certainly reflect larger trends at work dismantling the teaching profession – de-funding, de-professionalization, adjunctification, a dismissal of expertise and experience.

Anyone can teach. Indeed, the only ones who shouldn’t are probably the ones in the classroom right now – or so this story goes. The right-wing think tank The Heritage Foundation has called for an “Uber-ized Education.” The right-wing publication The National Review has called for “an Uber for Education.” Echoing some of the arguments made by Uber CEO Travis Kalanick, these publications (and many, many others) speak of ending the monopolies that “certain groups” (unions, women, liberals, I don’t know) have on education – ostensibly, I guess, on public schools – and bringing more competition to the education system.

US Secretary of Education Betsy DeVos, in a speech earlier this week, also invoked Uber as a model that education should emulate: “Just as the traditional taxi system revolted against ridesharing,” she told the Brookings Institution, “so too does the education establishment feel threatened by the rise of school choice. In both cases, the entrenched status quo has resisted models that empower individuals.”

All this is a familiar refrain in Silicon Valley, which has really cultivated its own particular brand of consumerism wrapped up in the mantle of libertarianism.

Travis Kalanick is just one of many tech CEOs who have praised the work of objectivist “philosopher” and “novelist” Ayn Rand, once changing the background of his Twitter profile to the cover of her book The Fountainhead. He told The Washington Post in a 2012 Q&A that the regulations that the car service industry faced bore an “uncanny resemblance” to Rand’s other novel, Atlas Shrugged.

(A quick summary for those lucky enough to be unfamiliar with the plot: the US has become a dystopia, overrun by regulations that cause industries to collapse and innovation to be stifled. The poor are depicted as leeches; the heroes are selfish individualists. Eventually business leaders, led by John Galt, rise up against the government. The government collapses, and Galt announces that the industrialists will rebuild the world. It is a terrible, terrible novel. It is nonetheless many libertarians’ Bible of sorts.)

I’ve argued elsewhere (and I’ve argued repeatedly) that libertarianism is deeply intertwined in the digital technologies developed by those like Uber’s Kalanick. And I don’t mean here simply or solely that these technologies are wielded to dismantle “big government” or “big unions.” I mean that embedded in these technologies, in their design and in their development and in their code, are certain ideological tenets – in the case of libertarianism, a belief in order, freedom, work, self-governance, and individualism.

That last one is key, I think, for considering the future of education and education technology – as designed and developed and coded by Silicon Valley. Individualism.

Now obviously these beliefs are evident throughout American culture and have been throughout American history. Computers didn’t cause neoliberalism. Computers didn’t create libertarians. (They just hooked them all up on Twitter.)

Indeed, there’s that particular strain of individualism that is deeply, deeply American which contributed to libertarianism and to neoliberalism and to computers in turn.

I’d argue that that strain of individualism has been a boon for the automotive industry – for car culture. Many Americans would rather drive their own vehicles than rely on – and/or fund – public transportation. I think this is both Uber’s great weakness and also, strangely, its niche: you hail a car rather than take the bus. The car comes immediately; you do not have to wait. It takes you to your destination; you needn’t stop for others. As such, you can dismiss the need to develop a public transportation infrastructure, as some US cities have done, opting instead to outsource it to Uber.

In a car, you can move at your own pace. In a car, you can move in the direction you choose – when and where you want to go. In a car, you can stop and start, sure, but most often you want to get where you’re going efficiently. In a car – and if you watch television ads for car companies, you can see evidence of this powerful imaginary most strikingly – you are truly free.

Unlike the routes of public transportation – the bus route, the subway line – routes that are prescribed for and by the collective, the car is for you and you alone. The car is another one of these radically individualistic, individualizing technologies.

The car is a prototype of sorts for the concept of “personalization.”

Branded. Controlled. Manufactured en masse. Mass-marketed. And yet somehow this symbol of the personal, the individual.

We can think about the relationship too between education systems and individualism. I believe increasingly that’s how education is defined – not as a collective endeavor or a public good, but as an individual investment.

“Personalization” is a reflection of that.

“Personalized” education promises you can move at your own pace. You can (ostensibly) move in the direction you choose. You can stop and start, sure, but most often you want to get where you’re going efficiently. With “personalized” software – and if you read publications like Edsurge, you can see evidence of this powerful imaginary most strikingly – the learner is truly free.

Unlike the routes of “traditional” education – the lecture hall, the classroom – those routes that are prescribed for and by the collective, “personalized software” is for you and you alone. The computer is a radically individualistic, individualizing technology; education becomes a radically individualistic act.

(I’ll just whisper this because I’d hate to ruin the end of the movie for folks: this freedom actually involves you driving.)

Let me pause here and note that there are several directions I could take this talk: data collection and analysis as “personalization,” for example. The New York Times just wrote about a tool called Greyball that Uber has used to avoid scrutiny from law enforcement and regulators in the cities into which it’s tried to expand. The tool would ascertain, based on a variety of signals, when cops might be trying to summon an Uber and would prevent them from doing so. Instead, they’d see a special – “personalized” – version of the app that misinformed them that there were no cars in the vicinity.

How is “personalized learning” – the automation of education through algorithms – a form of “greyballing”? I am really intrigued by this question.

Another piece of the automation puzzle for education (and for “smart car” and for “smart homes”) involves questions of what we mean by “intelligence” in that phrase “artificial intelligence.” What are the histories and practices of “intelligence” – how have humans been ranked, categorized, punished, and rewarded based on an assessment of intelligence? How is intelligence performed – by man (and I do mean “man”) and by machine? What do we read as signs of intelligence? What do we cultivate as signs of intelligence – in our students and in our machines? What role have educational institutions had in developing and sanctioning intelligence? How does believing there’s such a thing as “machine intelligence” challenge some institutions (and prop up others)?

But I want to press on a little more with a look at automation and labor: this issue of driverless cars and driverless school, this issue of “freedom” as being intertwined with algorithmic decision-making and precarious labor.

I am lifting the phrase “driverless school” for the title of this talk from Karen Gregory who recently tweeted something about the “driverless university.” I believe she was at a conference, but in the horrible way that Twitter strips context from our utterances, I’m going to borrow it without knowing who or what she was referring to and re-contextualize the phrase here for my purposes because that’s the visiting speaker’s prerogative.

I do think that in many ways MOOCs were envisioned – by Thrun and by others – as a move towards this idea of a “driverless university.” And that phrase, and the impulse behind it, should prompt us to ask, no doubt: who is currently “driving” school? Who do education engineers imagine is doing the driving? Is it the administration? The faculty? The government? The unions? Who exactly is going to be displaced by algorithms, by software that purports to make a university “driverless”?

What’s important to consider, I’d argue, is that if we want to rethink how the university functions – and I’ll just assume that we all do in some way or another – “driverlessness” certainly doesn’t give the faculty a greater say in governance. (Indeed, faculty governance seems, in many cases, one of the things that automation seeks to eliminate. Think Thrun’s comments on tenure, for example.) More troubling, the “driverlessness” of algorithms is opaque – even more opaque than universities’ decision-making already is (and that is truly saying something).

And despite all the talk of catering to what Silicon Valley has lauded in the “self-directed learner,” to those whom Tressie McMillan Cottom has called the “roaming autodidacts,” the “driverless university” certainly does not give students a greater say in their own education either. The “driverless university,” rather, is controlled by the engineers who write the algorithms, those who model the curriculum, those who think they can best navigate a learning path. There is still a “driver,” but that labor and decision-making power is obscured.

We can see the “driverless university” already under development perhaps at the Math Emporium at Virginia Tech, which The Washington Post once described as “the Wal-Mart of higher education, a triumph in economy of scale and a glimpse at a possible future of computer-led learning.”

Eight thousand students a year take introductory math in a space that once housed a discount department store. Four math instructors, none of them professors, lead seven courses with enrollments of 200 to 2,000. Students walk to class through a shopping mall, past a health club and a tanning salon, as ambient Muzak plays.

The pass rates are up. That’s good traffic data, I suppose, if you’re obsessed with moving bodies more efficiently along the university’s pre-determined “map.” Get the students through pre-calc and other math requirements without having to pay for tenured faculty or, hell, even adjunct faculty. “In the Emporium, the computer is teacher,” The Washington Post tells us.

“Students click their way through courses that unfold in a series of modules.” Of course, students who “click their way through courses” seem unlikely to develop a love for math or a deep understanding of math. They’re unlikely to become math majors. They’re unlikely to become math graduate students. They’re unlikely to become math professors. (And perhaps you think this is a good thing if you believe there are too many mathematicians or if you believe that the study of mathematics has nothing to offer a society that seems increasingly obsessed with using statistics to solve every single problem that it faces or if you think mathematical reasoning is inconsequential to twenty-first century life.)

Students hate the Math Emporium, by the way.

Despite The Washington Post’s pronouncement that “the time has come” for computers as teachers, the time has been coming for years now. “Programmed instruction” and teaching machines – these are concepts that are almost one hundred years old. (So to repeat, the push to automate education is not about technology as much as it’s about ideology.)

In his autobiography, B. F. Skinner described how he came upon the idea of a teaching machine in 1953: Visiting his daughter’s fourth grade classroom, he was struck by the inefficiencies. Not only were all the students expected to move through their lessons at the same pace, but when it came to assignments and quizzes, they did not receive feedback until the teacher had graded the materials – sometimes a delay of days. Skinner believed that both of these flaws in school could be addressed by a machine, so he built a prototype that he demonstrated at a conference the following year.

Skinner’s teaching machine broke material down into small chunks – “bite-sized learning,” in today’s buzzword. Students moved through these chunks incrementally, which Skinner believed was best for “good contingency management.” Skinner believed that the machines could be used to minimize the number of errors that students made along the way, maximizing the positive behavioral reinforcement that students received. Skinner called this process “programmed instruction.”

Driverless ed-tech.

“In acquiring complex behavior the student must pass through a carefully designed sequence of steps,” Skinner wrote, “often of considerable length. Each step must be so small that it can always be taken, yet in taking it the student moves somewhat closer to fully competent behavior. The machine must make sure that these steps are taken in a carefully prescribed order.”

Driverless and programmatically constrained.
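Mechanically, the “carefully prescribed order” Skinner describes reduces to something like the following toy sketch – invented frames and invented code, offered only to show how little “intelligence” the mechanism requires: a fixed linear sequence of tiny steps, immediate feedback, no advancement until the “correct” behavior is emitted.

```python
# Toy sketch of Skinner-style "programmed instruction": a fixed, linear
# sequence of tiny frames; the learner cannot advance until the answer
# matches, and feedback is immediate. (Invented frames, not a real course.)
FRAMES = [
    ("2 + 2 = ?", "4"),
    ("4 + 4 = ?", "8"),
]

def run_program(frames, answers):
    """Step through frames in the prescribed order; repeat each until correct."""
    transcript = []
    answers = iter(answers)
    for prompt, correct in frames:
        while True:
            given = next(answers)
            ok = (given == correct)
            transcript.append((prompt, given, "right" if ok else "wrong"))
            if ok:
                break  # immediate reinforcement, then on to the next small step
    return transcript

log = run_program(FRAMES, ["5", "4", "8"])
for entry in log:
    print(entry)
```

That’s the whole of it: a lookup table and a loop. The “teaching” is entirely in how the frames were authored and ordered in advance.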

Skinner had a dozen of the machines he prototyped installed in the self-study room at Harvard in 1958 for use in teaching the undergraduate course Natural Sciences 114. “Most students feel that machine study has compensating advantages,” he insisted. “They work for an hour with little effort, and they report that they learn more in less time and with less effort than in conventional ways.” (And we all know that if it’s good enough for Harvard students…) “Machines such as those we use at Harvard,” Skinner boasted, “could be programmed to teach, in whole and in part, all the subjects taught in elementary and high school and many taught in college.” The driverless university.

One problem – there are many problems, but here’s a really significant one – those Harvard students hated the teaching machines. They found them boring. And certainly we can say “well, the technology just wasn’t very good” – but it isn’t very good now either.

Ohio State University psychology professor Sidney Pressey – he’d invented a teaching machine about a decade before B. F. Skinner did – said in 1933 that,

There must be an “industrial revolution” in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. Work in the schools of the future will be marvelously though simply organized, so as to adjust almost automatically to individual differences and the characteristics of the learning process. There will be many labor-saving schemes and devices, and even machines – not at all for the mechanizing of education, but for the freeing of teacher and pupil from educational drudgery and incompetence.

Oh, not to replace you, teacher. To free you from drudgery, of course. Just like the Industrial Revolution freed workers from the drudgery of handicraft. Just like Uber drivers have been freed from the drudgery of full-time employment by becoming part of the “gig economy” – and just like Uber will free them from the drudgery of precarious employment when it replaces them with autonomous vehicles.

Teaching machines – the driverless school – will replace just some education labor at first: the bits of it that the engineers and their investors have deemed repetitive, menial, unimportant, and, let’s be honest, too liberal. No one seems interested, however, in stopping students from having to do menial tasks. The “driverless university” will still mandate that students sit in front of machines, click buttons, and answer multiple choice questions. “Personalized,” education will be stripped of all that is personal.

It’s a dismal future, this driverless one, and not because “the machines have taken over,” but because the libertarians who build the machines have.

A driverless future offers us only more surveillance, more algorithms, less transparency, fewer roads, and less intellectual freedom. Skinner would love it. Trump would love it. But we, we should resist it.