When Your Boss Is an Algorithm

Tuesday, December 27, 2016

In The New Inquiry, artist and programmer Sam Lavigne points out that we’re all working for free for machine learning algorithms. Worse than that, though, is that the nature of machine learning means that every time we interact with one of these algorithms, we’re actually automating our own jobs out of existence. But describing this system as exploitative misses an important point, and the critique overlooks a more pertinent problem with working for machine learning algorithms.

It works like this: the computer does a thing. Let’s say it suggests a restaurant for you to try tonight. Maybe you make a reservation. In that case, if the computer had a back and an arm, it would pat itself on the back. But it doesn’t, so it does nothing. Or maybe you click “next.” In that case, the computer makes a small adjustment to its recommendation engine. With enough recommendations and enough “books” and “nexts,” the computer gets better and better at recommending restaurants. To learn more, check out the first three or so chapters of The Master Algorithm.
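The loop described above is just online supervised learning: every “book” or “next” is a label, and the model nudges its weights a little after each one. Here is a minimal sketch, with made-up feature names and a made-up learning rate for illustration (note that in practice a confirmed booking is also a signal, but a confident correct prediction barely moves the weights, which is the “pat itself on the back” case):

```python
import math

def predict(weights, features):
    """Score a restaurant: estimated probability the user will book it."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

def update(weights, features, booked, lr=0.1):
    """One feedback event: nudge weights toward the observed outcome.

    'book' = 1, 'next' = 0. The closer the prediction already was to
    the outcome, the smaller the adjustment.
    """
    error = booked - predict(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

# Hypothetical restaurant features: [is_italian, is_cheap, is_nearby]
weights = [0.0, 0.0, 0.0]
feedback = [([1, 0, 1], 1),   # Italian place nearby: booked
            ([0, 1, 0], 0),   # cheap place far away: clicked "next"
            ([1, 1, 1], 1)]   # cheap Italian place nearby: booked
for features, booked in feedback:
    weights = update(weights, features, booked)
```

After a few events, the model already leans toward recommending nearby Italian restaurants, with no human ever telling it why.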

A better example is Facebook’s photo tagging suggestions. When you upload a photo, Facebook guesses who is in it. When you click one of the names, nothing happens. But when you type in a different name, you’re teaching Facebook’s face recognition algorithm how to do its job better.

Exploitative or Nah?

Machine learning algorithms require human labor to improve. Humans have to say “yes” or “no,” or the algorithm doesn’t know when to adjust itself.

Some of this labor is paid. Lavigne used the example of Facebook’s new personal digital assistant service called M that “completes tasks and finds information on your behalf.” Every answer is generated by a machine but then checked by a person. Facebook pays humans to correct mistakes the machine makes.

The irony, as Lavigne points out, is that these workers are training their computer bosses to take over their jobs. But most of this labor is unpaid. You don’t get any money when you respond to Facebook’s photo tagging suggestions. Lavigne describes this relationship as exploitative because internet users aren’t getting paid for their contributions to companies’ bottom lines.

What Lavigne deftly recognizes is that today and going forward machine learning algorithms are the “means of production.” They are the capital, the machinery, the factories and the farms of the new economy. In the marketplace for online services, the companies with the best algorithms will prevail.

Of course, all paid labor is “exploitative” in one narrow sense: not the Marxian one, but simply that the owners of the means of production profit from the value workers bring to the task, in return for which the workers are paid on agreed-upon terms. This will be the case so long as scarcity exists. Given scarcity, the best we can do is keep training algorithms until the cost of goods and services falls to near zero.

Sure, Facebook isn’t paying you cash money to tag your photos. But it’s still paying you, in the form of a service you want. Photo hosting isn’t free. Facebook gives you, as far as I know, unlimited photo hosting and doesn’t charge you a dime. It doesn’t even force you to work for it. It just asks you if you’d like to help train its algorithm. It makes no sense to see getting a service, and being asked for labor, as any more exploitative than getting a service and being asked to pay for it. Facebook is paying its users, but instead of a paycheck, health insurance, and free snacks, it’s connecting us to our friends across the world.

The Better Critique

Lavigne is absolutely right in saying that current and future machine learning algorithms are the means of production, with all the attendant problems. One big problem with machine learning is that how good your algorithm can get is directly proportional to the amount of human-corrected data you have access to.

This means that machine learning is an industry with a high natural barrier to entry in terms of fixed costs. Getting all that data and having humans correct it is capital intensive, which means that only companies large enough to amass lots of human-corrected data can compete. This creates a competition problem.

When a service becomes more valuable as more people join, it’s called the “network effect.” The network effect is why Facebook doesn’t have any real competitors. Everyone joined Facebook back in the day, so everyone joins Facebook today. Same with LinkedIn. There are other professional social networking sites, but they’re less valuable because they’re less popular, which keeps them less valuable, which keeps them less popular.
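A back-of-the-envelope way to see why popularity compounds: if each pair of users who can connect adds a sliver of value, total value grows roughly with the square of the user count. This is Metcalfe’s law, a common toy model of network effects; the numbers below are made up for illustration:

```python
def network_value(users, value_per_connection=0.01):
    # Metcalfe-style toy model: value scales with the number of
    # possible user pairs, users * (users - 1) / 2.
    return value_per_connection * users * (users - 1) / 2

# Doubling the user base roughly quadruples the value:
print(network_value(1_000))  # 4995.0
print(network_value(2_000))  # 19990.0
```

Under this model, a smaller rival with identical features starts at a steep value deficit simply for being smaller.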

This isn’t a problem capitalism can’t overcome. Innovators can always offer new services that are more valuable than the incumbent network (see Snapchat). But it is a problem. When you dislike Facebook’s new privacy policy, for example, you can’t just jump ship to a similar website.

The large amount of human-corrected data machine learning algorithms require is both a network effect and a high fixed cost. It’s a network effect in that the more people use the service (thereby generating the data), the better the service becomes. But lots of data requires lots of correctors, hence the high fixed cost.

A Potential Solution

One way to reduce the barriers to entry for companies that want to use machine learning is to keep as much information free as possible. Making algorithms open source (free and legal to copy and use) would make it much easier for companies to compete with each other. There would still be an incentive to innovate, since incumbent players would retain an advantage over upstarts in the amount of data they have access to. Either way, there’s nothing wrong with working for an algorithm that isn’t also wrong with working for a human. Heck, there are lots of benefits to working for a machine. A machine is less moody than your current boss and doesn’t care whether you wear a suit.

More to the point, complaining that machine learning is more exploitative than working for a human boss is like complaining that being beaten with a blue shovel is more painful than being beaten with a red one.

Y’all need to be complaining about scarcity. That’s what’s exploitative. As long as you need to produce things people want in order to live, the people who have more stuff will exploit the people who have less. The end goal isn’t to find the perfect boss who will treat you well. Or to overthrow capitalism. The goal is to innovate until you don’t need a boss.

The best way to overthrow capitalism is to make it unnecessary. By every single possible measure (working conditions, hours, injuries on the job) the poorest workers in the US are less exploited than they’ve ever been. And why? Because they need their jobs less than they ever have. Goods and services are hella cheap. This is where we need to be putting our focus.

We need to focus on keeping the market for services powered by machine learning competitive. Not because correcting machine learning algorithms is a good or bad job, per se, but because where we’re going, we don’t need jobs. And machine learning algorithms will help us get there.

Cathy Reisenwitz is a D.C.-based writer. She is Editor-in-Chief of Sex and the State and her writing has appeared in The Week, Forbes, the Chicago Tribune, The Daily Beast, VICE Motherboard, Reason magazine, Talking Points Memo and other publications.