Apparently TV scripts are being rewritten by algorithms now, or at least that's the impression you got if you paid any attention to the publicity around Cary Fukunaga's Maniac, released on Netflix in September. Fukunaga gave an interview to GQ, where he said, in part:

Because Netflix is a data company, they know exactly how their viewers watch things... So they can look at something you're writing and say, We know based on our data that if you do this, we will lose this many viewers. So it's a different kind of note-giving. It's not like, Let's discuss this and maybe I'm gonna win. The algorithm's argument is gonna win at the end of the day.

At least in the parts of the internet that turn up in my feeds, this was not well received. Ron Gilmer at Collider wrote about "a computer program dictating creative changes" and "[surrendering] our entertainment to the equivalent of Skynet," while an article at Quartzy pondered whether the show would look like it had been "written by a computer."

Meanwhile my Twitter feed was full of dire predictions of the assimilation of the artistic impulse into the algorithmic Borg, although admittedly a lot of this came from science fiction writers whose professional viability depends in part on being able to come up with worst case scenarios for any new technological development.

But if we want to get past the hype, it seems reasonable to point out that TV ratings have existed in America since the Truman administration. What is Netflix doing that's actually new?

We can start by pointing out what they're not doing. We've seen such advances in facial and voice recognition technology that it might seem plausible that their Skynet-for-script-development is being fed video footage and spitting out analysis and statistics. But that's not happening. As machine-learning specialist Professor Michael Jordan points out, Google's search algorithm still can't parse a relatively simple sentence like "What's the second-biggest city in New England that is not on the coast?" We're nowhere near having software that can be given a TV episode and make useful determinations about its plot or characters.

So we need human beings to watch the show and give some sort of simplified analysis of its content. The traditional Hollywood method of simplified analysis is what's called 'coverage,' where agencies and production companies hire script readers to reduce unproduced screenplays to summaries of their plot, characters, quality and potential commercial prospects. A 2006 New Yorker article by Malcolm Gladwell contains an account of Epagogix, a company that claims to be able to predict the eventual box office takings of a movie based on this sort of simplified plot synopsis.

It's worth thinking about that for a moment. First, ask anyone who works in movie production whether you can reliably tell what the finished product will look and feel like from the original screenplay. In my experience, the answer is a unanimous no. Putting aside sequels that follow an existing template, there are so many points during pre-production and filming where the character of the work can change completely, based on choices made by the director, the actors, the cinematographer, the editor, the composer, the costume designer and so on.

Then, once the film is completed, there are still plenty of confounding factors affecting its profitability: changes in culture and fashion, news stories that affect the way audiences feel about the star(s) and subject matter, or internal studio politics that determine whether a movie is promoted or dumped. Consider the creative disagreement that allegedly led Paramount to abandon the international theatrical release of Alex Garland's Annihilation and send it straight to streaming on Netflix instead.

Given all the things that can distort the relationship between a screenplay and its eventual box office take, I think it's worth being extremely sceptical about anyone who claims to have discovered a mathematical relationship between the two. In fact, I'll go a step further. It's worth being extremely sceptical when you see any news coverage of developments in AI, period. The field is so overhyped at the moment: if you take a look at job postings for tech firms, there are a huge number where the business strategy appears to be 1) Hire data scientists; 2) ?????; 3) Profit. Every company with unanalysed data seems to want to wave a machine learning magic wand at it, and any company with a proprietary algorithm has a vested interest in making it seem as magical as they can.

Netflix, then. What is it actually doing?

We have some clues, thanks to the work of Alexis Madrigal at the Atlantic, who used web scraping software to get the details of the oddly-specific genre categories that turn up in your Netflix recommendations. His analysis revealed that Netflix "microgenres" are formed out of combinations of categories including things like "based on books," "set in the 1970s," and an odd, unexplained enthusiasm for the works of Raymond Burr.
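To get a feel for how a small set of categories can balloon into thousands of oddly-specific microgenres, here's a toy sketch of the combinatorics. The example categories echo Madrigal's findings, but the combination rule is my own guess, not Netflix's actual system.

```python
from itertools import product

# Toy category lists in the style of Madrigal's scraped results.
# These are illustrative placeholders, not real Netflix taxonomy.
adjectives = ["Emotional", "Gritty", "Critically-acclaimed"]
genres = ["Dramas", "Thrillers"]
qualifiers = ["Based on Books", "Set in the 1970s"]

# Crossing the lists yields every possible microgenre label.
microgenres = [
    f"{adj} {genre} {qual}"
    for adj, genre, qual in product(adjectives, genres, qualifiers)
]
# Even three tiny lists produce 3 * 2 * 2 = 12 distinct microgenres.
```

Scale the lists up to dozens of entries each and you quickly get the tens of thousands of hyper-specific categories Madrigal found.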

Madrigal ended up getting in touch with Todd Yellin, a Netflix VP, and this is where we learn more about the details of the company's process. Netflix has replaced traditional coverage with a process for gathering metadata on everything in their library. Instead of hiring aspiring screenwriters to write plot synopses, Netflix hires them to fill out an elaborate spreadsheet Yellin designed, with a 36-page user manual (or at least it was originally an elaborate spreadsheet; maybe they've since replaced it with an elaborate app).

I haven't seen the spreadsheet—unsurprisingly, it doesn't seem to be publicly available—but it is apparently used to record details like story locations and the lead characters' jobs, to note plot elements like spaceships or zombies or the presence of a strong female lead, and to rate various aspects from 1 to 5, like how happy the ending is, how much gore is seen, and the social acceptability of each of the characters. Netflix then combines all of this metadata with its viewership data (basically a more granular version of traditional ratings) to generate recommendations for users and to tell its creators what they can and can't do.
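To make that concrete, here's a minimal sketch of what one row of that metadata might look like as a data structure. Every field name here is my own guess based on the details reported above, not the actual spreadsheet schema.

```python
from dataclasses import dataclass
from typing import List, Set

# Hypothetical sketch of one title's metadata row. Field names are
# guesses based on Madrigal's reporting, not Netflix's real schema.
@dataclass
class TitleMetadata:
    title: str
    story_locations: List[str]
    lead_character_jobs: List[str]
    plot_elements: Set[str]   # e.g. {"zombies", "strong female lead"}
    ending_happiness: int     # rated 1 (bleak) to 5 (happy)
    gore_level: int           # rated 1 (none) to 5 (graphic)

    def __post_init__(self):
        # The reported scales run from 1 to 5, so reject anything else.
        for name in ("ending_happiness", "gore_level"):
            value = getattr(self, name)
            if not 1 <= value <= 5:
                raise ValueError(f"{name} must be 1-5, got {value}")
```

Even this toy version makes the point below obvious: a handful of labels and 1-to-5 scores is a drastic reduction of everything that's actually on screen.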

I don't know about you, but for me this is a pretty far cry from Skynet rewriting people's screenplays. Studio executives have been demanding rewrites from screenwriters since Hollywood began, based on some unarticulated admixture of gut hunches and previous box-office performance. Using an elaborate spreadsheet seems like a more scientific approach, but consider what a lossy compression format that spreadsheet is. Every second of film or TV is filled with information, signals that shape the viewing experience, from lighting and sound design and casting up to word choice in the dialogue. Can you really capture all of that in a spreadsheet completed by an independent contractor who's watching 20 hours of content per week? Imagine giving someone the spreadsheet to look at and then getting them to watch the movie. Would it be identical to the movie they had in their head based on the metadata alone?

(That's putting aside the subjective aspects of categorising and rating content, like anything to do with humour. I know people with otherwise good taste who can't stand Monty Python; I would rather chew on tinfoil than watch an episode of Arrested Development.)

Plus, you have to take into account that viewer response is based on combinations of factors. In different genres you're willing to accept different things. I kept watching The Shield after the main characters murdered a police officer in cold blood, but it would probably have derailed my viewing of Grace and Frankie.

The combinations can be more subtle and counterintuitive than that—screenwriter William Goldman once blamed the commercial disappointment of his The Great Waldo Pepper on audiences liking Robert Redford too much in the lead, meaning that they turned against the movie when Redford's character failed to prevent Susan Sarandon's character from falling to her death. The same story would have succeeded if the film had starred an actor with more of a dark side to his persona, Goldman argued. "I truly believe that if Jack Nicholson had been in the part, he wouldn't have been as good as Redford, but the movie would have worked for audiences," he wrote.

So you can start to see how murky this stuff gets, and how many variables there are. If you were really serious about taking a scientific approach, you could try creating alternative edits of TV episodes, changing one aspect at a time, assigning them randomly to viewers and A/B testing the results. Netflix has apparently been doing this sort of thing with its promotional materials, but I doubt they're going to commit the resources required to experiment with the content itself. (Although maybe you could see that as a missed opportunity to try out different casting choices for Iron Fist.)

None of this is meant to argue against the idea of collecting data or doing analysis. As someone who regularly downloads interesting-looking datasets to play with in R, I'd love to get my hands on Netflix's data. But as someone who also gets paid money to write fiction I'd be extremely dubious about basing any creative decision entirely on that data, even if what you're trying to maximise is eyeballs rather than quality.

If you've made it this far, I should sound a note of caution: because I haven't been able to examine Netflix's metadata collection process for myself, I don't actually know how it deals with series pacing, which was the issue with Cary Fukunaga's original version of Maniac. But based on the above, I would hope that future Netflix creators can treat feedback on their work as a slightly better-informed version of standard studio notes, rather than as Netflix wielding some sort of unimpeachable evidence-based argument.

And I hope the Netflix executives giving feedback are able to avoid believing their own press. When you have qualitative research in spreadsheet form, it's easy to start thinking that you're dealing in cold, hard fact rather than the mushy intangibles of human experience and perception. But Netflix has not solved the problem of art, and even the claim that it has solved the problem of how to keep people watching is still pretty dubious.