Anyone who's used Netflix knows that the titles appearing in your feed are curated based on movies and shows you've previously watched -- it's how you can tell that your brother's friends have been vulturing your account to watch Nic Cage movies. But did you realize that as of December 2017, Netflix knows which image will make you click play and keep watching?

It's true, though "knows" is a strong word for this context. In a Medium post, members of the streaming giant's tech team describe a new recommendation algorithm the company rolled out to serve up unique images to its 100 million-plus subscribers. These images are designed, like most aspects of the Netflix experience, to make you more likely to spend time using the platform, so it's less "knowing" and more "predicting" based on past behavior.

As a simple example, you might see Good Will Hunting pop up in one of your recommended rows (which are also tailored to your viewing habits). If you've watched a lot of love stories in the past, you might see an image of Matt Damon and Minnie Driver kissing, whereas if you're a comedy fan, you'll likely get a shot of Robin Williams:


"Big deal," you might say, "so Netflix is slightly changing which images pop up for shows. Why does this matter?" When you need to digest a lot of information quickly to decide how you spend your time in an internet culture dominated by scrolling through myriad choices, pictures are worth a billion pixels. Nick Nelson, Netflix's Global Manager of Creative Services, said in 2016, "We conducted some consumer research studies that indicated artwork was not only the biggest influencer to a member's decision to watch content, but it also constituted over 82% of their focus while browsing Netflix. We also saw that users spent an average of 1.8 seconds considering each title they were presented with while on Netflix."

It makes sense: It's easier and faster to look at a picture than to read a title or description, so a first visual impression plays a significant role in grabbing your attention. Also, part of the joy of watching television is immersing yourself in a world that looks and feels unique. Take a series like Stranger Things, which ties much of its identity to an '80s American aesthetic. That title font might be what first hooks you into the show's Reagan-era pop culture pastiche. If it used a sleek, modern font, you might've skipped right over it.

Figuring out which image best depicts a show, then, could be the difference between success and failure, and in the past Netflix has tried to optimize cover art by determining the most enticing image for the largest number of users. With a subscriber base that would make Netflix the 12th-largest country on Earth, however, millions upon millions of people might find a different image more appealing than the one Netflix chose for a show. You only get one image per title to make an impression, after all, so there's no way to know who's staying away because they didn't like what they saw. That's what makes the company's new approach so intriguing.

How Netflix's image algorithm works.

As a Netflix subscriber, you are constantly participating in a range of behavioral experiments to produce data that allows the company to recommend movies and TV shows it thinks you'll like. What you watch next, what you give thumbs up and thumbs down to, when you quit watching a not-so-bingeable show: All of this information can help engineers create a recommendation algorithm.

Historically, the company's tech team has tested algorithms on designated batches of users to see which leads to more watch time, more titles watched -- anything that keeps viewers on Netflix. Once they figure out which algorithm performs best, they roll it out to everyone, but all that time spent testing means most of the people on the platform aren't getting what the company terms "the better experience."
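That batch-testing process is essentially a classic A/B test. Here's a minimal sketch of how one might work; the hash-based bucket assignment and watch-time metric are illustrative assumptions, not Netflix's actual pipeline:

```python
import hashlib

def assign_bucket(user_id: str, num_buckets: int = 2) -> int:
    """Deterministically assign a user to a test bucket by hashing their ID,
    so the same user always sees the same algorithm during the test."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % num_buckets

def compare_algorithms(watch_minutes: dict) -> int:
    """Pick the bucket (algorithm) whose users averaged the most watch time."""
    averages = {
        bucket: sum(minutes) / len(minutes)
        for bucket, minutes in watch_minutes.items()
    }
    return max(averages, key=averages.get)

bucket = assign_bucket("user-42")  # stable 0 or 1 for this user
winner = compare_algorithms({0: [120.0, 95.0], 1: [140.0, 160.0]})
print(winner)  # 1: bucket 1's users averaged more watch time
```

The downside the article points out falls directly out of this design: until the test concludes and the winner rolls out, every user in a losing bucket is stuck with the worse experience.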

The new image algorithm, though, works in real time to serve up the image it thinks you'll respond to, and continues collecting data to improve its performance. And it's doing the same for 100 million other subscribers, collecting THAT information for further customization.

In the end, you get an image that Netflix's algorithm thinks will entice you to watch, but the key is that the picture could change tomorrow as the system "learns" more about you and subscribers like you. For any given show, there might be a dozen possible images loaded, which are ranked according to "context." So, in the Good Will Hunting example, the Robin Williams image might be ranked fifth-best for a user Netflix determines is romance-centric, but first for a comedy profile. For each title, you'll get the image with the highest rank based on your profile.
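The per-context ranking described above can be illustrated with a toy lookup. Everything here is invented for illustration — the context labels, image names, and rank numbers are assumptions, not Netflix's real data:

```python
# Hypothetical per-context rankings for Good Will Hunting's candidate images.
# Lower rank number = better fit for that context.
IMAGE_RANKS = {
    "romance": {"damon_driver_kiss": 1, "williams_bench": 5},
    "comedy":  {"damon_driver_kiss": 4, "williams_bench": 1},
}

def pick_image(user_context: str) -> str:
    """Return the highest-ranked (lowest number) image for the user's context."""
    ranks = IMAGE_RANKS[user_context]
    return min(ranks, key=ranks.get)

print(pick_image("romance"))  # damon_driver_kiss
print(pick_image("comedy"))   # williams_bench
```

The real system would continuously re-learn these rankings from click and watch data rather than reading them from a fixed table, but the selection step works the same way: one context, one ranked list, one winner.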

It's not a revolutionary or entirely unique approach to contextualize each user's experience, but doing so for such a huge number of people at on-demand speeds could open the door to further customized aspects of the platform. In their post, the engineers say they could potentially apply the same method to the short descriptions of each title, or even trailers -- if you don't like love stories, a description for an action-romance might focus more on action, though the Netflix engineers are conscious of straying too far from truth, which leads to what they call "clickbait images." There's no point in hyping up a movie a user is unlikely to enjoy, because it risks alienating and disengaging that user.

What that change means for you, and why some people are upset about it.

Automation can make a system simpler and more efficient, but a human being has to build the automated system first, and underlying values and goals come through in the finished product. It may sound simplistic, but you have to ask what the automation is trying to accomplish, a question that's especially important given that most major tech companies frame updates and algorithm changes as delivering a vague "better user experience." What, in the company's terms, does "better user experience" mean? One of Netflix's goals, which most users understand intuitively, is to attract as many subscribers as possible and give them personalized experiences to keep them watching.

It's not that this system is inherently bad so much as that it's not inherently good, especially in the broader context of how the internet works. Search and recommendation algorithms can be pretty effective at surfacing content a user already likes, but they're not very good at exposing an individual to content she doesn't like, or content she doesn't even know she likes. The issue comes down to a basic push-pull concept in machine learning, called exploitation vs. exploration -- the exploitation side is surfacing what a user knows and loves, the "go-tos," whereas the exploration side is what it sounds like, discovering new things you like. (Spotify's "Discover Weekly" feature is a prime example of the exploration approach, but it's also mining your "go-tos" for recommendations.)
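The exploitation-vs.-exploration trade-off is often handled with a bandit strategy such as epsilon-greedy. This is a minimal textbook sketch, not any particular company's implementation; the click-through rates below are made up:

```python
import random

def epsilon_greedy(avg_reward: dict, epsilon: float = 0.1) -> str:
    """With probability epsilon, explore a random option; otherwise exploit
    the option with the best average reward (e.g. click-through rate) so far."""
    if random.random() < epsilon:
        return random.choice(list(avg_reward))   # exploration: try something new
    return max(avg_reward, key=avg_reward.get)   # exploitation: serve the go-to

# Hypothetical click-through rates for three candidate images.
ctr = {"kiss_shot": 0.12, "comedy_shot": 0.08, "ensemble_shot": 0.05}
choice = epsilon_greedy(ctr, epsilon=0.0)  # epsilon=0 means pure exploitation
print(choice)  # kiss_shot
```

Tuning epsilon is exactly the push-pull the article describes: set it to zero and the system only ever shows you your "go-tos"; crank it up and it keeps gambling on images (or songs, or posts) you might not want.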

The vague word "content" actually applies here, because it really could refer to anything online. All the products you use, from Google search to Facebook to Spotify, try to predict what you want in real time, whether it's an image that will intrigue you, a hard news story, a music video, or an uncle's Facebook rant. It forces content creators, especially those who want to appeal to as many people as possible, to play by the algorithms' rules, and no one's immune. Your media becomes a product of your existing tastes, as filtered through a tech company's assumptions about what you already like.

Of course, because online algorithms are human creations, they have biases. Almost daily, you see and hear criticisms leveled at Facebook and Twitter for failing to control what information is disseminated on their platforms, which the companies often dismiss by citing freedom of speech and the fact that the content on their services is dictated by algorithms. The fact that these algorithms rely on user input -- i.e. what you click and engage with -- to deliver new content for you allows the companies to say that any perceived biases are actually the result of user prejudice, as opposed to an inherent flaw in their design.

Case in point: The recent uproar over Netflix's perceived targeting of black users with images of black actors who are often minor characters in a film or show. When people complain, Netflix can say, as they did to Fader, that they're not targeting black users with anything at all; it's a function of the algorithm. "Reports that we look at demographics when personalizing artwork are untrue," the company's statement says. "We don't ask members for their race, gender, or ethnicity so we cannot use this information to personalize their individual Netflix experience. The only information we use is a member's viewing history."

That's technically true, but also a cop out, because it doesn't reveal any of the input factors going into the service's algorithm that delivers the images in the first place. In other words: What's making Netflix's algorithm think certain users will respond best to black actors in a show or movie's cover image? The company conveniently sidesteps this question by saying it doesn't collect any demographic information that would target users' race, without acknowledging that that's exactly what the image-selection algorithm is doing anyway.

Stephen Colbert recently made fun of Netflix's racially targeted promotion, but most responses seem to reflect a fundamental misunderstanding of even Netflix's most basic functionality, which makes it more difficult to hold the company (or Facebook, or Google, or Twitter, etc.) accountable.

How different is Netflix from other big tech companies?

In the wake of the 2016 elections, there's a lot of talk in American media about online echo chambers powered by algorithms, so why couldn't Netflix's image feature produce the wrong kind of reinforcement?

A platform like Facebook, which serves up user-generated content as well as publishing tools for media sites, tends to favor exploitation, because a user can continuously interact with the same or similar posts ad infinitum via comments, shares, and so on. Because Netflix unilaterally controls the content on its platform, however, it's incentivized to lean more on exploration than Facebook is; you can't post your own thoughts to Netflix, and watching the same or similar movies and shows over and over is likely to drive you away from the platform, not keep you on it.

Of course, the fundamental issues with algorithms apply to all tech companies. Clarke's third law, that any sufficiently advanced technology is indistinguishable from magic, gives widely used platforms the appearance of omniscience -- Facebook "knows" what your political beliefs are, Netflix "knows" what you're likely to watch next. And what defense does any mere mortal have against a godlike force? Less dramatically, what defense does a mere mortal have against a photo of Mr. Bean used to promote Love, Actually?

Another implication is that the algorithmic approach shifts responsibility for the results from companies to users. If what appears in your feed, your queue, or your playlist is a result of your own past behavior, it becomes more difficult to criticize or hold accountable the companies that created the systems in the first place. This challenge applies less to Netflix at the moment, since it's a relatively harmless way to watch movies and TV whenever and wherever you want, though it's worth remembering that 10 years ago Facebook was just a great tool to stay in touch with friends and family. The next few years will surely see streaming platforms evolve in ways those on the outside can't predict.

In the meantime, you can stream all four seasons of Black Mirror on Netflix.

Sign up here for our daily Thrillist email and subscribe here for our YouTube channel to get your fix of the best in food/drink/fun.

Anthony Schneck is an entertainment editor at Thrillist. Follow him @AnthonySchneck.