Looking back on a number of metrics associated with the 253 feature-length science fiction, fantasy, or supernatural horror films I’ve seen from the years 2011-2015, I don’t find too much of interest, but an uninteresting result is still something. It’s not bad to have empirical evidence for what you might have guessed, and if you look at something closely enough for long enough, you come out the other side of uninteresting and enter a tiny world of nerdy fun.

Correlations among metrics

First, a matrix of correlation coefficients based on data from multiple sources:

As a basic explanation of what’s going on here, every available set of metrics has been compared with every other. For the sake of space, the column labels have been simplified to letters that correspond to the letters beside the row labels. “My rating” is the collection of scores I gave each film at the time I watched it. Evidently I was sparing with my marks: I didn’t give out a single “10.” I also never changed a rating, so the “My rating” metric is not colored by rosy retrospection. But in view of the fact that my opinions did evolve a little over time, I built a “retro rating” metric by awarding a one-point bonus to every film noted in my previous “Movie Favorites: SF/F/H 2011-2015” post. Most other data sources should be transparently intelligible, except for row/column K, which is based on multiplying the IMDb user rating and RT critic rating together—a metric I’ve used in some earlier data mining posts about movies.

Unsurprisingly, my judgments line up best with those of other users at IMDb, RT, and (to a slightly lesser extent) Letterboxd. It’s only a moderate correlation, but aggregate ratings from critics have even weaker correlations with my scores.
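For anyone curious how such a matrix comes together, pandas makes the all-against-all comparison a one-liner. This is just a sketch with toy stand-in data: the numbers and column names below are hypothetical, not the actual spreadsheet.

```python
import pandas as pd

# Toy stand-in: one row per film, one column per metric.
# (Hypothetical values and column names, not the real dataset.)
df = pd.DataFrame({
    "my_rating": [7, 5, 8, 6, 4],
    "imdb":      [6.8, 5.9, 7.4, 6.2, 5.1],
    "rt_critic": [70, 40, 85, 55, 30],
})

# The row/column K metric: IMDb user rating times RT critic rating.
df["imdb_x_rt_critic"] = df["imdb"] * df["rt_critic"]

# Every metric compared with every other: pairwise Pearson correlations.
corr = df.corr()
print(corr.round(2))
```

The resulting matrix is symmetric with 1.0 down the diagonal, which is why shortening the column labels to single letters costs nothing.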

Optimal movie selection

Looking only at the 40 feature-length films I eventually designated as favorites, here are the minimum scores they achieved:

IMDb 6.0+
Letterboxd 2.7+
RT user 43%+
RT critic 32%+
Metacritic 41+
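In code, that floor amounts to a simple all-metrics check. A minimal sketch, with hypothetical dictionary keys standing in for the five sources:

```python
# Minimum scores achieved by the 40 favorites, from the list above.
MINIMUMS = {
    "imdb": 6.0,
    "letterboxd": 2.7,
    "rt_user": 43,
    "rt_critic": 32,
    "metacritic": 41,
}

def passes(film):
    """True if a film meets or beats every favorite's minimum score."""
    return all(film[metric] >= floor for metric, floor in MINIMUMS.items())

# A hypothetical film that clears every floor:
print(passes({"imdb": 6.4, "letterboxd": 3.1, "rt_user": 55,
              "rt_critic": 40, "metacritic": 50}))  # True
```

Note that a film failing even one floor gets skipped, which is exactly what makes the filter safe here: no actual favorite fell below any of these minimums.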

If I’d known this in advance, I could have skipped 29 out of 253 films without missing out on any favorites. That is to say, if I had insisted sort of compulsively that everything I watch have at least the scores above, I could have achieved a 40/224 favorite-to-watched ratio. But if I’d been interested in a compromise—discovering fewer favorites but also watching far fewer films I didn’t enjoy—what criteria would have yielded better ratios? To answer that question, I wrote a script to cycle through random threshold values for all five metrics and find a set of scores yielding a good ratio for each number of favorites:

My script in fact yielded much more data, logging each ratio it preferred over the last, and one very slow version of the script deterministically ran through every possible value for every metric. But the “random walk” version converged on similar results much more quickly.
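The random version of that search might look something like the sketch below. It rests on assumptions: `films` is a list of `(scores, is_favorite)` pairs, and the score ranges are plausible guesses rather than the script's actual settings.

```python
import random

# Plausible score ranges to draw random thresholds from (assumed).
METRICS = {
    "imdb": (5.0, 8.0),
    "letterboxd": (2.0, 4.0),
    "rt_user": (20, 90),
    "rt_critic": (20, 90),
    "metacritic": (30, 80),
}

def apply_thresholds(films, thresholds):
    """Return (favorites kept, total kept) after filtering on the thresholds."""
    kept = [is_fav for scores, is_fav in films
            if all(scores[m] >= t for m, t in thresholds.items())]
    return sum(kept), len(kept)

def random_search(films, trials=10_000, seed=0):
    """For each possible number of favorites kept, remember the threshold
    set that watches the fewest total films, i.e. the best ratio."""
    rng = random.Random(seed)
    best = {}  # favorites kept -> (total watched, thresholds)
    for _ in range(trials):
        thresholds = {m: rng.uniform(lo, hi) for m, (lo, hi) in METRICS.items()}
        favs, total = apply_thresholds(films, thresholds)
        if total and (favs not in best or total < best[favs][0]):
            best[favs] = (total, thresholds)
    return best
```

The deterministic version would replace the random draws with nested loops over every observed score per metric; the random draws just reach comparable answers in far fewer evaluations.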

My conclusions are a little impressionistic. Aiming at a ~33-34% favorite-to-total ratio “feels” about right. Ratios above 38% require very high RT critic ratings, perhaps limiting a viewer to a steady diet of blockbusters and children’s movies. Surprisingly, Metacritic and IMDb scores don’t seem very crucial in these results: they often return to their start values as the ratios improve, suggesting that the Letterboxd or RT user thresholds might have served about as well on their own. So I suspect I could find favorites more easily in the future by selecting only SF/F films that have a Letterboxd rating of 3.4+, an RT user approval rating of 61% or above, and an RT critic rating of 68% or above. I mean, I doubt I’ll do much with the information—movies that don’t turn out to be favorites can still be fun, etc.—but it was an engaging puzzle to work through.