(Photo by Ron Hoskins/NBAE via Getty Images)

Ready for a bold prediction? I think the most significant early advances in the realm of video tracking will have to do with helping us understand and quantify the value of setting and using screens properly.

The NBA, more than ever, is a pick-and-roll league, and yet so many of its players look as awkward working with a screen as I look working with tap shoes. This year, I’ve watched a lot of games from both bad and good teams1, and I would posit that one major determining factor that separates the top half of the standings from the bottom is that good teams get more from their picks.

This is something I have been thinking (and tweeting2) about for a while now, and then this week the issue came up again. ESPN’s Amin Elhassan wrote a smart piece using Vantage Sports data to find effective screen setters, and Hardwood Paroxysm’s Andrew Lynch wrote about how programming machines to recognize screens will allow humans a better understanding of their impact.

Purportedly, the goal is to “wipe your man off on the screen.” That’s what I was taught in Jr. Jazz and in church gyms, anyway. Yet a lot of guys who have been schooled in much loftier basketball classrooms don’t seem to accomplish this.

Coaches and scouts don’t need tracking cameras, of course, to tell them when somebody set or used a pick incorrectly. But where the new tools will help us is in figuring out trends and in accurately assessing the correlation of this skill to winning. What my eyeballs tell me is that teams with good records and efficient offenses usually feature tightly executed screens at predetermined spots and angles. Their peers at the other end don’t.

Using the screen

I have long held that the Jazz youngsters, in general, need to make better use of picks. For example, Alec Burks had a tendency early on to reject the pick (seemingly) a lot more often than he’d actually use it. The coaches realized this and started setting picks that would favor Burks’ preference to go baseline3, but for several weeks, he was robbing the pick of its efficiency.

Gordon Hayward had a habit of going over the pick and then slowing for a beat while taking a lateral step, ostensibly to read his options. The problem is that this gave his man a half second to recover, and in a lot of cases it was as if the pick never happened. Trey Burke was guilty early on of leaving too much space between himself and the screener. All of these things have been cleaned up to some degree, but they illustrate how many ways there are to reduce the value a pick creates for your offense.

Some of this should be low-hanging fruit for video tracking systems. With the bird’s eye view, it should be easy to measure the distance between the handler and the screener on picks – at least on-ball ones. Coaches, analysts, scouts and even cheeky bloggers would have empirical proof of who tends to wipe their man off on the screen and who is just going through the motions.
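As a rough illustration of how simple that measurement is, here’s a minimal sketch, assuming the tracking feed gives bird’s-eye (x, y) positions in feet. The function name and the sample coordinates are made up for the example; they aren’t from any real tracking system:

```python
import math

def handler_screener_gap(handler_xy, screener_xy):
    """Euclidean distance (feet) between the ball handler and the screener
    at the moment the pick is set, from overhead tracking coordinates."""
    dx = handler_xy[0] - screener_xy[0]
    dy = handler_xy[1] - screener_xy[1]
    return math.hypot(dx, dy)

# Tight screen: handler rubs shoulder-to-shoulder with the screener.
tight = handler_screener_gap((25.0, 28.0), (25.5, 26.5))  # roughly 1.6 ft
# Loose screen: daylight a defender can slip through.
loose = handler_screener_gap((25.0, 28.0), (25.0, 33.0))  # 5.0 ft
```

Flag every on-ball screen where that gap exceeds a couple of feet and you have an automated “going through the motions” detector.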

Take these two cases, for example. I pulled an example from the most efficient offensive team in the league (Miami) and the least efficient (Philadelphia). For fairness’ sake, I went to a random second-quarter moment in each team’s most recent game and watched for the first on-ball screen. It was a random exercise, but it couldn’t have proven my point better.

In Miami’s screen, the guard actually makes physical contact, rubbing the defender off on his screener:

Philly’s guard, not surprisingly, looks like he’s afraid of contracting a contagious illness if he gets too close to his teammate:

If the purpose of a screen is to wipe your man off, then this picture is like trying to wipe food from the corners of your mouth by having a friend wave a napkin at you from a few feet away. And what’s sad is that this isn’t even among the 20 worst screen uses I’ve seen in the last week. Sometimes the handler leaves enough daylight between himself and his screener for the Griffin Force team to drive a Kia through. What’s the point of even setting that pick?

There are a great many things that make Miami a better team than Philadelphia, but it’s neither coincidence nor hasty generalization to say that this is something that separates good offensive teams from bad ones.

We can scout this easily on a case-by-case basis with our eyes (that Ben Dowsett link I shared about Burks does so very well), but programmed systems could take the guesswork out of it and free human beings up to spend our time analyzing what it means. The capability is already there; we just have to figure out how to access and leverage it.

Setting the screen

A lot of my attention (and frustration) has been about ball handlers who never learned what “wiping your man off” means, but let’s not let the screeners off the hook.

Setting effective (or ineffective) screens can be a major factor in offensive efficiency. The most easily measured way bad screens hurt efficiency is when a poorly set pick results in a loss of possession: an illegal screen. According to some quick numbers I ran on the correlation of overall efficiency to various turnover types, there’s a stronger relationship between being good on O and limiting offensive fouls (the category that includes illegal picks4) than between being good on O and limiting other turnover types. Put another way: being worse than average at offensive fouls appears to be a stronger indicator of a bad offense than being worse at ball-handling mistakes.5

A screener can also set a pick at the wrong angle, so that when he opens up on the pop or dive, he can’t see what the handler is going to do. He can also position the screen at the wrong spot on the floor, causing the whole play to “bend” in a direction that makes the screen less effective and the play break down. The same pick that creates an open shot when it’s set from 18’ might not do anything if you let the defense push you out to 22’.

In short, there are a lot of nuanced factors that determine whether your screen made your team more likely to score, and some of them can be measured by video. The body angles might be a tough ask, but the resulting efficiency of screens set at different distances or angles from the basket could be incredibly prescriptive. Case in point: several years ago, you couldn’t find many teams besides San Antonio and Dallas running elbow pick-and-roll. Today, teams have wised up to the possibilities that particular play creates, and it’s much more prevalent. What else can we learn about screens by enlisting machines to do the tedious part of the work?

The Jazz

Bringing it back to the Jazz, my unofficial theory is that any empirical rating of how our guys set and use picks would probably not grade out too well. While they’re improving and adjusting on the fly, the loose execution and timing are something the young Jazz share with many of their lottery peers. It’s hard to tell in this one because of the angle, but here’s another screen where our guy doesn’t wait for the screener to set up, and then leaves room for his man to get over.

Again, it’s a pick-and-roll league. The Jazz will either figure out how to win on that playing field or they won’t. Imagine how helpful it would be if somebody could arm the team’s development staff with quantifiable, directional data on where they can hone their technique.

Someone will do it. Someone will figure out how to mine the data in a way that makes it a competitive advantage.

Dan Clayton

Dan covered Utah Jazz basketball for more than 10 years, including as a radio analyst for the team’s Spanish-language broadcasts from 2010 to 2014. He now lives and works in New York City where his hobbies include complaining about League Pass, finding good doughnut shops and dishing out assists for the Thoreau It Down team in the Word Bookstore basketball league.

2 Comments

Excellent work, Dan. One of the best pieces up on SCH recently in my personal opinion. Did you see the paper presented at Sloan this year on this subject? Interestingly enough, it was widely panned by the analytics community at large, but I think this had more to do with the specific research format and some holes in the process rather than an indictment of the idea. There’s no doubt this is one of the very next leaps we should be expecting from optical tracking data.

I could not agree more, Dan, on all counts. It’s painfully obvious the Jazz coaching staff has been trying to get this aspect of play in shape all season, and they’re still trying. There’s a lot of room to improve. Kanter’s bevy of moving screens combined with his early pull-outs. Trey Burke’s reluctance to attack the paint coming off screens. Alec Burks rejecting screens, not moving close enough to the screener, or moving too early. Hayward’s either/or problem of attacking hard and being unable to pass or hesitating so as to see the floor, leaving him with poor-percentage pull-up jumpers. Not to mention the inability of the offense to run a pick, not get what they want, then easily and automatically reverse the pick and run it again. I think that as Jazz screens and the use of them improves, the impact on the overall offense will be profound.

Side Notes

Because of the Jazz’s pick situation, I’m watching a lot more lottery teams than usual, but I still have my eye on playoff teams because of the Golden State pick. It means I am seeing good and bad basketball juxtaposed almost nightly.

No small adjustment, by the way. Changing the angle/direction of an elbow screen and roll means adjusting spacing all over the floor and even taking personnel out of the strong-side corner. That the Jazz made that kind of tweak midseason indicates to me that they really wanted to give Burks the freedom to be successful.

Interestingly, passing turnovers had an inverse relationship. Elite offensive teams had a worse average rank for passing miscues than the worst offensive teams. Moving the ball – even with occasional mistakes – is a prerequisite for being great offensively, which supports Clint Johnson’s share & stop doctrine.