The beauty of the MIT Sloan Sports Analytics Conference, which completed its fifth annual session on Friday and Saturday at the Boston Convention and Exhibition Center, is that so many ideas are shared throughout the weekend--in panels, in informal hallway conversations and at evening get-togethers around the city. With so many intelligent and influential people in one place thinking about how to make better decisions and use data across sports, there is much to ponder in the wake of the Sloan Conference.

To me, two big issues stood out during this year's conference. From Friday morning's marquee panel on Player Development came the age-old question of how much what a player has done in the past tells us about what he or she will do in the future. The staple Basketball Analytics session brought up the issue of which is more informative: larger, more reliable samples or more specific data. Let's consider both of these topics.

How do we predict a player's future?
Malcolm Gladwell was an obvious choice to moderate the discussion of player development. Gladwell's book Outliers was all about how potential gets turned into production--and whether potential really exists at all. One surprising point raised early in the panel was how talent can actually be an impediment because it allows the most gifted young players to take shortcuts and develop bad habits that limit their ultimate level of play. Given the panelists' heavy Houston Rockets lean (Rockets general manager Daryl Morey, the event's co-chair, was joined by his former head coach Jeff Van Gundy), Tracy McGrady caught more than his fair share of flak as an example.

Player development is a wide-open topic that might spin off a column of its own at some point--the 10,000-hour rule, popularized by Gladwell, is especially fascinating in sports terms--but what really struck me was the introduction of psychological data into the discussion. Van Gundy noted the importance of finding players who love the game and will remain motivated throughout their career.

Gladwell responded with an intriguing hypothetical question. If you could have perfect information either on a player's mentality or his past performance (as measured, presumably, by the holy grail of player statistics) but not both, which would you choose?

Van Gundy, unsurprisingly, chose the psychological evaluation based on his belief that he could learn enough about the player's current ability from scouting alone. From my perspective, however, part of what Van Gundy believes will separate players down the road is already contained within their statistics. While we cannot directly observe work ethic and practice habits, a player's past performance should already reflect some of those personal characteristics in addition to skill. I'm not necessarily saying Van Gundy is wrong; that there is room for interpretation is what makes the hypothetical worth discussing. I do think he's giving past performance too little credit in this regard.

The one problem everyone can agree exists with past performance at a young age is that it is far harder to determine how well it will predict performance against stronger competition. During the Baseball Analytics panel, Arizona Diamondbacks scout Joe Bohringer borrowed from a colleague the analogy of a swingset to describe the balance between scouting and statistics. The farther a player is from the majors, the more valuable subjective assessments will be, both because of the low reliability of statistics at that level and because how a player succeeds or fails is as important as the bottom line. As a player reaches the pros and builds a reliable track record of data, that record tends to take precedence over the good or bad day a scout might have happened to see.

Specificity vs. reliability
As it turns out, the weekend provided a perfect example of one of the issues debated throughout the Basketball Analytics panel. On Thursday, the Miami Heat blew a 20-point second-half lead and lost a close game against the Orlando Magic. On Sunday, the Heat blew a lead and lost a close game against the Chicago Bulls. The pair of losses dropped Miami to 5-13 in games decided by five points or fewer and 2-8 against the other top three contenders in the Eastern Conference. Yet thanks to a series of blowouts against lesser competition, the Heat is still tied for second in the NBA in point differential with the Boston Celtics, just a hair behind the San Antonio Spurs (who were sandwiched in between as part of Miami's 0-3 weekend).

As the playoffs approach, the Heat's record against elite competition is sure to be cited as a reason to pick against Miami. Is it more telling than the Heat's season-long performance? Here, the evidence is mixed at best. When I looked at performance against top teams, I found some indication that it does matter, just as head-to-head records do hold predictive value in the postseason. In contrast, Basketball-Reference.com's Neil Paine found essentially no value to playing well against elite teams when he looked at the numbers in a slightly different fashion. And all of these effects are dwarfed by the two most important keys to winning in the postseason: home-court advantage and overall regular-season performance, as best measured by differential.

This is just one example of many where there is an inherent tradeoff between having a robust sample size and being able to narrow in on a specific issue. One other common example is "clutch" performance. Dallas Mavericks owner Mark Cuban, a fixture on the Basketball Analytics panel, has been a proponent of adjusting player values (most notably the adjusted plus-minus ratings that the Mavericks have long used) for the leverage of the situation. Cuban has a tendency to downplay the possibility that some of these metrics may end up with more noise than usable information because the sample has been chopped down.

Over time, it's possible that could change. With each passing year, for example, the evidence that the Dallas Mavericks and New Orleans Hornets are uniquely capable of winning close games becomes stronger. This remains true despite Dallas' one-point loss to the Memphis Grizzlies on Sunday courtesy of Zach Randolph's game-winner with 0.3 seconds remaining. The Mavericks are 16-8 in games decided by five points or fewer this season; the Hornets are 11-9. My working theory is that Dallas and New Orleans are two of the few teams in the league that primarily rely on the pick-and-pop late in games, leaning on the shooting ability of power forwards Dirk Nowitzki and David West. Maybe everyone in the league should be running pick-and-pop down the stretch?
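The small-sample concern is easy to quantify. As an illustrative sketch (treating close games as independent coin flips is my own simplifying assumption, not anything presented on the panel), here is how often a team with no special close-game skill would match Dallas' 16-8 record by luck alone:

```python
from math import comb

def prob_at_least(wins, games, p=0.5):
    """Probability of winning at least `wins` of `games` when each
    game is an independent coin flip with win probability p."""
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(wins, games + 1))

# Chance a true .500 team goes 16-8 or better in 24 close games
print(round(prob_at_least(16, 24), 3))  # roughly 0.076
```

Roughly one team in 13 would post that record by chance alone, which is why a season's worth of close-game results, on its own, is thin evidence of clutch skill--the tradeoff between specificity and reliability in miniature.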

Emptying out my notebook, here are a few other thoughts from the Sloan Conference:

The weekend's presentation with the most meaningful long-term ramifications for basketball analysis was Sandy Weil's preliminary look into the results of Stats Inc.'s game-tracking. Six teams have purchased the service, getting cameras in their arenas that track the location on the court of all 10 players, the ball and the referees 25 times per second. Weil got access to data from three of the teams and chose to look at what player location tells us about shooting.

For the most part, his results confirmed conventional wisdom, but it's handy to put numbers behind some of our intuition. A tightly contested shot (with a defender three feet or closer to the shooter) drops percentages by 12 percentage points (e.g., from 50 percent to 38 percent). The second threshold Weil found for defense was within five feet of the shooter. This seems to conform nicely to our language about contesting shots: a shooter with no defender within three feet is "open," while one with no defender within five feet is "wide open."

After accounting for defensive pressure, Weil found that shooting percentages dropped by 1.5 percentage points for each additional foot from the basket. In practice, we see that shooting percentages are fairly similar anywhere beyond about three feet from the hoop. That tells us that teams tend to contest shots just enough that attempts from different distances end up about equally viable.

Weil also found that, even accounting for the other factors, players shot better off the pass than off the dribble and that a quick release tended to produce higher percentages. One interesting practical result was Weil's finding that players shoot a low percentage on tips, making it wise to come down with a rebound and go back up instead.

We're just scratching the surface of what this charting data (which Stats Inc. began tracking this season) will eventually yield. Alas, it's unlikely many of the insights will ever be seen by the public because teams are paying for the information and the NBA passed on a comprehensive program. That's a different model than Sportvision used with PITCH f/x in baseball, which has been widely available to fans. In the Baseball Analytics panel, Greg Moore of Sportvision said that the company did not initially anticipate so much leaking, but called the input of the sabermetric community "a blessing in disguise" because of how it has helped explore the possibilities of the PITCH f/x data.

To me, the weekend's most interesting panel was Referee Analytics, which featured a fascinating, diverse crew of panelists. ESPN's Bill Simmons moderated, while Cuban represented the team perspective, and NFL referee Mike Carey offered first-hand experience. Scorecasting author Jon Wertheim and statistician Phil Birnbaum rounded things out with their interpretations of Wertheim's research on the role of refereeing in home-court advantage.

Since the NBA was watching, Cuban had to be careful with his comments. He did make it clear that he believes rules should be enforced without regard to context, meaning a foul in the first quarter is a foul in the fourth quarter. Cuban also praised the NBA's restructuring of its referee oversight and an improved training program that he believes has made young referees more prepared for the league.

I was a little surprised everyone on the panel seemed to object to the idea of a home bias in officiating. To me, that's an interesting aspect of the game that ultimately is fair, since teams earn home-court advantage in the playoffs and have equal chances at home and on the road during the regular season. I guess I like my game a little more human.

Gladwell got most animated when asking why NBA players don't shoot better from the free throw line. A variety of explanations were suggested, ranging from the notion that it's more beneficial to work on other skills in limited development time to the conventional wisdom that some players' hands are too big to be effective at the line. Gladwell takes the fact that NBA players do not all shoot a high percentage on free throws as evidence that they're not practicing enough. To me, it's actually quite the opposite. The general absence of evidence that players improve much on free throws as a class (with individual exceptions, certainly) demonstrates the limit at which practice no longer helps.

It is possible that your take on this issue relates to your own ability at the line. Both Gladwell and ESPN Insider's John Hollinger referenced shooting high percentages despite their otherwise limited games. Personally, I can't make a free throw to save my life and shooting is the most problematic of my many weaknesses. So perhaps that explains why I'm more sympathetic.

On a more serious level, players and coaches need to believe in the possibility of development to justify the work they put in. I think there's a danger, however, in expecting development in skills that are not historically malleable. When players fail to improve, they tend to get blamed not for anything they have failed to do but simply because expectations were unrealistic. That's not fair to anyone.


Kevin Pelton is an author of Basketball Prospectus.