
New submitter SocratesJedi writes "Like many technically-minded people, I don't have a lot of time to keep up with sports. Nevertheless, trying to predict the outcome of the NCAA men's basketball tournament is a fun activity to share with friends, family and colleagues. This year, I abandoned my usual strategy of quasi-randomly choosing teams and instead modeled the win-loss history of all Division I teams as a weighted network. The network included information from 5242 games played during the 2011-2012 season. From this, teams can be ranked using tools from graph theory, and those rankings can be used to predict tournament outcomes. Without any a priori information, this method placed all of the #1 seeds among its top five teams. It also predicts that at least one underdog, Belmont (#14 seed), will reach the Elite Eight. Although the ultimate test will be how well it predicts tournament outcomes, initial benchmarks suggest 70-80% accuracy would not be unreasonable."
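
The submission doesn't include code, and the submitter doesn't describe his edge weighting or ranking algorithm, so everything below is an assumption on my part. But a PageRank-style walk over a "losses point to winners" digraph is a common way to rank teams from a game network, and a minimal sketch looks like this:

```python
# Toy sketch of ranking teams from a win-loss network.
# Assumptions (not from the article): each game adds an edge
# loser -> winner, and a PageRank-style power iteration lets
# "credit" flow toward teams that beat other highly ranked teams.

def rank_teams(games, damping=0.85, iters=100):
    teams = sorted({t for g in games for t in g})
    n = len(teams)
    # out[loser][winner] = number of losses to that winner
    out = {t: {} for t in teams}
    for winner, loser in games:
        out[loser][winner] = out[loser].get(winner, 0) + 1
    rank = {t: 1.0 / n for t in teams}
    for _ in range(iters):
        new = {t: (1 - damping) / n for t in teams}
        for t in teams:
            total = sum(out[t].values())
            if total == 0:  # undefeated team: spread its mass evenly
                for u in teams:
                    new[u] += damping * rank[t] / n
            else:
                for u, w in out[t].items():
                    new[u] += damping * rank[t] * w / total
        rank = new
    return rank

games = [  # (winner, loser) -- made-up results, not the real 5242 games
    ("Kentucky", "Kansas"), ("Kansas", "Ohio State"),
    ("Kentucky", "Louisville"), ("Louisville", "Florida"),
    ("Ohio State", "Syracuse"),
]
ranks = rank_teams(games)
print(max(ranks, key=ranks.get))  # Kentucky tops this toy network
```

The appeal over plain win percentage is that beating a team that itself beats good teams is worth more than beating a doormat.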

In college I was in a research group that worked on exactly this problem: predicting NCAA tournaments with a graph-theoretic approach. That is exactly how you test the algorithm. And the cited estimate of 70-80% accuracy seems made up. People who research the field know that there is far less certainty than that. At something like 20% confidence, your prediction should be something like 20%-90%.

The problem stems from the fact that we traditionally predict a team will win if it is a stronger or better team, and we use our graph theory to produce relative team ratings. And if each game of the tournament were played over and over again with the winner of the majority going to the next round, then our methods would work even better. As it stands though, we are trying to predict a single sampling from a probability distribution - which will necessarily have error. Informally, the real tournament has upsets (when a weaker team beats a stronger one). Our algorithms can't predict these, the best they can do is gain a better understanding than humans as to which team is better.
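
A quick simulation of that point, with made-up numbers: suppose the stronger team truly wins 70% of the time. Even a model that knows that probability exactly can't beat it on single-shot games.

```python
# Sketch: a model that always picks the stronger team, in a world
# where the stronger team truly wins 70% of games (an assumed rate,
# purely for illustration). Its accuracy converges to 70% -- the
# other 30% are upsets no rating system can call, because each
# tournament game is a single draw from the distribution.
import random

random.seed(42)
p_stronger_wins = 0.70
trials = 100_000
correct = sum(random.random() < p_stronger_wins for _ in range(trials))
accuracy = correct / trials
print(f"pick-the-stronger-team accuracy: {accuracy:.3f}")
```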

Add to that the fact that the tournament is structured hierarchically - a mis-prediction in the first round prevents you from even attempting to predict later games (and by NCAA bracket scoring, that counts the same as mis-predicting those later games). So early upsets can have large negative effects on brackets.
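
A toy four-team bracket makes the propagation concrete (the per-round point values here are my own, not any official pool's):

```python
# Sketch: standard bracket scoring. All picks are locked in before
# any game is played; a wrong round-1 pick means the later game you
# predicted never happens, so those points are automatically lost.
def score_bracket(picks, results, points_per_round=(1, 2)):
    """picks/results: one list of winners per round."""
    score = 0
    for rnd, (p_round, r_round) in enumerate(zip(picks, results)):
        score += sum(points_per_round[rnd]
                     for p, r in zip(p_round, r_round) if p == r)
    return score

results = [["A", "C"], ["A"]]     # round 1: A beats B, C beats D; final: A
perfect = [["A", "C"], ["A"]]
early_miss = [["B", "C"], ["B"]]  # picked B in round 1 and in the final

print(score_bracket(perfect, results))     # 4
print(score_bracket(early_miss, results))  # 1
```

One wrong round-1 pick cost not only that game but the higher-value final pick riding on it.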

Yeah, like when someone intentionally throws a game. As long as people are gambling (somewhere) and money is to be made, there is an opportunity and incentive to cheat. Get your graph theory to account for that!

Or maybe regression analysis is better, like Levitt used to find cheating in sumo wrestling and among US student test-takers in his book Freakonomics. (Awesome book, BTW.) ;)

And the cited estimate of 70-80% accuracy seems made up. People who research the field know that there is far less certainty than that. At something like 20% confidence, your prediction should be something like 20%-90%.

If a coin flip is 50% accurate, then an extra 20% accuracy will give you 70%.

There are only two teams per game, so modeling that with a coin flip makes a lot more sense than modeling it with a die. Random chance will give you 50% accuracy at picking the winner. You have to do better than 50% to have any claim at success at all. The real question is what the GP was talking about when he claimed that success rates between 20% and 90% were more realistic. Why even try if your algorithms can't beat random chance?

You definitely need to define what 'success' is. In your example, 50% is as low as you can go, since a success rate of 20% really implies 80%: you'd merely do the opposite of every pick.

Here in the UK betting is perfectly legal and Betfair (a betting exchange that allows people to take either side of a bet) has a nice API that lets you back or lay most sporting events. People use very sophisticated algorithms to work out the in play odds of football matches, adjusting them second by second as the game goes along.

Good catch. I meant an alpha of 0.2 - which as you note is 80% confidence.
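
For anyone wanting to check the arithmetic, here's a rough sketch of what an 80% interval (alpha = 0.2) for pick accuracy looks like under the normal approximation. The 47-of-63 figure is invented for illustration, not anyone's actual result:

```python
# Sketch: 80% confidence interval (alpha = 0.2) for pick accuracy,
# using the normal approximation to the binomial.
# z ~= 1.28 for 80% two-sided coverage.
import math

def accuracy_ci(correct, n, z=1.28):
    p = correct / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# 47 of 63 tournament games called correctly (~75%) -- made-up numbers.
lo, hi = accuracy_ci(47, 63)
print(f"80% CI: {lo:.2f} to {hi:.2f}")  # roughly 0.68 to 0.82
```

Even with a whole tournament of games, the interval stays wide, which is why a flat "70-80% accuracy" claim deserves skepticism.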

50% is not as low as you go, because of the way brackets are scored. You predict the outcome of *all* the games in the tournament before *any* games are played. Which means that errors in the first round mean that you haven't even properly predicted who is playing in the second round. If the team you picked as winning a game doesn't even play that game, then you automatically lose.

The problem stems from the fact that we traditionally predict a team will win if it is a stronger or better team, and we use our graph theory to produce relative team ratings. And if each game of the tournament were played over and over again with the winner of the majority going to the next round, then our methods would work even better. As it stands though, we are trying to predict a single sampling from a probability distribution - which will necessarily have error. Informally, the real tournament has upsets (when a weaker team beats a stronger one). Our algorithms can't predict these, the best they can do is gain a better understanding than humans as to which team is better.

It's not just the single-game problem - even if you set aside upsets, the "stronger" team doesn't always win because, as the coaches have been saying for years, it's about matchups. Teams have strengths & weaknesses - style of play, offensive & defensive skill sets of individual players, etc. A team with a tremendous front court but weak ball handlers is more likely to lose to an inferior team that has a high-pressure trapping defense, whereas it might beat a stronger team that doesn't use the same

Everyone knows who the big names are who are likely to make it to the final four. It's predicting how things will go at the middle and bottom, where teams are much more likely to be evenly matched, that's really hard.

Going off of the 2011 tournament for any generalized method of picking games is a bad idea. It was a particularly chaotic tournament for a variety of reasons. Having a system that failed last year is potentially a good thing because last year didn't work like the majority of tournaments do.

That may work for pro sports, but not for college sports. In fact, because teams usually lose their nucleus after winning it all (players declare for the draft), it is rare for a team to make it to the final game two or more years in a row.

I disagree - how good a team is can vary wildly year to year. Coaching changes, injuries, age, experience and so on can play huge roles in how a team performs, especially on the collegiate level, where there is so much growth between juniors and seniors in terms of development. This is less so in professional sports, but still relevant.

Yes, but last year's tournament had two small schools, Butler and VCU, in the Final Four. While Butler made it to the championship two years in a row, they were a surprise both times. VCU had never made it that far in the tournament, and some TV pundits said they should not have been selected for the tournament at all when the bracket was announced. VCU got to the Final Four with those same pundits predicting, before every single game, that they would lose.

Some problems I see. Disclaimer: I know there's a margin of error here, as the author said, and I know my observations are based largely on anecdotal evidence, making them inferior. But if sports were so easy to predict, there would be no sports gambling.

- That's probably too far for Belmont; a #14 has only ever gotten as far as the Sweet 16, twice (Cleveland State '86, Chattanooga '97). The lowest seed to make an Elite Eight is Missouri in 2002, as a #12. Belmont is actually going to be one of the more popular upset picks, but they would have to upset two far superior teams in three days.

- It's a bit too "chalk". #1 seeds generally survive the first two games (undefeated against #16's, 55-14 v. #8's, 59-6 v. #9's), but the #2's have it worse (only four losses v. #15's, but 58-21 v. #7's and 29-21 v. #10's). I know two #12's, a #13 and a #14 doesn't seem like "chalk" but historically it's much more likely that we'll see more #5-7 or #10-11's. To have only one #2 not make the Elite 8 and all the #1's would be almost unheard of.

- A #12 beating a #5 happens almost every year, but three of them doing so in one year seems unlikely, as #12's are only 39-89 overall.

- Some of the other first-round matchups seem a bit improbable. It has every #6 and every #7 winning, for example.
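
The 39-89 figure above is enough to put a rough number on the three-#12-upsets scenario, if you're willing to assume the four #12 vs. #5 games are independent with the same historical rate:

```python
# Sketch: historical #12-over-#5 record is 39-89, so p ~= 0.30 per
# game. Treating the four #12 vs #5 games in a year as independent,
# the chance of three or more #12 wins in a single tournament:
from math import comb

p = 39 / 128
prob = sum(comb(4, k) * p**k * (1 - p)**(4 - k) for k in (3, 4))
print(f"P(3+ #12 upsets) ~= {prob:.3f}")  # about 0.087
```

Under those assumptions it's roughly a 9% shot - possible, but not something you'd put in a bracket.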

I didn't read the article (yet), but I put together a game result predictor a couple of years ago that I ran against the tournament field with about an 83% success rate for the whole tournament. It was in the 93% range for the first two rounds. My algorithm utilized season long team statistics to get a team's baseline and then incorporated strength of schedule and seeding components. Just like you mentioned about how far a team has historically progressed from a specific seed, I used historical analysis of

It is not hard to create a model that works perfectly on observed data. But then you run into the problem of overfitting [wikipedia.org], and your model loses any general predictive power it had. To counter overfitting you need separate datasets for training and testing; otherwise the model will depend on random details in the data. The proof of the pudding is in the eating, and if your model is good enough, you should be able to make money on sports betting with it.
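
A minimal demonstration of the overfitting point, on entirely synthetic data: a "model" that just memorizes its training games looks perfect in-sample and is a coin flip out-of-sample, because there was never anything generalizable to learn.

```python
# Sketch: memorization as the extreme case of overfitting.
# Outcomes here are pure noise, so the memorizer's perfect training
# score is an illusion that vanishes on held-out games.
import random

random.seed(1)
# Each game: (game_id, outcome). 5242 matches the season size in
# the summary, but the data itself is random.
games = [(i, random.choice([0, 1])) for i in range(5242)]
random.shuffle(games)
train, test = games[:4000], games[4000:]

memory = {gid: out for gid, out in train}  # memorize the training set
def predict(gid):
    return memory.get(gid, 0)              # guess 0 when unseen

train_acc = sum(predict(g) == o for g, o in train) / len(train)
test_acc = sum(predict(g) == o for g, o in test) / len(test)
print(f"train {train_acc:.2f}  test {test_acc:.2f}")  # 1.00 vs ~0.50
```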

if your model is good enough, you should be able to make money on sports betting with it.

Not against people whose model is just as good, and not (over the long term) against any professional gambling enterprise (legal casino or bookie) set up to profit whether you win or lose. A professional gambling outfit either takes a cut that negates any statistical advantage of a good predictive method, or it sets the odds so that it makes back what it lost to you last time when you lose to it next time. The only people who make money reliably in gambling are those who have found a suck

Not quite. Picking winners =/= winning at gambling. Margin of victory, aka the spread, comes into play. That is a bit harder to account for in these types of situations involving so many human variables. Granted, being able to identify some potential upsets could allow someone to bet big on those and potentially become rich.

And then he has to persuade someone to take the bet. You can be sure that betting establishments pay someone to work out the odds at least as well as he does. It's okay for informal betting among friends, but if you're trying to make money, then fleecing your friends only works a few times before you run out of friends...

Ah, you would think that the casino sports book odds were the most accurate available and determined only by scientific study of the sports.

BZZZT! Wrong. Casinos need to make a profit. So they determine the *initial* odds by studying the sport, but then change the odds in reaction to the bets that are placed. They try to have equal amounts on both sides of a bet. They pay less to the winners than they get from the losers.
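
That margin is visible in the odds themselves. With the classic -110/-110 line (decimal odds 1.91 on both sides), the implied probabilities sum to more than 1, and the excess is the house's cut:

```python
# Sketch: a bookmaker's two-sided implied probabilities overshoot
# 1.0; the overround (vig) is the margin the book keeps whichever
# side wins, provided the bets are balanced.
def implied_prob(decimal_odds):
    return 1.0 / decimal_odds

overround = implied_prob(1.91) + implied_prob(1.91) - 1.0
print(f"house margin: {overround:.3%}")  # about 4.7%
```

So even a bettor who genuinely picks winners better than chance has to clear that ~4.7% hurdle before showing a profit.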

What's the point of pointing that out? Well, you have some pro gamblers who actually do m

March Madness is notoriously hard to predict, partly because of the number of teams involved and partly because of the single-elimination format that I love so much. It's prevalent in few sports and makes each game mean a lot more, also opening the door for Cinderella to take her 15 minutes of fame. Seven-game playoff rounds like they have in baseball and the NBA tend to nullify those outliers. I honestly think that's a big reason for the success of the NFL too - every game and every play means a hell of a lot more when the best possible record is 19-0.

You know, for stereotypical nerd behaviour like communicating to each other in incomprehensible jargon and obscure references that other people don't get, obsessive behaviour, dressing up in ridiculous costumes for gatherings, etc, I've come to realize that nothing beats a hard-core sports fan.

You're some way behind the curve if you want to make money sports betting on this. There is an extreme non-stationarity problem with basketball teams which inevitably means methods using past statistics will never be that successful. I know of professional basketball modellers who pay an army of students and the like to watch college games while clicking on hand-held devices to record second-by-second data on passes, interceptions etc. This data is then fed into their models and provides a very accurate pic

His statistical reasoning is always well described, so that if you disagree with his results, at least you understand why you disagree. He's got "picks" [nytimes.com] and a description of the system [nytimes.com] used to generate them.

The original article is an interesting network analysis exercise, but it is really limited by its assumption of no a priori quality data. (Any time you beat Kentucky or North Carolina or other perennial powerhouses, that's almost always a quality win.) Sagarin and LRMC follow similar logic, but without a

I "won" an office pool once without even playing. I told the guy that I could win the whole thing, but, as I didn't want to take their money through gambling, I would just tell him my picks after it was closed.

The problem? He was giving points to each "winner" based on their number. If a #1 won, you got one point. If a #12 won, you got 12 points. I just picked 9-16 the whole tournament through. (He admitted that they all would have hated me had I played.) After the first round, with 4 upsets, there w

Regarding the bracket: the four #1 seeds march along, undefeated, until they meet in the Final Four. While this can happen, it seems like a trivial and unsophisticated result to me.

The problem with these predictions is, of course, that this is the most likely scenario. Enough other things can happen that it is probably worse than a 50-50 shot, but there isn't any single scenario that is more likely. Really, all any algorithm can do to beat picking the better seed every time is to find spots where teams are seeded higher or lower than they should be, and the very top and bottom of the list are probably not the most likely spots for that to happen.

There's too much data and too many variables. Even just inputting all the known, public data might significantly improve the accuracy, but there's also lots of unknown private data that can influence games. Algorithms like this can't account for things like the coach's son getting killed in an automobile accident the night before a game, or the star center getting hit with a bad flu. And when you make it complex enough to take in all that data, it still has to get all that data somewhere, which means it has

... modeled the win-loss history of all Division I teams as a weighted network. The network included information from 5242 games played during the 2011-2012 season. From this, teams can be ranked using tools from graph theory...