Thoughts on weighting Oscar pool?

UPDATE: I think I’ll do a 1-5 system based on difficulty of category, as decided by you in the poll below. So please vote and let us know what you think the toughest category to predict is. (Also, I screwed up and didn’t include Best Documentary Short Subject in the poll and it’s too late to change it given all the voting, so if you were going to pick doc short as the most difficult, please let me know in the comments section. I can tally up any votes spoken there. Sorry, guys.)

Just a reminder, if you haven’t joined our annual Oscar pool at Picktainment, you can do so here. Any advice on weighting the categories? I’m open to the consensus. Not sure what’s best.

I don’t know anything about Picktainment, but would you be able to do something where you get more points for choosing an upset? As an example…for Best Actor, if Firth were to win and you picked Firth then you get three points, but if you picked Bridges (a longshot) and actually got it right then you get 5 points. Doing something like this makes it so that going for those risky upsets could pay off. But it would also be very complicated to keep track of, and since I know nothing of Picktainment I have no idea if they’re able to score something like that.
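A quick sketch of how that upset-bonus rule could look, assuming just two tiers per nominee (the point values and the "favorite"/"longshot" labels are illustrative, not anything Picktainment actually supports):

```python
# Hypothetical two-tier upset bonus: a correct favorite pays 3 points,
# a correct longshot pays 5. Wrong picks score nothing.
POINTS = {"favorite": 3, "longshot": 5}

def score_pick(pick, winner, tier):
    """Award points only for a correct pick; longshots pay more."""
    if pick != winner:
        return 0
    return POINTS[tier]

# Picking Bridges (a longshot) and being right out-earns picking
# Firth (the favorite) and being right.
print(score_pick("Firth", "Firth", "favorite"))     # 3
print(score_pick("Bridges", "Bridges", "longshot")) # 5
print(score_pick("Bridges", "Firth", "longshot"))   # 0
```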

Sam, I think VFX and Inception is pretty solid. Yeah, it is tricky to classify, because something that inspires splits on, say, Gold Derby or Gurus can feel clear to folks on either side. Should users vote on what could go either way? And toss-ups are different from upsets (e.g. Hurt Locker and Sound Editing, Precious and Screenplay versus Original Screenplay or Editing last year).

I think you should weight them according to how difficult you feel each category is to get right.

I’d say Animated Feature, Actor, Actress, VFX, the Screenplays are easy to call, everything else is somewhat fluid. The biggest headscratchers for me are Art Direction, Score, Foreign, the Shorts as always. I could also see Documentary going several different ways. Pick them as you see them.

I don’t know how possible this is for you to do, but here’s a more objective way you could do it than simply assigning arbitrary difficulty levels to certain categories: look at the statistical spread of the predictions. For instance, if Colin Firth got 98% of the vote, correctly forecasting his win would get you 1 point, whereas if you correctly picked Javier Bardem, who had only 1% of the ballot, you would get 9 points. In other words, you could have a sliding scale like this:

Picking correct with 90%-100% of people also correctly picking: 1 point

Picking correct with 80%-89.99999% of people also correctly picking: 2 points

Picking correct with 70%-79.99999% of people also correctly picking: 3 points

and so on until

Picking correct with 0.00001%-9.9999% of people also correctly picking: 9 points

This way you wouldn’t be guessing the degree of difficulty. You would be using the actual ballots to determine the degree of difficulty after the fact, making it more objective.

The answer, to me, is obvious: wait till AFTER the Oscars to decide the weighting. That will give you the hindsight needed to see which categories were truly close. Yes, I know this may seem like a less fun way to do this thing, but I actually think that keeping people in the dark would make it more challenging, and ultimately more rewarding.

And I disagree with Dude’s approach, and here’s why. The game-theoretic scenario it sets up disproportionately rewards going for upsets. To put it in layman’s terms: if I want to play in an anonymous Oscar pool and my main objective is to win, I would have absolutely no incentive to go with consensus picks. In the desire to stand out from the crowd, I would then be motivated to pick upsets not primarily because I actually believe they will happen but because they offer a significantly larger reward for the risk. When these things are set up poorly, just one correctly (read: luckily) picked upset could make up for three or more incorrect picks. This could still work, mind you, because there’s still thinking involved, but setting up the weighting properly may require too much consideration.

As Dude himself said:
“Doing something like this makes it so that going for those risky upsets could pay off. ”

I think I’d rather see people pick what they really think will actually win and use the more difficult categories (based on the actual results) as tiebreakers.

Your points sound reasonable, Maxim, but I think there are a couple of problems with scoring after the Oscar show:

1. I’m not sure how Picktainment works, but I believe that what you say could render the standings useless. We wouldn’t know how much a certain category is worth until its award is given, right? The thing is that I’m not sure if Picktainment allows you to modify all the categories’ values once the show is over, since it would leave wide room for cheating. If it allows it, sure, it could be an interesting way, but if it doesn’t, then it’d be horrible to start manually adding and subtracting points for all 195 contestants (as of now) depending on whether they got the upset right or wrong in all categories, and THEN make the definitive standings with the modified results. Too cumbersome.

2. Even if we were doing it based on ‘how close were they’, then I still think it’d be required for all of us to establish, before the Oscar show, how much a nominee’s win is worth in every category. That ‘in the dark’ feeling you mention removes any sense of transparency the game has. Also, unless we can all agree in advance on how many points each nominee is worth if he were to win in his category, the scoring system might lend itself to conflict among players.

The more obscure the category, the more points it should be worth since the likelihood of a majority of people picking the same movie in the Doc Short category is lower than in the Best Picture category.

Thanks for the thoughtful response, Andrej. In regards to your first point, I admit I haven’t really considered the way Picktainment works at all (I actually never used that website before). That’s an oversight on my part. I guess I was more interested in coming up with the approach I thought was most effective without giving much thought to its feasibility.

I understand your second point but, paradoxically, it is one of the main reasons why I argued for waiting till after the Oscars to decide on scoring. This is because I am not so much interested in how close someone came to making a correct prediction (which is something that cannot be known with absolute certainty for people who made a wrong pick, even after the ceremony) as I am in knowing which categories contained true (read: unexpected) upsets.

In other words, the goal is not to disproportionately reward people for picking something we think could have an upset over people who picked an upset that no one even saw coming. It’s a very subtle point, but it is something that makes sense to me.

To make the whole thing simpler, I would say make each category worth one point by default and then make every category that produces an upset worth some multiple of 1. (You could go further and base that multiple on the upset’s degree of apparent unlikeliness, but that seems like way too much work.) The only problem here is that, again, it would require adjusting scores based on actual results. That said, any site worth its salt that deals with these types of predictions should allow you to do that type of recalculation for each ballot easily.
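For what it’s worth, the post-hoc recalculation described above is a small amount of code. A sketch, assuming a flat multiplier of 3 for upset categories (the multiplier, category names, and winners here are made-up placeholders):

```python
# Post-hoc scheme: every category is worth 1 point by default; any
# category whose winner turned out to be an upset gets multiplied.
UPSET_MULTIPLIER = 3

def rescore(picks, winners, upset_categories):
    """Recompute a ballot's score after the ceremony, once we know
    which categories actually produced upsets."""
    total = 0
    for category, pick in picks.items():
        if pick == winners.get(category):
            weight = UPSET_MULTIPLIER if category in upset_categories else 1
            total += weight
    return total

winners = {"Best Actor": "Firth", "Best VFX": "Alice in Wonderland"}
picks = {"Best Actor": "Firth", "Best VFX": "Alice in Wonderland"}
print(rescore(picks, winners, upset_categories={"Best VFX"}))  # 1 + 3 = 4
```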

Maybe we could take a poll…have the predictors predict what they think the hardest category is. Sure Best Picture is the big cheese and should be worth the most, I guess, but it’ll be far more impressive for someone to get Costumes and Supporting Actress correct. The most hard-to-pick category–as chosen by the pickers–is worth the most if chosen correctly.

There are, what, 27 categories? The hardest to pick is worth 27 points and the slam dunk (cough, Best Actor) is worth 1.
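That ranking scheme is easy to mechanize: sort the categories by how many "hardest category" votes each got in the poll, then weight = rank. A sketch with made-up vote counts (the category names and tallies are placeholders, not real poll data):

```python
def difficulty_weights(poll_votes):
    """Map each category to a weight: 1 for the easiest (fewest
    'hardest category' votes) up to N for the hardest."""
    ranked = sorted(poll_votes, key=lambda c: poll_votes[c])
    return {category: rank for rank, category in enumerate(ranked, start=1)}

poll = {"Best Actor": 2, "Costumes": 40, "Doc Short": 75}
print(difficulty_weights(poll))
# {'Best Actor': 1, 'Costumes': 2, 'Doc Short': 3}
```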

If I’m reading this right, then, for example… in the case of John Hawkes (an outsider) winning, all those who picked him to win get more points than those who voted for Geoffrey Rush (the more expected upset) could have gotten if he had won instead?

It’s not a bad idea, but I still think that some prior notion of how likely a certain nominee is to win is needed to appropriately score their victories. The polls on the left side could probably serve as a way to see how much an unexpected win should score, with the less-voted nominees netting more points than the more popular ones. I know the polls ask ‘which do you think is most deserving’ rather than ‘which do you think is most likely to win’, but it could still work; we all voted there anyway.

But this would work as long as Picktainment allows changing the score system in such an advanced fashion, obviously.

I may end up just taking the easy route and, on a scale of 1-5, weighting the categories in order of difficulty to predict based on my own take. Though I could try to put together a poll to get your thoughts on that, I guess.

Crap. If you’d pick doc short subject as the most difficult, please let me know by commenting here. I can’t change the poll because the code would be different, but I can tally up any votes spoken here. Sorry, guys.

I’ll say what I said last year: I think the shorts should all be worth one point, regardless of weighting. I realize this ignores the difficulty criterion you’ve instituted, but it’s unrealistic to expect more than a handful of readers to have seen the shorts, so we’re at a severe disadvantage on those categories.

I’ll just repeat what I’ve said earlier: there is a difference between perceived difficulty of predicting a category vs actual difficulty.

If nearly everyone predicted Inception to win Best Visual Effects and it actually went to, say, Alice in Wonderland, it would be silly to give the person who picked it correctly just the minimum number of points simply because it was perceived as an “easy category”. The point being, a category is a category, and some years Best Score could be as hard to call as Best Animated Short.

I understand how weighting the categories after the fact would be impossible (though I insist that for a site like Picktainment it’s a clear oversight), but do we really need a poll to tell us that documentary shorts are hard? Why not just go with a simple one point per category prediction and save the shorts for tiebreakers?

In all likelihood, even under a one-point rule, the only candidates left in contention will have gotten nearly everything correct anyway.

Maxim does have a point: if somebody has the balls to predict an upset in a category widely perceived as locked, only to be proven right, they deserve more than the minimum number of points.

For example, Adapted Screenplay would have been deemed an easy category last year — but anybody who went out on a limb by predicting Up in the Air would lose to Precious should have been duly rewarded.

I can understand that. On the one hand, there’s a lot of evidence that Leo is a frontrunner. On the other, there’s just the opposite. In other words, it’s the perceived closeness that makes it hard. People are torn.

These results are wonky. We’re seriously going to judge Best Supporting Actress, where we have 102,281 precursors but there’s still *a little* mystery, as more up-in-the-air than the shorts, where there’s absolutely no precursor and nobody knows the make-up of the voting body in that category?

Kris/Guy, I think you guys would be much better judges of what’s difficult to predict than the less informed (and main category-skewed) public.

Besides, all of my subsequent comments were responses to others and adjusted for realism (while still maintaining the hip PG-13 rating). There is value in entertaining hypotheticals. It’s not like life is vexing or anything, right?

“I AM THE DECIDER!”

What are you, 12? We went a whole two days without an annoying comment like that on this column, but I knew one was coming…