The methodology for the finals model is described here. Within a margin of error of +/- 3%, the model ranks contestants with 87% accuracy. Remember how probabilities work: somebody with a not-safe probability of just 0.25 will still land in the bottom 3 about one time in four. Please do not comment that the numbers are wrong; they are probabilities, not certainties or even claims. Do not gamble based on these numbers.
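To make the 0.25 example concrete, here is a quick simulation. This is purely illustrative and not part of the actual model: it just shows that an event with probability 0.25, repeated many times, occurs roughly one time in four.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

TRIALS = 100_000
P_BOTTOM3 = 0.25  # hypothetical "not-safe" probability from the example

# Count how often a 25%-likely event actually happens over many trials.
hits = sum(1 for _ in range(TRIALS) if random.random() < P_BOTTOM3)
rate = hits / TRIALS
print(f"observed rate: {rate:.3f}")  # should be close to 0.25
```

The point is that a single week where a "75% safe" contestant lands in the bottom 3 tells you almost nothing; only the long-run frequency is a fair test of the model.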

Names in green are predicted safe. Names in red are considered at risk of being in the bottom 3. Names in yellow are undecided. Any person not in green is not considered safe by the model. The most probable bottom 3 is Ben, Dexter, and C.J. That said, it would not be shocking to see anybody on the list in the bottom 3.

I’ve addressed the model’s apparent poor accuracy this year in a couple of posts, but let me reiterate here. The model uses history to calibrate its expectations, which works well when the present resembles the past. But when the rules change a lot, as they have this year, history becomes a weaker predictor of the future. At this point, I don’t yet have enough information to tell how different this year is. Previous weeks have seen events that the model called improbable, but improbable is not impossible. Once a few more shows give us better statistics, I will revisit this topic in great detail.

Nobody this week has a crazy high or crazy low chance of being in the bottom 3. Majesty, whom most people (including myself) thought was awful, has great poll numbers for popularity, and has only about a 10% chance of being in the bottom 3 if history is to be trusted. Ben’s numbers are really bad, but he has already proven resilient this year. He’s about 50/50 to be in the bottom 3.

The two WNTS approval ratings that stick out in this list are those of Majesty and Sam. So why are they so much less likely to be in the bottom 3? Because they appear to have a base of popularity: people who will vote for them no matter what they do on the show. That popularity is measured by Votefair. However, I’m dismayed by how few people are voting in that poll this year. Votefair already has a huge sampling bias, because respondents are self-selected rather than randomly sampled. As the total number of voters drops, the sampling error grows even larger.
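To see why a shrinking poll is such a problem, here is the textbook standard error of a polled proportion. This is a simplification under an assumption the Votefair poll doesn't actually meet (simple random sampling), so the real uncertainty is even worse than these numbers suggest; the popularity figure used is hypothetical.

```python
import math

def poll_standard_error(p: float, n: int) -> float:
    """Standard error of an estimated proportion p from n respondents,
    assuming simple random sampling (self-selection makes the true
    error larger than this)."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical example: a contestant polling at 10% popularity.
# The ~95% margin of error balloons as the number of voters shrinks.
for n in (2000, 500, 100):
    margin = 1.96 * poll_standard_error(0.10, n)
    print(f"n={n:5d}  margin of error ~ +/-{margin:.1%}")
```

Even in this best-case math, cutting the poll from 2000 voters to 100 widens the margin of error by more than a factor of four, which is why low turnout in the poll makes the popularity inputs so much noisier.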

Dialidol is once again out for the count. That service works by measuring busy signals on the voting phone lines to estimate whose lines are being called the most. Tonight, Dialidol detected no busy signals on any of the contestants’ phone lines. That variable has been nulled in the projection above, meaning data that was available in last year’s predictions is not available now. Internet voting may have simply killed Dialidol, but I’ll withhold judgement until a little later in the year. The Dialidol forums are mostly quiet about the issue, but it is addressed in one post.