I think your scores are accurate. I was pretty happy with most of the blog-based reviews of 6SB, and I know via email it was at least one person's favorite game...but I believe this was more than balanced out by a few reviews that didn't disclose a score but seemed pretty negative, and the constant low-level madness induced by the game's stingy parser. I'd be happy with a 6 or a 6.5, and thrilled with a 7.

Based on the far too limited available data (2 public ratings for each game) the scores might be:

Anno 1700: 6
Six Silver Bullets: 6
Stone of Wisdom: 5.5

Resulting in 30th, 30th and 37th place respectively, compared to last year's results. I'm probably completely wrong, but it is fun to guess.

This is where I have a "problem"

First of all, let me say that I feel a 30th place out of 77 isn't bad, and I'm pretty happy if that's the way the score goes. However, I can't help but feel that there's a "problem" regarding the way the scores are set.

If a game only gets 3 reviews (and scores), how can it be compared fairly with other games that receive many more?

I haven't checked, so I don't know if people are scoring games without writing reviews (is that even possible?), but I'm just wondering: are ADRIFT games (in this case) at a disadvantage from the beginning?

Not that it matters much. I'm just curious, trying to understand how the scoring system in the Annual IFComp works. I'm already working on my entry for the 2019 Annual IFComp.

There were 247 judges this year, far more than there are reviewers, so you will most likely get more ratings than reviews.

The scoring system is based on the average score: if you receive ten ratings of 6, your average score will be 6, and somebody who receives 100 ratings of 6 will also have an average score of 6. So it is not necessarily an advantage to get many ratings. However, there may be a rule(?) that if you get too few ratings you will not get a score at all. I am concerned about that, even though I don't think it has ever happened in IFComp history.
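To make that concrete, here is a tiny sketch of the averaging (the rating numbers are made up purely for illustration, not taken from any real game):

    # Hypothetical ratings for two games: one with few judges, one with many.
    few_ratings = [6, 6, 6]        # 3 judges, all scoring 6
    many_ratings = [6] * 100       # 100 judges, all scoring 6

    def average_score(ratings):
        """Mean of all the ratings a game received."""
        return sum(ratings) / len(ratings)

    print(average_score(few_ratings))   # 6.0
    print(average_score(many_ratings))  # 6.0 -- same average despite far more judges

The ranking position comes from that average alone, so more ratings don't raise the score; they only make it less noisy.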

There's always potential for bias with small sample sizes, but it is just as likely to work in your favor as against you. One of the nicest responses I got was from someone who, I suspect, does not play much interactive fiction. If that had been one of only 3 reviews, he'd have seriously pulled my score up.

It's not gonna be a problem this year. This is the busiest year in a while, it seems, so I bet everyone has plenty of reviews, even on more onerous systems like ours.

Prune, I have started work on my submission as well, though it will be a very different sort of game than 6SB so I don't know how much the lessons I've learned this year will carry over. I look forward to running again with y'all next year.

ralphmerridew wrote: Judges just have to put down a score. Writing a review takes a lot more work.

I realize that now; I somehow thought that the two went together, though. It would be interesting to find out how many people actually scored your game, meaning that they must have played through it, at least partly.

As far as voting went, I think the lack of online play really hurt ADRIFT. Only four games got fewer than 30 votes in the whole competition, and three of them were the ADRIFT entries; yes, they were the lowest-voted games of all. Quite a few games got more votes on their own than the ADRIFT entries combined. With no online play and no way to play the games on anything but Windows, that's a sizeable portion of potential judges who couldn't even play the ADRIFT games. Whatever else needs fixing here, online play should be the number one priority.

My big project over the winter holiday is going to be turning one of my old computers into a permanent web server, to better host my stuff and to help with some professional projects. I would also like to try hosting my ADRIFT games through it (though I don't know much about how).
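In case it helps anyone, making the game files downloadable from a home server is the easy part. Here is a minimal sketch using Python's built-in http.server module; the "games" directory and port 8000 are just placeholders for whatever your setup uses. Actual in-browser play is a separate problem, since you would still need some kind of web-based ADRIFT runner on top of this, and I'm not sure what exists for that.

    # Minimal static file server for hosting downloadable game files.
    # Serves everything under ./games at http://<your-server>:8000/
    # (directory name and port are placeholders -- adjust to your setup).
    import functools
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = functools.partial(SimpleHTTPRequestHandler, directory="games")
    HTTPServer(("", 8000), handler).serve_forever()

For a permanent server you would more likely put something like nginx or Apache in front of this, but the idea is the same: the files just need to be reachable over HTTP.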

Next year, if a better solution has not been worked out, I would like to offer any IFComp entrant the use of this server to run their game, because I agree with david that the lack of online play did hurt us.