Vacation Time – Afterthought 36

Jon reports on the quality of Ireland and their criminally small sodas, Jef waxes rhapsodic about a crazy awesome band, and listener questions are asked and almost answered before diverting to … I don’t know, probably complain about Star Wars. That sounds pretty likely, right?

17 responses to “Vacation Time – Afterthought 36”

I’m surprised you didn’t mention Lord of the Rings as another bad place for RPG settings. My understanding of it is that licensing basically restricts players to War of the Ring Era, which is almost exactly what you mean by “Player actions are meaningless.”

Both 3d6 and d20 can be translated into percentage chances of success, but the difference between them is that +1 on a d20 is always 5% and +1 on 3d6 varies between ~0.5% and 12.5% depending on what the original value was. For that reason, d20 is way easier to analyze from a statistical perspective; it’s more math-friendly.
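Those break points are easy to check by brute force. Here's a small Python sketch (mine, not the commenter's, and assuming a roll-equal-or-over target number): it enumerates all 216 outcomes of 3d6, where the value of a +1 bonus is exactly the probability of rolling one point under the target.

```python
from itertools import product

# Tally the 216 equally likely outcomes of 3d6 by their sum.
counts = {}
for dice in product(range(1, 7), repeat=3):
    counts[sum(dice)] = counts.get(sum(dice), 0) + 1

# With a +1 bonus the effective target drops by one, so the gain in
# success chance is the probability of rolling exactly (target - 1).
for target in (4, 11, 18):
    gain = counts.get(target - 1, 0) / 216
    print(f"target {target}: +1 adds {gain:.1%}")
```

The swing runs from 1/216 (~0.5%) at the extremes up to 27/216 (12.5%) in the middle, matching the range above; the same loop for a d20 would print 5% every time.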

Of course, you’ve previously expressed that you consider a skill with only an 80% chance of success to be too unreliable to ever use, so if you ignore everything with less than 90% chance of success as being irrelevant since nobody will ever use them, then a d20 only has three interesting values and 3d6 has five interesting values. A d20 system is equally useful at all probabilities, but a 3d6 is most useful at the extreme high end of probability.

That bit about success chances may have had to do with hard caps before appreciable penalties kicked in, but I’ll have to go back to 5th Age to check that.
Though I will say that a d20 is easier to analyze from a probabilistic perspective. And the results are that it’s something of a newbie probabilist’s trap in a TRPG where you’re looking for results that are moderately unpredictable and moderately dispersed. The former is a measure of how difficult it is for you to narrow down an outcome, while the latter is a measure of how wide the results tend to be. The two are related but not identical, as a die that only shows 1 or 20 is predictable (for obvious reasons) but widely dispersed.
(We’re looking for moderate unpredictability and dispersion so that the randomness of a roll vs. circumstances/character skill have roughly equal contributions. If the roll doesn’t do much then there’s little point in actually picking up the dice and tossing them, but if the roll does too much then inversely there’s little point to caring about a character’s abilities.)
Anyway, rolling a d20 has a relatively large amount of dispersion, as measured by the standard deviation. 3d6 has a standard deviation of ~3, while 1d20 instead has a standard deviation of ~6. Getting around that dispersion so that character effort matters actually requires a fair bit of tinkering under the hood. I’ve seen it done with tight control/guidelines about what sorts of modifiers are given out, specific rules on caps and floors for modifiers and targets, exhaustible bennies to push a result around when the stakes are high, or simple acceptance of that dispersion. (The last is what OSR games and older versions of D&D tend to do, by keeping things sufficiently quick that it’s easy to recover as a player from mistakes like a slip of the tongue or character death.) This link is a quick run of a few distributions, all of which have a mean of 10.5 but nonetheless function differently: http://anydice.com/program/8d4d
A d20 also has a lot of unpredictability. How much? The maximum possible for a twenty-outcome distribution, that’s how much. See, Shannon entropy is a weird concept in information theory that boils down to “every message is a draw from a probability distribution, and entropy measures how unpredictable that draw is; the more unpredictable, the more information it carries”. (A three-million digit string of 1s and Tolstoy’s War and Peace are about the same length, but we’d all agree that the latter has more information.) And the cool thing about it, unlike Kolmogorov complexity (which is mathematically neat but also nearly useless in practice, since it’s uncomputable in general), is that you can apply it to find the entropy of a distribution with some straightforward calculation. Anyway, uniform distributions like the d20 can be proved with some relatively simple tools to have maximum entropy for their number of outcomes. The effect is that if you’re rolling that d20 you’ve got no clue as to whether it might come up as a 1, a 7, or a 17. Whereas if you’re rolling 3d6, you know that it’ll “likely come up within 8 to 13” or “probably be at least 9”.
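The entropy comparison is quick to compute. A Python sketch (my own, using the standard Shannon formula H = -Σ p·log2(p), not anything from the comment):

```python
from itertools import product
from math import log2

def entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero terms.
    return -sum(p * log2(p) for p in probs if p > 0)

# 1d20: twenty equally likely outcomes.
d20 = [1 / 20] * 20

# 3d6: tally the 216 equally likely three-die combinations by their sum.
counts = {}
for dice in product(range(1, 7), repeat=3):
    counts[sum(dice)] = counts.get(sum(dice), 0) + 1
d3d6 = [c / 216 for c in counts.values()]

print(f"H(1d20) = {entropy(d20):.2f} bits")   # log2(20), the max for 20 outcomes
print(f"H(3d6)  = {entropy(d3d6):.2f} bits")  # lower: the bell curve is more predictable
```

The uniform die comes out at log2(20) ≈ 4.32 bits, the maximum possible; 3d6 lands around 3.6 bits, concretely less unpredictable.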
TLDR: Probability moves in mysterious ways.

As far as I was aware, the topic only pertained to binary pass/fail systems, where the dispersion doesn’t matter since all it does is affect the probability – whether the dice come up 3 or 14 is irrelevant when the target number is 15; we know that the outcome is either pass or fail, and the only question is which one.

I don’t think anyone is arguing for a d20 where the high degree of dispersion would matter. Nobody is suggesting a d20 (or d%) for damage rolls, except possibly Kevin Siembieda.

Dispersion does matter on binary pass/fail systems, though as you’ve noted not for any sort of degree of failure. Rather, moderate dispersion is important on the back end, because then modifiers can mean more in presumably standard play. A +1 on a d20 always means +5% to do something, but +1 on 3d6 in the middle of the bell curve can mean somewhere between +9% and +12.5% to do something. (This is why I am totally fine with “just” giving a +1 to something in my Fantasy Age game, because it still means a fair bit.)

It really comes down to whether you want a bonus to help average chances more or less than already-extreme chances. If you need an extreme result on 3d6, then a +1 isn’t going to change that by an appreciable amount, even if +1 is a meaningful bonus to someone who would succeed half of the time. In actual play, the most common situation is a character with a significantly above-average chance of success; if you only had a 50% chance of succeeding at a task, most players wouldn’t even make the attempt (especially if there’s any consequence for failure).

Although I’m inclined to believe that the whole thing is just a matter of cognitive bias, where people expect a line of maximum probability at every step, and they get thrown when their 80% sure-thing happens to fail. A bell curve feels better because it reinforces that bias, since a roll of 15 or less on 3d6 occurs far more often than the 15 out of 18 numbers would suggest.
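That last figure checks out. A quick sketch of my own, counting the 216 equally likely rolls:

```python
from itertools import product

# All 216 equally likely 3d6 rolls.
rolls = [sum(d) for d in product(range(1, 7), repeat=3)]

p_at_most_15 = sum(1 for r in rolls if r <= 15) / len(rolls)
print(f"P(3d6 <= 15) = {p_at_most_15:.1%}")  # vs 15/18 = 83.3% on a flat scale
```

Only the 10 combinations totaling 16–18 miss, so 15-or-less comes up 206/216 ≈ 95.4% of the time, well above what the flat 15-out-of-18 intuition suggests.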

Generally I’d say that you should treat extreme values as just that – outliers – and not require them for most play, but also not make extreme modifiers do much more. Like I said above, the more likely you allow outliers to be, the more dispersed your RNG will be, which in turn makes modifiers in the standard range less meaningful.
But that’s not totally entwined with success/failure rates. You can have a game in which success chances aren’t that great, but if you want it to be good then you’ll have to ensure that failure states are still interesting. Such might be (depending on the situation) damage-on-a-miss, progression on some metagame timer that explodes in your face at 0, a twist that makes things weirder for all participants, accumulating damage on you, or whatever else. Whatever it is, outcomes stemming from failure states should move the narrative along so that it’s not just a closed loop. (This was the issue with some of the stuff in Fifth Age, that even at the skill cap you might only have a ~50% chance to hit after penalties from your opponent’s defense kicked in. And 50% doesn’t sound that bad, but keep in mind that failing to hit means that you’ve just done nothing for the round.)
“Although I’m inclined to believe that the whole thing is just a matter of cognitive bias, where people expect a line of maximum probability at every step, and they get thrown when their 80% sure-thing happens to fail.” Ain’t that the truth. Like I said, probability moves in mysterious ways.

3d6 is a bell curve, d20 is linear. d20 will have more extremes of rolls, a big swing that makes it as easy to roll a 10 as a 20. Bell curves make it more likely to roll the average result of the dice (a d6 rolls 3.5 as the average, so 3d6 will roll an average of 10.5; yes, you cannot roll a 0.5, but the calculated average works out that way).

Linear is best for games which are a bit more cinematic… for good and bad. You will get more highs and lows, so you get more hits against high-defense targets with d20 than 3d6. However, bell curves are better for more “realistic” systems.

I kind of liked running d20/DnD using 2d10. It is still a bell curve, but not as pronounced as 3d6. The averages are nearly identical too: d20 and 3d6 both average 10.5, and 2d10 comes in just a touch higher at 11.
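Here’s a quick enumeration of all three (my own sketch, nothing from the comment); 2d10’s dispersion really does land between d20’s and 3d6’s:

```python
from itertools import product
from math import sqrt

def dice_stats(n_dice, sides):
    # Enumerate every equally likely roll of `n_dice` dice with `sides` faces.
    rolls = [sum(d) for d in product(range(1, sides + 1), repeat=n_dice)]
    mean = sum(rolls) / len(rolls)
    std = sqrt(sum((r - mean) ** 2 for r in rolls) / len(rolls))
    return mean, std

for label, (n, s) in [("1d20", (1, 20)), ("2d10", (2, 10)), ("3d6", (3, 6))]:
    mean, std = dice_stats(n, s)
    print(f"{label}: mean {mean:.1f}, std dev {std:.2f}")
```

The standard deviations come out around 5.77, 4.06, and 2.96 respectively, so 2d10 is a genuine middle ground between the flat d20 and the tight 3d6 curve.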

System mastery recommends that you call a family member or trusted friend before interacting with a Totino’s Party Pizza. System Mastery is brought to you by Totino’s Party Pizza. “Totino’s: The World Won’t Even Notice You’re Gone.”

I have no problem with running a game based on a movie or TV show and letting the characters make changes, creating an alternate timeline that diverges from the source material. I do not know of any games that have done this except Traveller – to itself! In the regular Traveller timeline the emperor was killed and the Imperium descended into a civil war with a catastrophic end. When Steve Jackson Games did their Traveller line, it diverged from the original in that the assassination failed and the civil war never happened.

I once sat in on a d6 Star Wars game where the characters were Imperial agents hunting the movie heroes. I was there when they decapitated Princess Leia. They had killed most of the original trilogy heroes.

This game had that thing you guys hate about licensed games: I was there just one night and one of the players used the “he’s got a thermal detonator” move three times! I gathered that he used it in almost every non-combat encounter, even with shopkeepers. Thank gawd I did not join that game. I like hanging with friends and having fun, but not that much. Although the beheading came as a nice surprise and kind of made everything bearable.