Tab Completion, by Tab Atkins Jr. (jackalmage@gmail.com, <a href="http://www.xanthir.com">http://www.xanthir.com</a>). All content is published in the public domain, or optionally may be licensed under the CC0 license.

<h1>Fast/Slow D&amp;D Initiative System</h1>
<p>Published 2018-12-18T00:46:17+00:00; updated 2018-12-21T20:17:09+00:00. <a href="http://www.xanthir.com/b4y31">http://www.xanthir.com/b4y31</a></p>

<p>D&amp;D 5e's initiative system is more-or-less unchanged from much earlier editions. Every character has an "Initiative Bonus"; at the start of combat everyone (including all the DM-controlled enemies) rolls a d20 and adds their initiative bonus; then everyone takes their actions in descending order of their rolls. When everyone's gone once, it "goes back to the top" and repeats in the same order.<p>This... works. It gives you an ordering and lets you represent a faster character by giving them a higher initiative bonus... sorta. But it has several problems.<p>First, your initiative bonus <i>just doesn't matter that much</i>. A d20 has a lot of variation. Over the course of many rolls, you can distinguish between, say, a +2 and a +5 bonus in how many times you succeed. Initiative simply isn't rolled that often, tho, so a character with +5 to initiative won't <i>feel</i> like they're actually much faster than a character with +2 to initiative.<p>Second, in practice it's a rather slow, clunky way to start a battle. A perhaps dramatic build-up to combat suddenly screeches to a halt as the DM demands initiative rolls from everybody, rolls a bunch of initiatives for their monsters, and then sorts everything out. This can easily take several minutes! (It doesn't <i>seem</i> like it should - it sounds easy and quick - but theory and practice don't align well here. In practice, it's pretty slow.) Only after all that's done can combat, and fun, actually begin.<p>Third, once the initial initiative roll has happened, and the first round has finished, initiative... doesn't matter anymore.
The order just determines who gets to strike first; after that, every round is the same for everyone: you go, then <i>everyone else</i> gets a turn, then you go, etc.<p>Fourth, while players aren't <i>technically</i> locked into their initiative result (they can delay and take their turn later if they need to), in practice players don't (for various practical reasons). This restricts what sort of combos people can use; it might be more effective to let the Fighter rush forward and have the Cleric hold back to see if they need to drop some heals or just do cleanup, but if initiative puts the Cleric first, generally they'll just go first. This can get very frustrating!<p><a href="https://www.tribality.com/2014/12/19/dd-5e-combat-initiative/" title="">This article</a> presents a better version of initiative that both simplifies things <i>and</i> gives players more meaningful options. Their write-up didn't handle some corner cases well, tho, so I've reproduced and cleaned up the idea for my own purposes:<h2>Fast/Slow Rounds</h2><p>The core idea is that initiative is done away with. Instead, each round, players announce whether they'll be taking a "fast" or "slow" round. First, all the players taking a fast round take their actions immediately, in whatever order they decide amongst themselves.<p>Second, the DM decides which monsters are taking the "fast" or "slow" round - fast monsters take their turn now, in whatever order the DM wants. (Typically, all the "mooks" will go in the fast round.)<p>Third, all the players who chose to take a "slow" round take their turns, in whatever order they wish. However, because they held back, examining the battlefield and waiting for an opportune moment, they can add advantage or disadvantage to a single roll anyone makes during their turn.
(They can give themselves advantage on an attack roll, or give an enemy disadvantage on a saving throw, or provoke an Opportunity Attack and give the enemy disadvantage on their attack, etc.)<p>Fourth, the "slow" monsters take their turn, and also get to impose advantage or disadvantage to one roll during their turn. (Typically, the "significant" enemies will go here.)<p>Then the round is over, and the next round begins, with players once again choosing to go fast or slow.<p>That's it! (Except for some of the additional quirks, noted later in this post.)<h2>Benefits</h2><p>In practice this ends up having a <i>lot</i> of benefits over traditional initiative.<ol><li>Because there's no big "initiative list" setup at the beginning of the combat, you can jump straight into combat with no delay. Just ask the players who's going fast, and you're off to the races. This has a surprising psychological effect on players, maintaining the drama that was built up pre-combat very effectively!<li>Because the players can adjust when they take their action each round, they remain engaged thru more of the round, rather than just perking up on their turn and checking out a bit while they wait for everyone else to go. They plan out their actions along with the rest of the party, setting up combos and adjusting things for optimal safe ordering. You end up getting a lot more interesting teamwork out of people as a result!<li>It's so fast! Even on a round-by-round basis, this really does make combat move faster. Because the players are working together and going all at once, their plans don't collapse as much due to enemies taking actions between them (and players don't simply <i>forget</i> what they were going to do, which is a significant danger normally...). 
As such, players don't have to reassess the battlefield before each of their turns - they know exactly what's changed, since it <i>just happened and was part of the plan</i>.<li>No more (or at least, much less) forgetting about people! It's remarkably easy to occasionally skip people in the initiative when using it normally; if a non-active player asks a question, it's easy to slip back into the order as if they'd just gone. Since the players and enemies all go in just two large groups, tho, it's much simpler to track everyone - the players will remember themselves, and enemies become dramatically less fiddly to track.<li>Slow rounds are <i>amazing</i> for players who want to get off a big dramatic action with less chance of whiffing. Similarly, they're great for making your Big Bad actually threatening, rather than several rounds of "They swing, and... they miss. Again. Your turn."</ol><h2>Fiddly Details</h2><p>While the core rules above are trivial, there are a few additional details to cover.<p>First, several classes or feats give bonuses to initiative, which no longer do anything. (Alert feat gives +5, Revised Ranger gives advantage, etc.) While initiative bonuses aren't <i>actually</i> very significant, and thus it would probably be okay to just drop them, players don't like losing abilities even if they're minimal, and it's still a cool differentiator for a "fast" character.<p>As such, any ability that grants a "significant" initiative bonus (+2 or higher, more or less, but use your best judgement) is reinterpreted to let you get the slow-round bonus (adv or dis on one roll during your turn) during a fast round <i>once per long rest</i>. 
If you have multiple sources of bonuses, they stack to give you multiple uses of this ability.<p>The Bard's Jack of All Trades and the Champion's Remarkable Athlete don't count; their bonuses only range from +1 to +3 and aren't really "significant", plus most people don't realize they apply to Initiative in the first place (it's a Dex check!), so whatever.<p>There are some details to work out for spells that last X rounds (particularly those that are "one round") that I'm not sure about.

<h1>We Should Be Using Base 6 Instead</h1>
<p>Published 2018-12-18T00:02:22+00:00; updated 2019-01-02T03:20:42+00:00. <a href="http://www.xanthir.com/b4y30">http://www.xanthir.com/b4y30</a></p>

<p>Occasionally you might come across someone who believes that it would be better for us to count in a base other than 10. Usually people recommend base-12 ("dozenal"); compsci people sometimes recommend base-2 (binary) or base-16 (hexadecimal). My personal opinion is that all of these have significant downsides, not worth trading out base-10 for, but that there <i>is</i> a substantially better base we should be using: base 6.<p>Let's explore why.<p>(Warning, this post is long and definitely not edited well enough. Strap in.)<h2>Bases Are Arbitrary</h2><p>First of all, there's nothing special about base-10. Powers of 10 look nice and round to us <i>because</i> we use base-10, but we can use any other base and get just as round numbers. Base 6 has 10<sub>6</sub>, 100<sub>6</sub>, etc. (Those are 6<sub>10</sub>, 36<sub>10</sub>, etc; on the other hand, 10<sub>10</sub> and 100<sub>10</sub> are 14<sub>6</sub> and 244<sub>6</sub>. Converting between bases will usually produce awkward numbers no matter which base you start with.)<p>Why do we use base-10, then? The obvious answer is that we have 10 fingers.
Counting off each finger gives us one "unit" of 10 things, and that unit-size carried over until we invented positional notation, where it froze into the base-10 we know today.<p>If we'd invented positional notation earlier, tho, our hands could have supplied a better base - each hand can count off the values 0, 1, 2, 3, 4, and 5, which are exactly the digits of base-6. Two hands, then, let you track two base-6 digits, counting up to 55<sub>6</sub>, which is 35<sub>10</sub>!<h2>Bases Are Significant</h2><p>On the other hand, there <i>are</i> important qualities that <i>do</i> differ between bases.<p>The most obvious is the tradeoff of length vs mathematical complexity. Binary has <i>trivial</i> math - the addition and multiplication tables have only four entries each! - but it produces very long numbers - 100<sub>10</sub> is 1100100<sub>2</sub>, 7 digits long! On the other hand, using something like, say, base-60 would produce pretty short numbers - 1,000,000<sub>10</sub> is only four digits long in base-60 ([4, 37, 46, 40]), but its multiplication table has <b>3600 entries</b> in it.<p>When evaluating the tradeoffs of long representations vs complex mental math, it's important to understand a little bit about how the brain actually works for math. In particular, we have a certain level of inherent ability in various domains - short-term memory, computation, etc. Overshooting that ability level is bad - it makes us slower to do mental math, and might require us to drop down to tool usage instead (writing the problem out on paper).
But <i>undershooting</i> it is just as bad - our brain can't arbitrarily reallocate "processor cycles" like a computer can, so when we undershoot we're just wasting some of our brain's ability (and, due to the tradeoffs, forcing something else to get <i>harder</i>).<p>So, we know from experience that binary is bad on these tradeoffs - base-2 arithmetic is drastically undershooting our arithmetic abilities, while binary length quickly exceeds our short-term memory. Similarly, we know that base-60 (used by the Babylonians, way back when) is bad - it drastically overshoots our arithmetic abilities while not significantly reducing the length of numbers, at least in the domain of values we care about (in other words, less than a thousand or so). So there's a happy medium somewhere in the middle here, and conveniently the geometric mean of 2 and 60 is base-11. Give it a healthy ±5 range, and we'll estimate that the "ideal" base is probably somewhere between base-6 and base-16.<p>But arithmetic complexity is actually more subtle than that. The addition tables, while technically scaling in size with the square of the base, scale in <i>difficulty</i> roughly linearly, since each row or column is just the digits in order, but starting from a different offset. It takes some memorization to recall how each offset works, but fundamentally the difficulty scales up slowly and simply, and you can do simple mental tricks to make addition easier anyway. (Such as adding 8+7, and adding/subtracting 2 from each to make it 10+5, a much simpler addition problem.)<p>Multiplication is more complex, tho. Some rows are easier to remember and use, others are more difficult: <ul><li>"easy" rows are either trivial (0 and 1) or are factors of the base (2 and 5 for base-10), so they only cycle thru a subset of the digits in ascending order - less to memorize! 
Easy rows end up being pretty trivial to do mental math with; you can really easily multiply or divide in your head by these numbers.<li>"medium" rows are either smallish numbers that share all their factors with the base but aren't divisors (like 4 in base-10) because they also use only a subset of the digits but cycle thru them in a more complicated manner; or are the last row (9 in base-10) because of the nice pattern that makes its complexity easier to handle; or are just small numbers in general (like 3 in base-10), because even tho they cycle thru all the digits they do so in ascending series that are easier to memorize. Medium rows tend to be harder in mental math; you often need to resort to paper-and-pencil, but they're at least easy to do at that point.<li>"hard" rows are the rest - larger numbers that have some (or all) of their factors different from the base (6, 7, and 8 in base-10), so they cycle thru all the digits in a complicated manner, or just thru a subset in a complex manner + you have to track the 10s digit more carefully. Hard rows are just plain hard to compute with, even when you pull out paper-and-pencil. Rows that are coprime to the base, like 7 in base-10, are <i>maximally</i> difficult.</ul><p>So multiplication difficulty varies in a complicated manner between bases, and doesn't scale monotonically. Base-60, for example, while looking tremendously bad for arithmetic at a naive glance, has significant mitigating factors here - because 60 factors into 2*2*3*5, a <i>lot</i> of the rows in the multiplication table are "easy", particularly among the more "useful" small numbers. (It also has a lot of maximally-hard numbers, of course - all the prime rows 7 or higher except for 59, and 49 - and even more merely "hard" rows.)
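The row-cycling behavior described above is easy to check mechanically. Here's a quick Python sketch (the helper name is mine, not anything standard): the pattern of last digits of d×k in a given base is exactly what you'd have to memorize for that row, and a row hits every digit precisely when it's coprime to the base.

```python
from math import gcd

def mult_row(d, base):
    """Last digit of d*k in the given base, for k = 0..base-1.
    This is the cycling pattern that makes a row easy or hard."""
    return [(d * k) % base for k in range(base)]

# "Easy" row: 2 divides 10, so it only cycles thru the even digits, in order.
print(mult_row(2, 10))  # [0, 2, 4, 6, 8, 0, 2, 4, 6, 8]

# "Maximally hard" row: 7 is coprime to 10 (gcd(7, 10) == 1),
# so it hits every digit, in scrambled order.
print(mult_row(7, 10))  # [0, 7, 4, 1, 8, 5, 2, 9, 6, 3]

# In base-6, the divisor rows 2 and 3 give trivially short repeating patterns.
print(mult_row(2, 6))   # [0, 2, 4, 0, 2, 4]
print(mult_row(3, 6))   # [0, 3, 0, 3, 0, 3]
```

In general each row cycles thru base/gcd(d, base) distinct digits, which is why divisors of the base make for such easy rows.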
We'll examine this in more detail in a bit.<p>Divisibility difficulty is very similar:<ul><li>"easy" divisibility are the trivial values (1 and 10 in base-10), and values whose factors are a subset of the base's (2 and 5 in base-10, as 10 factors into 2*5). You only have to look at the final digit of a number to tell if it's evenly divisible by one of these values, and memorize which values correspond to divisibility and which don't. (0-2-4-6-8 for 2, 0-5 for 5.)<li>"medium" divisibility are the values that factor into the same <i>primes</i> as the base, but which use at most one more of a given prime than the base does. That is, since 10 factors into 2*5, 4 (2*2), 25 (5*5), 20 (2*2*5), and 50 (2*5*5) all have either two 2s or two 5s. These only require you to look at the last <i>two</i> digits of a number. (While 100 is 2*2*5*5, it's also just a power of the base, which clicks it over into "trivial" territory again.) Also "medium" is, again, the last value less than the base (9 in base-10) because you can always tell divisibility by just adding up the digits and seeing if <i>that</i> value is divisible by your last row value. (Yes, this works in any base - you can tell if a hexadecimal value is divisible by F (15) by adding together the digits and seeing if the result is still divisible by F.) If this final row value is composite, any numbers whose factors are a subset are also medium, because the same trick applies: 9 is 3*3, so in base-10, you can indeed test for 3-divisibility by adding the digits and seeing if the result is divisible by 3.<li>"hard" divisibility is all the rest. The rows either exceed the base's factor usage by two or more (like 8 in base-10), and thus require looking at the last three or more digits, or they use a factor that's not in the base at all (like 6 in base-10) and so require you to look at the whole number in a more complicated way.
And again, rows which are fully coprime to the base (like 7 in base-10) are maximally hard, with no easy tricks or bounded recognition possible; you just have to do the division and see if there's any remainder.</ul><h2>So What's Actually Best?</h2><p>So, based purely on a naive length-vs-arithmetic-difficulty analysis, we've already concluded that the "ideal" base is likely between base-6 (heximal) and base-16 (hexadecimal). Now let's narrow that list down based on the more complex factors, above!<p>First off, we can cross off any odd base right off the bat. They lack easy mult/div by 2 (it becomes "medium" difficulty instead), which is a supremely important number to multiply and divide by. I don't think any other qualities could possibly make up for this loss even in theory, but as it turns out none of the odd numbers in that range are particularly useful anyway, so there's not even a question. They're gone.<p>So we're left with 6, 8, 10, 12, 14, and 16. Let's scratch off another easy one: 14 sucks. Its factors are 2 and 7, and 7 is the least useful small number. 14 has bad mult/div with all the other small numbers above 2. So it's gone too.<p>8 and 16 we can cover together, because they're both powers of 2. This makes them easy to use in computing, as you can just group binary digits together to form octal or hex digits, but it limits their usefulness in mental arithmetic - since 2 is their <i>only</i> factor, you don't get as many useful combinations of values to make mult/div easier. Plus, the "trick" that makes mult/div easier with the largest digit value is, in these cases, applying to 7 and 15, which are again not particularly useful values. So, while these have some mitigating factors with computing, they're not really contenders. Gone.<p>So we're down to 6, 10, and 12. I'll break these down more specifically, because they're all starting to get useful and we need more details.<p>Base-10 has 10 rows in its multiplication table. 
0, 1, 2, and 5 are all "easy" - the patterns are trivial or at least very simple. 3, 4, and 9 are "medium" - the patterns are more complex, but not <i>too</i> hard to memorize and use intuitively. But 6, 7, and 8 are all "hard" - the patterns are hard to use, and the tens digit varies enough that it's an additional burden to memorization. (And 7 is "maximally hard".) So 40% easy, 30% medium, 30% hard.<p>Base-6 has 6 rows. 0, 1, 2, and 3 are all "easy", because 0 and 1 are trivial, and 2 and 3 divide 6 and thus are simple repeating patterns (2-4-0, 3-0). 4 and 5 are "medium"; 4 for the same reason as base-10, but moreso (pattern is just 4-2-0, or 4-12-20, a simple counting-down-by-evens pattern), and 5 is the last digit so has the same quality as 9 does in base-10 (5-4-3-2-1-0, or 5-14-23-32-41-50). It's got 66% easy, 33% medium, and no hard rows at all! On top of this, the whole times table is 1/3 the size, at only 36 entries vs 100; if you throw away the truly trivial x0 and x1 rows and columns, then it's a mere 1/4 the size, with 16 vs 64 entries! That's small enough to be simply memorizable regardless of patterns.<p>Now base-12, with 12 rows. 0, 1, 2, 3, 4, and 6 are all "easy", because 12 has lots of useful factors. 8, 9, and 11 are "medium", but 5, 7, and 10 are "hard" (and 5 and 7 are both "maximal"). This is a better distribution than base-10 (50% easy, 25% medium, 25% hard), but it's larger <i>in general</i> (12x12, so 144 entries vs 100) which makes it harder to memorize, so it's probably roughly equivalent to base-10 overall. That said, easy multiplication/division by 3, 4, and 6 is probably worth more in the real world than mult/div by 5, so I'm sympathetic to the claims of base-12 lovers.<p>And just to finish this out, let's examine base-16, which is real bad because its factors are less useful. 0, 1, 2, 4, and 8 are easy, 3, C, and F are medium, but 5, 6, 7, 9, A, B, D, and E are all hard, for a 31% easy, 19% medium, 50% hard distribution. 
That's not only substantially worse than base-10, it's also <i>so much bigger</i> (256 entries) that its overall difficulty is also higher. (And to make it worse, <i>more than half</i> of the hard rows (5, 7, 9, B, and D) are maximally hard, as they're coprime to 16! That's so much worse!) Base-16 is useful as a more convenient way to read/write binary, but it's horrible as an actual base to do arithmetic in.<h2>Length of Numbers, and Digit "Breakpoints"</h2><p>As mentioned earlier, binary is a bad base for humans, because it produces very long representations. Humans have a "difficulty floor" for dealing with individual digits, so having a long number full of very-simple digits doesn't actually trade off properly; you still end up paying a larger "complexity price" per digit, times a long representation, for a very complex result.<p>In base-10, numbers up to a hundred use 2 digits, and numbers up to a thousand use 3 digits. Base-6 is fairly close to this: 2 digits gets you to 36, 3 to 216, and 4 to 1296. Since we don't generally work with numbers larger than 1000 in base-10 (after that we switch to grouping into thousands/millions/etc, so we're still working with 3 or less digits), you get the same range from base-6 by using, at most, 4 digits. That's only gaining one digit; combine that with the vastly simpler mental math, and you're <i>at worst</i> hitting an equal complexity budget to base-10.<p>But there's more. You see, the 100/1000 breakpoints aren't chosen because they're particularly <i>useful</i>, they're just where base-10 happens to graduate up to the next digit. We use higher-level groupings rather than 10000 (in many languages, at least; traditional Chinese numbering groups by 10000) because 10000 is simply too large to usefully deal with. That is, we <i>just can't think about 10000 things</i> very well.<p>But we can't really think about things up to 1000 well, either. 
Even 100 is a pretty big chunk of stuff, larger than we traditionally like working with. Left to our own devices, we seem to like things maxing out at approximately 30-ish - that's how many days are in a month, and how many students are traditionally in a large class (at least in America...). Guess what's approximately 30-ish? That's right, the 2-digit breakpoint for base-6, 36!<p>The 3-digit breakpoint for base-6 is 216, which is also a pretty reasonable number. It's about twice as large as 100, so any time 100 would be reasonable, 216 is probably also reasonable.<p>So, altho you need four base-6 digits to reach 1000<sub>10</sub>, I don't think that's particularly a useful goal to hit. 1000<sub>6</sub> being 216<sub>10</sub> is sufficiently useful that it's worth still batching our numbers into 3-digit groups, like today. <a href="https://en.wikipedia.org/wiki/Benford%27s_law" title="">Benford's Law</a> tells us that, even tho 216 is only 20% of 1000, it will generally cover a <i>far higher</i> percentage of numbers in actual usage; in other words, most of the time we'll write our number with three or less major digits anyway, and won't even miss the lost range!<p>As an added bonus, dividing things into groups of 3 is actually <i>natural</i> in base-6, unlike in base-10!<h2>In Conclusion</h2><p>So, base 6 has more useful divisors, making it easy to divide by many small numbers. It's got a smaller (and thus easier to memorize/use) addition table, and a multiplication table that's not only substantially smaller than base-10, but substantially <i>easier</i> in very significant ways, making mental arithmetic much simpler. We can cover a similar range of numbers with just three digits, so it even looks similar to base-10 when the numbers get large enough to need scientific notation.<p>If you ever find a time machine, let me know so I can fix this.
^_^

<h1>Strings Shouldn't Be Iterable By Default</h1>
<p>Published 2018-09-04T18:01:48+00:00. <a href="http://www.xanthir.com/b4wJ1">http://www.xanthir.com/b4wJ1</a></p>

<p>Most programming languages I use, particularly those that are more "dynamic", have made the same, annoying mistake, which has a pretty high chance of causing bugs for very little benefit: they all make strings iterable by default.<p>By that I mean that you can use strings as the sequence value in a loop, like <code>for(let x of someString){...}</code>. This is a Mistake, for several reasons, and I don't think there's any excuse to perpetuate it in future languages, as <i>even in the cases where you intend to loop over a string</i>, this behavior is incorrect.<h2>Strings are Rarely Collections</h2><p>The first problem with strings being iterable by default is that, in your program's semantics, strings are rarely actually collections. Something being a collection means that the important part of it is that it's a sequence of individual things, each of which is important to your program. An array of user data, for example, is semantically a collection of user data.<p>Your average string, however, is <i>not</i> a "collection of single characters" in your program's semantics. It's very rare for a program to actually want to interact with the individual characters of a string as significant entities; instead, it's almost always a singular item, like an integer or a normal object.<p>The consequence of this is that it's very easy to accidentally write buggy code that nonetheless runs, just incorrectly.
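To make that failure mode concrete, here's a tiny Python sketch (the function and its name are mine, purely for illustration):

```python
def tag_all(tags):
    """Intended to take a sequence of tag strings."""
    return {t: True for t in tags}

print(tag_all(["a", "b"]))  # {'a': True, 'b': True} - as intended

# Passing a lone number fails loudly, right at the mistake:
# tag_all(5)  -> TypeError: 'int' object is not iterable

# Passing a lone string "works", silently doing the wrong thing:
print(tag_all("ab"))  # {'a': True, 'b': True} - but you meant one tag, "ab"
```
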
For example, you might have a function that's intended to take a sequence as one of its arguments, which it'll loop over; if the user accidentally passes a single integer, the function will throw an error since integers aren't iterable, but if the user accidentally passes a single string, the function will successfully loop over the characters of the string, likely not doing what was expected.<p>For instance, this commonly happens to me when initializing sets in Python. <code>set()</code> is supposed to take a sequence, which it'll consume and add the elements of to itself. If I need to initialize it with a single string, it's easy to accidentally type <code>set(&quot;foo&quot;)</code>, which then initializes the set to contain the strings "f" and "o", definitely not what I intended! Had I incorrectly initialized it with a number, like <code>set(1)</code>, it would immediately throw an informative error telling me that <code>1</code> isn't iterable, rather than just waiting for a later part of my program to work incorrectly because the set doesn't contain what I expect.<p>As a result, you often have to write code that defensively tests if an input is a string before looping over it. There's not even a useful <i>affirmative</i> test for looping appropriate-ness; testing <code>isinstance(arg, collections.Sequence)</code> returns True for strings! This is, in almost all cases, the <i>only</i> sequence type that requires this sort of special handling; every single other object that implements Sequence is almost always <i>intended</i> to be treated as a sequence.<h2>There's No "Correct" Way to Iterate a String</h2><p>Another big issue is that there are <i>so many ways to divide up a string</i>, any of which might be correct in a given situation. You might want to divide it up by codepoints (like Python), grapheme clusters (like Swift), UTF-16 code units (like JS in some circumstances), UTF-8 bytes (Python bytestrings, if encoded in UTF-8), or more.
For each of these, you might want to have the string normalized into one of the Unicode Normalization Forms first, too.<p>None of these choices are broadly "correct". (Well, UTF-16 code units is almost always <i>incorrect</i>, but that's legacy JS for you.) Each has its benefits depending on your situation. None of them are appropriate to select as a "default" iteration method; the author of the code should really select the correct method for their particular usage. (Strings are actually super complicated! People should think about them more!)<h2>Infinite Descent Shouldn't Be Thrown Around Casually</h2><p>A further problem is that strings are the only built-in sequence type that is, by default, <i>infinitely recursively iterable</i>. By that I mean, strings are iterable, yielding individual characters. But these individual characters are actually still strings, just length-1 strings, which are still iterable, yielding themselves again.<p>This means that if you try to write code that processes a generic nested data structure by iterating over the values and recursing when it finds more iterable items (not uncommon when dealing with JSON), if you don't specially handle strings you'll infinite-loop on them (or blow your stack). Again, this isn't something you need to worry about for <i>any</i> other builtin sequence type, nor for virtually any custom sequence you write; strings are pretty singular in this regard.<p>(And an obvious "fix" for this is worse than the original problem: Common Lisp says that strings are composed of <i>characters</i>, a totally different type, which doesn't implement the same methods and has to be handled specially. It's really annoying.)<h2>The Solution</h2><p>The fix for all this is easy: just make strings non-iterable by default. Instead, give them several methods that return iterators over them, like <code>.codepoints()</code> or what-have-you. 
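A minimal sketch of what this could look like, in Python (the class and its method names are my invention, not any real API):

```python
class Str:
    """A string wrapper that refuses default iteration and instead
    offers explicit iterator methods, as proposed above."""
    def __init__(self, s):
        self._s = s
    def __iter__(self):
        raise TypeError(
            "Str is not iterable; use .codepoints(), .utf8_bytes(), etc.")
    def codepoints(self):
        return iter(self._s)  # Python's native string iteration is by codepoint
    def utf8_bytes(self):
        return iter(self._s.encode("utf-8"))

print(list(Str("héy").codepoints()))  # ['h', 'é', 'y']
# for c in Str("héy"): ...  -> TypeError, forcing an explicit choice
```

The point isn't the wrapper itself - it's that every loop over a string now names the unit it's iterating by.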
(Similar to <code>.keys()/.values()/.items()</code> on dicts in Python.)<p>This avoids whole classes of bugs, as described in the first and third sections. It also forces authors, in the rare cases they actually do want to loop over a string, to affirmatively decide on how they want to iterate it.<p>So, uh, if you're planning on making a new programming language, maybe consider this?

<h1>Ki-Users, or, the Warlock Multiclassing Rules That Are <i>Almost</i> Already Built Into the Game</h1>
<p>Published 2018-09-04T00:39:38+00:00; updated 2018-09-04T02:08:18+00:00. <a href="http://www.xanthir.com/b4wJ0">http://www.xanthir.com/b4wJ0</a></p>

<p>In earlier editions of D&amp;D, multiclassing between spellcasters was generally pretty terrible. Spell levels increased in power super-linearly, so losing access to high-level spells was much worse than gaining double the number of low-level spells.<p>5e made this substantially better - you add together your levels to determine the spell slots you have, so a Wizard10/Cleric10 still gets 9th level spell slots just like a Wizard20; the drawback is that neither class gives you <i>spells known</i> above what a level-10 caster in each class can know (5th level spells) - a lot of spells scale up in power if you use them in higher-level slots, so that 9th-level slot is still <i>useful</i> for a big attack, but it's not the equal of an actual 9th-level spell.<p>However, 5e also introduced a totally different spellcasting mechanic - Pact Magic - and then utterly failed to address multiclassing with it. A Warlock10/Wizard10 just... has 5th level slots. Two more than a Wiz10 would normally have, and those extra two refresh on a short rest, but still, this sucks.<p>Related to this, the Spellcasting multiclass rules also cover "half-casters" (like the Paladin or Ranger) and "third-casters" (like the Eldritch Knight or Arcane Trickster) - they add 1/2 or 1/3 their levels to a full-casting class's levels to figure out spell slots.
But again, Pact Magic has no obvious way to do "half-casters", which severely limits how homebrew can approach Warlock-ish stuff.<h2>But Here's The Thing</h2><p>The special thing about Pact Magic is that your spell slots regen on short rest, so you don't need too many of them. But you know who <i>else</i> kinda has spellcasting that regens on short rest? MONKS.<p>When you go look at monk "spellcasting", they burn ki points to do it, which regen on short rest. They learn up to 5th level spells, spread over twenty levels. They can spend extra ki to power up the spell, at the same time as they unlock higher-level spells. They're basically just spell-point Warlocks, all in all.<p>(The Elemental monk charges spell level + 1 in ki points, but that's pretty widely recognized as crappy. The Shadow monk charges straight spell level. Other monk subclasses with spell-casting stuff also either charge spell level, or do spell level +1 but get extra benefits, like the Sun Soul which can Burning Hands as a <i>bonus</i> action.)<p>If we were to convert the Warlock over to Ki points, at the spell level = ki cost rate, the Warlock would even roughly keep up with the Monk's ki pool total, maxing out at 20 (four 5th-level slots). The Warlock just gets additional power above 5th-level spells in the form of their Mystic Arcanum, single-use higher-level spells that recharge on long rest. We'll handle those in a bit.<p>Overall, the Warlock would retain roughly the same power as they have today - slightly higher versatility, as they could cast more low-level spells in an encounter, but often slightly less overall power. (RAW Warlock gets 3 5th-level slots at level 11, equivalent to 15 ki, while this Ki-lock would only have 11, gradually rising to 15 at 15th level. Similarly, RAW-lock gets a fourth slot at 17, while Ki-lock only has 17 points, finally matching at level 20.)
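That slot-to-ki comparison can be double-checked with a few lines of Python (slot counts follow the RAW 5e Warlock Pact Magic table; the helper itself is mine):

```python
def raw_ki_equivalent(level):
    """RAW Warlock Pact Magic, valued in ki: slots * slot level."""
    slots = 1 if level == 1 else 2 if level <= 10 else 3 if level <= 16 else 4
    slot_level = min(5, (min(level, 9) + 1) // 2)  # 1st at lvl 1, 5th at lvl 9+
    return slots * slot_level

for lvl in (11, 15, 17, 20):
    # RAW ki-equivalent vs the Ki-lock pool (which is just equal to level)
    print(lvl, raw_ki_equivalent(lvl), lvl)
```

This reproduces the numbers in the parenthetical above: 15 ki-equivalent vs 11 at level 11, even at 15, 20 vs 17 at level 17, and matching at 20.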
The big benefit is that the Warlock is no longer virtually <i>restricted</i> to scaling spells - instead, they can take non-scaling spells and actually get reasonable use out of them, since they'll just always be cast at their normal (low) cost, while the RAW-lock has to "waste" the additional power of their higher-level slots.<h2>So How's This Actually Work?</h2><p>Here are the plain details of ki-using:<p>Warlocks get a ki pool equal to their level, just like Monks. It refills on short rest. They can cast a spell that they know by spending ki points equal to its level (and can spend additional points to cast it at a higher level).<p>At 1st level they can only spend 1 point on a given spell. This increases to 2 at 3rd, 3 at 5th, 4 at 7th, and 5 at 9th. This also determines what level of spells they're allowed to learn, in the same fashion as other full casters.<p>At 11th level they get their first Overcharge: usable 1/long rest, this lets them cast a spell <i>for free</i>, as if they had spent <i>6</i> ki points on it. At 13th level they gain an additional overcharge, worth a free 7-point cast; at 15th, another overcharge worth 8; and at 17th, a final overcharge worth 9. (So, by the end they have four Overcharges, each usable 1/long rest: a 6-point, 7-point, 8-point, and 9-point.) Alternately, instead of getting a free cast, they can spend an overcharge to refill their ki pool by 2 fewer points than its value (the 6-point overcharge can be spent to refill 4 points of ki, the 7-point overcharge can refill 5 points of ki, etc).<p>As a class feature, warlocks still <i>learn</i> one 6th-level spell at 11th level, a 7th-level spell at 13th level, etc. These spells cannot be swapped out like their other spells known, which continue to be limited to a max of 5th level.<h2>Multiclassing Ki-users</h2><p>Monks are half-ki-users; they add 1/2 their level to the full levels of Warlock to determine their ki limits and overcharges, but still add their full level to determine their ki pool.
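<p>That per-level progression can be sketched in code (illustrative only - these function names are mine, not game terms):

```javascript
// Sketch of the Ki-lock mechanics described above. Names are my own.

// Max ki spendable on a single spell: 1 at 1st level, increasing by 1 at
// levels 3, 5, 7, and 9, capping at 5.
function maxKiPerSpell(level) {
  return Math.min(5, Math.floor((level + 1) / 2));
}

// Overcharges gained so far: a 6-point one at 11th, then a 7/8/9-point one
// at 13th/15th/17th. Each is usable once per long rest.
function overcharges(level) {
  const owned = [];
  for (let value = 6; value <= 9; value++) {
    if (level >= 11 + (value - 6) * 2) owned.push(value);
  }
  return owned;
}

// An overcharge can instead refill the ki pool by 2 fewer points than its value.
function overchargeRefill(value) {
  return value - 2;
}
```

<p>A 17th-level Ki-lock thus has a 17-point pool, a 5-ki-per-spell limit, and all four overcharges, which could alternately refill 4 + 5 + 6 + 7 = 22 ki over a long rest.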
The full ki-user multiclass spellcaster table is:<table>
<thead><tr><th>Ki-User Level<th>Benefit</thead>
<tr><td>1<td>1 ki/spell
<tr><td>2<td>1 ki/spell
<tr><td>3<td>2 ki/spell
<tr><td>4<td>2 ki/spell
<tr><td>5<td>3 ki/spell
<tr><td>6<td>3 ki/spell
<tr><td>7<td>4 ki/spell
<tr><td>8<td>4 ki/spell
<tr><td>9<td>5 ki/spell
<tr><td>10<td>5 ki/spell
<tr><td>11<td>5 ki/spell, 6 ki overcharge
<tr><td>12<td>5 ki/spell, 6 ki overcharge
<tr><td>13<td>5 ki/spell, 6 ki + 7 ki overcharges
<tr><td>14<td>5 ki/spell, 6 ki + 7 ki overcharges
<tr><td>15<td>5 ki/spell, 6 ki + 7 ki + 8 ki overcharges
<tr><td>16<td>5 ki/spell, 6 ki + 7 ki + 8 ki overcharges
<tr><td>17<td>5 ki/spell, 6 ki + 7 ki + 8 ki + 9 ki overcharges
<tr><td>18<td>5 ki/spell, 6 ki + 7 ki + 8 ki + 9 ki overcharges
<tr><td>19<td>5 ki/spell, 6 ki + 7 ki + 8 ki + 9 ki overcharges
<tr><td>20<td>5 ki/spell, 6 ki + 7 ki + 8 ki + 9 ki overcharges
<caption><i>Ki-user level is Warlock + ½ Monk levels. Ki pool is Warlock + Monk levels.</i>
</table><p>"Casting" Monk subclasses, like Way of the Four Elements, can use overcharges earned from multiclassing in a full-ki-user like normal; they can cast their known spells at a higher level, or recharge their ki pool. They do not learn any higher-level spells, however. Non-casting subclasses, like Way of the Open Hand, have no scaling-ki abilities, and so can only use overcharges to recharge their ki pool.<h2>Interactions with Normal Spellcasters</h2><p>First, multiclassing a ki-user and a spellcaster partially counts for both; your ki-user levels count ⅓ for the spellcasting multiclass table (or half that for Monks and other half-ki users), and your spellcasting levels count ⅓ for the ki-user multiclass table (or half or a third of that for lesser casters) and for the ki pool.<p>Second, ki points and spell slots can be spent fairly interchangeably. If you know a spell from a spellcasting class, you can cast it by spending ki equal to the level of the slot you would otherwise use (subject to your normal ki-spending limits) or by spending an appropriate overcharge to cast a spell at 6th level or higher; similarly, if you know a spell from a ki-using class, you can expend a spell slot of the appropriate level to cast it instead.<p>If a class ability would let you use a spell slot for any non-casting purpose (such as the Paladin's Smite, or the Sorcerer's metamagic pool recharging), you can spend ki equal to the desired slot's level (again, subject to your ki-spending limits, or spending an appropriate overcharge for higher-level slots); similarly, if you have an ability that costs ki, you can instead expend a spell slot of a level equal to or higher than the ki cost.<h2>Interactions That I Think Are Fine</h2><p>Ki-locks mostly function like normal warlocks, but their interactions with two other spellcasting classes do change a little.<p>The Paladin/Warlock combo relies on quickly-recharging warlock slots to power more frequent Smites.
The only change in using the Ki-lock is that the Paladin can do more lower-level smites; a Pal3/War17, for example, would have 17 ki points, potentially powering seventeen +2d8 smites per short rest, versus the RAW-lock, which gets four +5d8 smites per short rest. The Ki-lock can also burn all their overcharges to recharge an extra 22 ki points per long rest, for more smites, while the RAW-lock is limited to using their Mystic Arcanum for their original spellcasting purpose.<p>So, theoretically this just means that a Paladin could be adding +2d8 to nearly every attack over a short rest. That's useful, sure, but it means they're <i>not</i> opening combat with a powerful +5d8 smite and likely taking an enemy out right away. The raw numbers look bigger, but you really have to take the action economy into account when evaluating this sort of thing. The weaker, more frequent smites probably roughly balance out with the smaller number of more powerful smites that the RAW-lock is restricted to.<p>(That said, the Ki-lock still <i>can</i> open combat with a big smite, then use small smites later in combat, which is probably a best-of-both-worlds thing. Impact unclear; it's probably still usually better from an action-economy perspective to do larger smites less frequently.)<p>The other interaction is with the Sorcerer; the "Coffee-lock" can unweave their Warlock slots into metamagic points repeatedly over multiple short rests, and re-weave them into Sorcerer slots that last until a long rest. This interaction is mostly just degenerate rules-abuse that isn't worth explicitly disallowing in the rules, in favor of just house-banning such nonsense, but the Ki-lock doesn't actually make it any more powerful.
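<p>The Coffee-lock bookkeeping works out like this (a sketch using the Sorcerer's slot-creation costs from the PHB's "Creating Spell Slots" table; the helper names are mine):

```javascript
// Coffee-lock arithmetic sketch. Slot-creation costs are the Sorcerer's
// "Creating Spell Slots" table (PHB); helper names are my own.
const slotCost = { 1: 2, 2: 3, 3: 5, 4: 6, 5: 7 };

// Sorcery points gained by unweaving slots: 1 point per slot level.
function pointsFromSlots(slotLevels) {
  return slotLevels.reduce((sum, lvl) => sum + lvl, 0);
}

// A 10/10 Ki-lock/Sorcerer mix: ki pool is 10 warlock levels plus 1/3 of
// the 10 sorcerer levels, all spendable as points.
const kiLockPoints = 10 + Math.floor(10 / 3); // 13

// A RAW-lock's two 5th-level pact slots unweave to 10 points.
const rawLockPoints = pointsFromSlots([5, 5]); // 10
```

<p>13 points buys back a 5th-level slot (7) plus a 4th (6); the RAW-lock's 10 buys a 5th (7) plus a 2nd (3), matching the comparison below.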
A 10/10 mix can produce 13 metamagic points out of ki every short rest, producing a 5th-level slot and a 4th-level slot; a RAW-lock can only produce 10 (for a 5th- and a 2nd-level slot), but ➀ a RAW-lock can produce <i>15</i> points per short rest at 11th level - they're just right at the cusp of a big power-gain - and ➁ Pact Magic/Spellcasting multiclassing is <i>absolute shit</i> in the RAW rules; if you use the "each counts ⅓ to the other" multiclassing rules I list up above with the RAW-lock, you immediately get the 15 points per short rest. (And I recommend doing so; the ⅓ rule actually works really well overall.)<p>So overall, the multiclass interactions seem to be well-handled and nice.<h1><a href="http://www.xanthir.com/b4vn0">New Syntax for JS "Function Stuff"</a></h1><p><i>2018-08-04</i><p>For the last little while, various people in TC39 have been developing several different proposed additions to JS, all trying to make various sorts of "function manipulation" easier and more convenient to work with.<p>At this point it's clear that TC39 isn't interested in accepting <i>all</i> of the proposals, and would ideally like to find a <i>single</i> proposal to accept and reject the rest. This post is an attempt to holistically lay out the problem space, see what problems the various proposals address well, and find the minimal set of syntax proposals that will address all the problems (or at least, help other people decide which problems they feel are worth fixing, and determine which syntaxes cover those problems).<p>(Note, this post is subject to heavy addition/revision as I learn more stuff.
In particular, the conclusion at the end is subject to revision as we add more problems or proposals, or decide that some of the problems aren't worth solving.)<h2>The Problems</h2><p>As far as I can tell, these are the problems that have been brought up so far:<ol><li><p><code>.call</code> is annoying <p>If you want to rip a method off of one object and use it on an arbitrary other object as if it were a method of the second object, right now you have to either actually assign the method to the second object and call it as a method (<code>obj.meth = meth; obj.meth(arg1, arg2);</code>), or use the extremely awkward <code>.call</code> operation (<code>meth.call(obj, arg1, arg2)</code>).<p>This sort of thing is useful for generic protocols; for example, most Array methods will work on <i>any</i> object with indexed properties and a length property. We'd also like to, for example, create methods usable on arbitrary iterables, without forcing authors into a totally different calling pattern from how they'd work on arrays (<code>map(iter, fn)</code> vs <code>arr.map(fn)</code>).<p>Relatedly, method-chaining is a common API shape, where you start from some object and then repeatedly call mutating methods on it (or pure methods that return new instances), like <code>foo.bar().baz()</code>. This API shape can't easily be done without the functions actually being properties of the object, and the syntax variants are bad/confusing to write (<code>baz(bar(foo))</code>, for example).<li><p><code>.bind</code> is annoying<p>If you want to store a reference to an object's method (or just use it inline, like <code>arr.map(obj.meth)</code>), you can't do the obvious <code>let foo = obj.meth;</code>, because it loses its <code>this</code> reference and won't work right. 
You instead have to write <code>let foo = obj.meth.bind(obj);</code> which is super annoying (and impossible if <code>obj</code> is actually an expression returning an object...), or write <code>let foo = (...args) =&gt; obj.meth(...args);</code>, which is less annoying but more verbose than we'd prefer.<li><p>Heavily-nested calls are annoying. <p>Particularly when writing good functional code (but fairly present in any decently-written JS imo), a lot of variable transformations are just passing a value thru multiple functions. There are only two ways to do this, both of which kinda suck.<p>The first is to nest the calls: <code>foo(bar(baz(value)))</code>. This is bad because it hides a <i>lot</i> of detail in minute structural bits, particularly if some of the functions take more than one argument. You end up having to do some non-trivial parsing yourself while reading it, to match up parens appropriately, and it's not uncommon to mess this up while writing or editing the code, putting too many or too few close-parens in some spots, or putting an arg-list comma in the wrong spot. You can make this a little better with heavy line-breaking and indentation, but then there's still a frustrating rightward march in your code, it's still hard to edit, and multi-arg functions are still hard to read (and really easy to forget the arg-list commas for!), because the additional arguments might be a good bit further down the page, by which point you've already lost your train of thought following the nesting of the first argument.<p>The second way to handle this is to unroll the expression into a number of variable assignments, with the innermost part coming first and gradually building up into your answer. This does make reading and writing much less error-prone, but lots of small temporary variables come with their own problems. 
You now have to come up with <i>names</i> for these silly little single-use variables, and it's not immediately clear that they're single-use and can be ignored as soon as they get used in the next line. (And unless you create a dummy block, the variable names <i>are</i> in scope for the rest of the block, allowing for accidental reference.) <i>Some</i> of the temporary variables might have a meaningful concept behind them and be worthy of a name, but many are likely just semantically a "partially-processed value" and thus not worthy of anything more meaningful than <code>temp1</code>/<code>temp2</code>/etc.<p>Further, this changes the shape of the code - what was once an expression that could be dropped inline anywhere is now a series of statements, which is much more limited in placement. For example, this expression might have been in the head of an <code>if</code> statement, and now has to be moved out to before it; this prevents you from doing easy <code>else if</code> chains.<li><p>Partially-applying a function is annoying.<p>If you want to take an existing function and fill in <i>some</i> of its arguments, but leave it as a function with the rest to be filled in later, right now you have to write something like <code>let partialFoo = (arg1, arg3) =&gt; foo(arg1, value, arg3);</code>. This is more verbose and annoying than ideal, especially since this sort of "partial application" is very common in functional programming (for example, filling in all but one of a function's arguments, then passing it to <code>.map()</code>).<p>In particular, the problem here is that the <i>important</i> part of the expression is the arguments you're filling in, but the way you write it instead requires naming all the parts you're <i>not</i> filling in, then referencing those names <i>a second time</i> in the actual call, obscuring the values you're actually pre-filling.
This is also especially awkward in JS if your function takes an option-bag argument and you're trying to fill in some of those arguments, but let the later caller fill in the rest; you have to do some shenanigans with <code>Object.assign</code> to make it work.<li><p>Supporting functor &amp; friends is annoying<p>"Functor", "Applicative", "Monad", and others are ridiculous names, but represent surprisingly robust and useful abstractions that FPers have been using for years, capturing very common code patterns into reusable methods. The core operation between them is some variant of "mapping" a function over the values contained inside the object; the <i>problem</i> is that in JS, this is always done with an inverted fn/val relationship vs calling: rather than <code>fn(val)</code>, you always have to write <code>val.map(fn)</code> or some variant thereof.<p>JS <i>does</i> specially recognize one functor, the Promise functor, with special syntax allowing you to treat it more "normally"; you can call <code>fn(await val)</code> rather than having to write <code>val.then(fn)</code>. Languages like Python also have some specialized syntax for the Array functor in the form of list comprehensions, letting you write a normal function call. But in heavily-FP languages, there's generally a <i>generic</i> construct for dealing with functors in this way, such as the "do-notation" of Haskell, which both makes it easier to work with such constructs, and makes it easier to recognize and reason about them, rather than having to untangle the specialized and ad-hoc interactions JS has to deal with today.</ol><h2>The Possible Solutions</h2><p>There are a bunch! I'll list them in no particular order:<ol><li><a href="https://github.com/tc39/proposal-pipeline-operator/wiki#proposal0-original-minimal-proposal" title="">&quot;F#&quot; pipeline operator</a>, spelled <code>|&gt;</code>. Takes a value on the LHS and a function on the RHS, calls the function on the value.
So <code>&quot;foo&quot; |&gt; capitalize</code> yields <code>&quot;FOO&quot;</code>. You can chain this to continue piping the result to more functions, like <code>val |&gt; fn1 |&gt; fn2</code>.<li><a href="https://github.com/js-choi/proposal-smart-pipelines" title="">&quot;Smart mix&quot; pipeline operator</a>, also spelled <code>|&gt;</code>. Takes a value on the LHS, and an expression on the RHS: if the expression is of a particularly simple "bare form", like <code>val |&gt; foo.bar</code>, it treats it like a function call, desugaring to <code>foo.bar(val)</code>; otherwise the RHS is just a normal expression, but must have a <code>#</code> somewhere indicating where the value is to be "piped in", like <code>val |&gt; foo.bar(#+2)</code>, which desugars to <code>foo.bar(val+2)</code>.</ol><p> Smart-mix also has the closely-related pipeline-function prefix operator <code>+&gt;</code>, where <code>+&gt; foo.bar(#+2)</code> is a shorthand for <code>x =&gt; x |&gt; foo.bar(#+2)</code>, with some niceties handling some common situations.<ol start="3"><li><p><a href="https://github.com/tc39/proposal-bind-operator" title="">Call operator</a>, spelled <code>::</code>. Takes an object on the LHS and a function-invocation on the RHS, calls the function as a method of the object. That is, given <code>foo::bar()</code>, this ends up calling <code>bar.call(foo)</code>. The point of this is that it <i>looks like</i> just calling <code>foo.bar()</code>, but it doesn't require that the <code>bar</code> method actually live on the <code>foo</code> object.<p>Can also be used as a prefix operator, called the "bind" operator. Takes a method-extraction on the RHS, and returns that method with its <code>this</code> appropriately bound. That is, given <code>::foo.bar</code>, this ends up calling <code>foo.bar.bind(foo)</code>.<li><p>Partial-function syntax, spelled <code>func(1, ?, 3)</code>.
Implicitly defines a function that takes arguments equal to the number of <code>?</code> glyphs, and subs them into the expression in order when called.<li><p>Others?</ol><h2>Which Solutions Solve Which Problems?</h2><ul><li>The F# pipeline operator solves problem 3 partially. (You can unnest plain, unary function calls easily. <i>Anything else</i> requires arrow functions, or using functional tools that can manipulate functions into other functions.)</ul><p> Paired with partial-functions it solves more cases easily, but not all. You can write <code>val |&gt; foo(?, 2)</code> to pipe into n-ary functions, but still can't handle <code>await</code>, operator expressions, etc. It can technically do <code>val |&gt; foo.call(?, ...)</code> as the equivalent to smart mix's <code>val |&gt; #.foo(...)</code> or the call operator's <code>val::foo(...)</code>, but it's kinda awkward.<ul><li>The "smart mix" pipeline operator solves problem 3 more completely. (With topic-form syntax you can trivially unnest anything. Bare-form syntax lets you do some common "tower of unary functions" stuff with a few fewer characters, same as "F#" style.)<li>The "smart mix" pipeline-function operator solves problems 2 and 4 well. (With bare-form syntax, <code>+&gt;foo.bar</code> creates a function that calls <code>foo.bar(...)</code>, solving the bind problem in two characters. With topic-form syntax, <code>+&gt;foo(#, 2, ##)</code> fills in the second argument of <code>foo()</code> and creates a function that'll accept the rest. Option-bag merging is still difficult/annoying.)<li>The call operator solves problem 1 well. If you write the ecosystem well, it also solves problem 5 okay. (For example, write a generic <code>map</code> function that takes the object as <code>this</code> and a function as argument, and calls <code>this[Symbol.for(&quot;fmap&quot;)](fn)</code>.
Then if the functor object defines an "fmap" operation, you can write <code>obj::map(fn1)::map(fn2)</code>, similar to Haskell's <code>obj &gt;&gt;= fn1 &gt;&gt;= fn2</code> syntax.)<li>The bind operator solves problem 2 well.<li>The partial-function operator solves problem 4 okay, but with some issues. (It's unclear what the scope of the function is - in <code>let result = foo(bar(), baz(?))</code>, is that equivalent to <code>let result = foo(bar(), x =&gt; baz(x));</code>, or <code>let result = x =&gt; foo(bar(), baz(x));</code>? Related to that, is <code>foo(?, bar(?))</code> two nested partial functions, or a single partial function taking two arguments? Can you write a partial function that only uses some of the passed-in arguments, or uses them in a different order than they are passed in?)</ul><p>So, inverting this list:<ol><li>The call problem is well-solved by the call operator only.<li>The bind problem is well-solved by the bind operator, and the bare-syntax pipeline-function operator. (They differ on whether the method is extracted/bound immediately (bind operator) or at time of use (pipeline-function operator).)<li>The nesting problem is somewhat solved by the "F#" pipeline operator, and better solved by the "smart mix" pipeline operator.<li>The partial-function problem is somewhat solved by the partial-function operator, and better solved by the topic-syntax pipeline-function operator.<li>The functor problem is somewhat solved by the call operator, but not super well.</ol><p>So, if you think all the problems deserve to be solved, currently the minimal set that does everything pretty well is: the call operator, the "smart mix" pipeline, and the pipeline function.
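<p>For reference, since none of these operators exist in shipping JS yet, here's what their desugarings look like in current syntax (a sketch with my own helper names; each proposal is essentially sugar for one of these patterns):

```javascript
// Current-JS equivalents of the proposed syntaxes. Helper names are mine.

// "F#" pipeline: `val |> fn1 |> fn2` is left-to-right function application.
const pipe = (val, ...fns) => fns.reduce((acc, fn) => fn(acc), val);

// Call operator: `obj::fn(...args)` desugars to `fn.call(obj, ...args)`.
const callOp = (obj, fn, ...args) => fn.call(obj, ...args);

// Bind operator: `::obj.meth` desugars to `obj.meth.bind(obj)`.
const bindOp = (obj, name) => obj[name].bind(obj);

// Partial-function syntax: `foo(1, ?, 3)` desugars to an arrow function.
const partial1 = (foo) => (x) => foo(1, x, 3);

// Example: "foo" |> capitalize
const capitalize = (s) => s.toUpperCase();
pipe("foo", capitalize); // "FOO"

// Example: generic method use on an array-like, as the call operator allows.
callOp({ 0: "a", 1: "b", length: 2 }, Array.prototype.join, "-"); // "a-b"
```

<p>Each helper needs the extra call-site noise (a helper name, the object repeated, etc) that the corresponding operator would eliminate, which is exactly the ergonomic gap the proposals target.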