
At OSCON this past year, I was just a little surprised by the still-shrinking
Perl track. What really surprised me, though, was the entirely absent Ruby
track. I tried to figure out what it meant, and whether it meant anything, but
I didn't come to any conclusions. Even if I'd more carefully collected actual
data, I'm not sure I could've drawn any really useful conclusions.

Instead, I came to a flimsier, wobblier conclusion: the Perl track could have
more, better talks that would appeal to more people, including people from
outside of Perl. I spoke to some OSCON regulars about this and nobody told me
that I was deluded. When I got home, I asked a few people whether they'd ever
considered coming to give a talk at OSCON. I got a few replies something like
this:

I hadn't really, but maybe I should. What would I talk about, though?
Talking about stuff I do in Perl wouldn't make sense, because OSCON isn't a
Perl conference.

OSCON is an interesting conference. It's ecumenical — or it could and
should be. In practice, though, it can be a bit cliquish. I was
disappointed when I first saw lunch tables marked as the "Python table" or
"JavaScript table." I was told (and believe) that people asked for this sort
of thing as a way to find people with the same interests, but I think that one
of the most interesting things about OSCON is the ability to talk shop with
people whose shop is quite unlike your own. It leads to interesting
discoveries.

This only works, though, if you really talk about what you really do. If I
said, "Well, I filter and route email with a lot of Perl and a little C,"
nobody's going to learn anything interesting from me. On the other hand, I
could tack on a few more sentences about the specific problems we encounter and
how we get past them. "High performance, highly configurable email filtering
is stymied by the specific 'commit' phases of SMTP, so we've had to spend a
little time figuring out how to do as much rejecting as early as possible, but
everything else as late as possible." Once you're talking about specific
problems, people can relate, even if they don't know much about the domain.

Hearing about interesting solutions to problems can often help me think about
new possible solutions to my own problems, so what I like is to hear people
talk about their specific solutions to specific problems. I seek these talks
out. I've basically given up on talks like "The Ten Best Things About Go" or
"A Quick Intro to Clojure." They can be interesting, but generally I find them
wishy-washy. They're not compelling enough to get me to commit to doing
serious work in a new language, and they don't discuss any single problem in
enough detail to inspire me to rethink things.

So I think that, in general, talks about really specific pieces of software are
the best, and that means talks about software in Perl (or Python or Bash or
Go...) because that shows the actual solution that was made. Most of these
talks, I think, would be interesting to all sorts of people who don't use the
underlying language or system. If you work on an ORM in Python, would a talk
on DBIx::Class be interesting? Yes, I think it could be. Could a talk on
q.py be useful for just about anybody who debugs code? Yes. And so on.

I'm really hoping to see some interesting real-problem-related talks show up
this year, and plan to go to whichever ones look the most concrete. I also
hope to give some talks like that. Talks like that are my favorite to give,
and I look forward to spending more time talking about solving real problems
than talking about abstractions.

OSCON's call for participation tends to come out in January. That should be
plenty of time to think about our most interesting solved problems!

Over the last few weeks, I've done a bit of pair programming across the
Internet, which I haven't done in years. It was great! Most of this was with
Ingy döt Net and Frew Schmidt.

As is often the case, the value wasn't only in the work we did, but in the
exchange of ideas while doing it. I got to see both Ingy and Frew using their
tools, and it made me want to steal from them. It also helped me get a handle
on what things I didn't want to change in my own setup, and why. It's
definitely something I'd like to do more often.

Both Ingy and Frew were using tmux, the
terminal multiplexer. tmux is a lot like GNU screen, which I've been using
for at least fifteen years. If you're not using either one, and you use a
unix, you really ought to start! They help me get a lot of my work
parallelized and simplified. I first learned of tmux a few years ago when
I learned that several members of the Moose core dev team had started using it
instead of screen. I tried to switch at the time, but it didn't work out.
It crashed too much, its Solaris support seemed spotty, and basically it got in
my way. Now, inspired by looking at what Ingy and Frew were doing, I felt like
trying again. I sat down and read most of the tmux
book and was convinced in theory.
Although I don't like every difference between screen and tmux, there were
clear benefits.

Then I got to work actually switching, which meant producing a tolerable
.tmux.conf. I
started with the one I'd made years before and slowly added to it as I read
more about tmux's features. It's clear that I've got more improvements to
make, but they're going to require a few months of using my current config to
figure things out.
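To give a flavor of what goes into such a file, here are a few of the kinds of
settings involved. These are real tmux options, but illustrative picks, not
necessarily my configuration:

```
# screen refugees often move the prefix from C-b to C-a
set -g prefix C-a
unbind C-b

# screen-style last-window toggle
bind-key C-a last-window

# number windows from 1 and keep plenty of scrollback
set -g base-index 1
set -g history-limit 50000
```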

When I paired with Ingy, we used
PairUp, his instant pairing
environment. Basically, you provision a Debian-like VM using whatever system
you want (we were using RackSpace, but I tried it with EC2, also) and, with one
command, create a useful environment for pairing in a shared tmux session.
We didn't actually work on anything. Instead, he showed me PairUp and we
encountered enough foibles along the way that we got to pair on fixing up the
pairing environment. It was fun.

I saw a lot of the tools he was using, as we went, and one of them was his
dotfile manager. I've seen a lot of dotfile managers, although I've never
really switched to using one. Instead, I was using a fairly gross hack of my
own, using GNU make to install my dotfiles. The tool that Ingy was using,
..., was interesting enough to get me to
switch. I've converted almost all of my config repositories to using it, and I
feel good about this.

... isn't a huge magic change in how to look at config files, and that's why
I like it. It's also not just "your dotfiles in a repo." It's got two bits
that make it very cool.

First, it is configured with a list of repositories containing your
configuration:
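It might look something like this. The repository names here are hypothetical,
and the exact syntax of the conf file may differ a bit from what's shown; see
the ... documentation for the real format:

```
# ~/.../conf
dots:
- git@github.com:ingydotnet/loop-dots.git
- git@github.com:rjbs/rjbs-dots.git
- git@github.com:rjbs/rjbs-private-dots.git
```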

Each one of these repositories is kept in sync in ~/.../src, and the files in
them are symlinked into your home directory. Any file in the first repo takes
precedence over files in later repositories, so you can establish canonical
behaviors early and add specialized ones later.

The second interesting bit is provided by the loop-dots repository above. It
sets up a number of config files (like .zshrc and .vimrc to name just two)
that loop over the rest of the dots repositories, sourcing subsidiary files.
So there's a global .zshrc, but almost the only thing it does is load the
.zshrc files of other repositories. This makes it very simple to divide up
your config files into roles. I can have a rjbs-private-dots that just adds
on my "secret data" to my normal dot files. At work, I'll have an
rjbs-work-dots that sets up variables needed there.
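The dispatch in the global .zshrc amounts to something like this little loop, a
sketch assuming the ~/.../src checkout location mentioned above:

```shell
# Source the .zshrc provided by each dots repository, if it has one.
for rc in ~/.../src/*/.zshrc; do
  if [ -r "$rc" ]; then . "$rc"; fi
done
```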

Finally, there's another key benefit: each repository is basically just a
bunch of dot files in a repo, even though ... is more than that. If I ever
decide that ... is nuts, bailing out of using it is very simple. I don't
need to convert lots of things out of it, I just need to replace the ...
program with, say, cp.

I'm only about a week into this big set of updates, but so far I think it's
going well. Of course, time will tell. I haven't yet updated my Linode box,
where I do quite a lot of my work, to use my ... config. Tomorrow…

I picked up DEFCON a few months ago
on Steam. It's a game inspired by the "big nuclear war boards" we saw in
movies like Dr. Strangelove or, closer to the mark, WarGames. Each player
controls a section of the world. The game starts with a few very short bits of
placing units and quickly turns into a shooting war. Players launch fighters,
deploy fleets, and eventually send out bombers, subs, and ICBMs. The game
looks gorgeous.

I was intrigued by "office mode." In this mode, the game is time limited, runs
in a window, and stays mostly out of your way. My understanding was that it
would be a good fit to run while working my day job. I'd just check in on it
once in a while to issue new orders, but mostly I could ignore it. After all,
a lot of the time was sure to be spent with missiles just flying through the
air. I got in touch
with some friends to organize a game, and Florian Ragwitz and I gave it a shot
today.

Unfortunately, it wasn't quite what I expected. The first few phases of the
game were quite rapid-fire and required a good bit of my brain for about twenty
minutes. After that, things calmed down, but not enough. It was not a great
background activity. I couldn't, for example, check in for five minutes at a
time between pomodori.

On the other hand, it was fun. I guess I should spend a bit more time
fighting AIs, though, because Florian utterly destroyed me. I think I had one
weapon hit a target, doing a fair bit of damage to Naples. Meanwhile, he
destroyed about half the population of the USSR. (Florian was playing as
Europe, and I was the USSR. Strangely, Europe, not the USSR, controls Kiev,
Warsaw, and Dnipropetrovsk.)

Over a decade ago, Paul Elliott wrote a tiny piece of counterfactual history
called The Gygax/Arneson
Tapes. It
recounts the history of the world's most famous role-playing game, Mazes &
Minotaurs, in which the players take on larger-than-life Greek-style heroes in
Sword and Sandal adventures.

A while later, the amazing Olivier Legrand "dug up and published" the original
1972 rules for Mazes & Minotaurs. Of
course, in reality he wrote it. All of it. It's a complete, good, playable
RPG, written from half a page of inspiration and from the little brown books
of D&D.

Then, later, he produced the 1987 "revised"
edition. This gives us the
three core books you'd expect: the player's manual, the Maze Master's guide,
and the creature compendium. Later came the M&M Companion, Viking & Valkyries
(an alternate setting), and perhaps most amazingly of all, Minotaur
Quarterly, an excellent
magazine of add-on material for RM&M. Of course, sometimes it included
"republished" articles from the days of OM&M.

The whole set of books is well done. They're all written as if the false
history is true, and with a bit of tongue in cheek, but they're still good,
playable games.

For about a year and a half, give or take, I ran a modified M&M
game and it went well. I might run it
again some day, either in that same setting or in the canonical Mythika, if I
get around to watching a bunch more
peplum films. I advise all
fellow old school RPG fans to give M&M a look.

Preface

When I wrote Dist::Zilla, there were a few times that I knew I was introducing
encoding bugs, mostly around Pod handling and configuration reading. (There
were other bugs, too, that I didn't recognize at the time.) My feeling at the
time was, "These bugs won't affect me, and if they do I can work around them."
My feeling was right, and everything was okay for a long time.

I put off fixing this for a long time, because I knew how deeply the bugs ran
into the foundation. I'd laid them there myself! There were a number of RT
tickets or GitHub pull requests about this, but they all tended to address the
surface issues. This is really not the way to deal with encoding problems.
The right thing to do is to write all internal code expecting text where
possible, and then to enforce encode/decode at the I/O borders. If you've
spent a bunch of time writing fixes to specific problems inside the code, then
when you fix the border security you need to go find and undo all your internal
fixes.

My stubborn refusal to fix symptoms instead of the root cause left a lot of
tickets mouldering, which was probably very frustrating for anybody affected.
I sincerely apologize for the delay, but I'm pretty sure that we'll be much
better off having the right fix in place.

The work ended up getting done because David Golden and I had been planning for
months to get together for a weekend of hacking. We decided that we'd try to
do the work to fix the Dist::Zilla encoding problems, and hashed out a plan.
This weekend, we carried it out.

The Plan

As things were, Dist::Zilla got its input from a bunch of different sources,
and didn't make any real demand of what got read in. Files were read raw, but
strings in memory were … well, it wasn't clear what they were. Then we'd jam
in-memory strings and file content together, and then either encode or not
encode it at the end. Ugh.

What we needed was strict I/O discipline, which we added by fixing libraries
like Mixin::Linewise and Data::Section. These now assume that you want text
and that bytes read from handles should be UTF-8 decoded. (Their documentation
goes into greater detail.) Now we'd know that we had a bunch of text coming in
from those sources, great! What about files in your working directory?

Dist::Zilla's GatherDir plugin creates OnDisk file objects, which get their
content by reading the file in. It had been read in raw, and would then be
mucked about with in memory and then written back out raw. This meant that
things tended to work, except when they didn't. What we wanted was for the
files' content to be decoded when it was going to be treated as a string, but
encoded when written to disk. We agreed on the solution right away:

Files now have both content and encoded_content and have an encoding.

When a file is read from disk, we only set the encoded content. If you try
reading its content (which is always text) then it is decoded according to its
encoding. The default encoding is UTF-8.

When a file is written out to disk, we write out the encoded content.

There's a good hunk of code making sure that, in general, you can update either
the encoded or decoded content and they will both be kept up to date as needed.
If you gather a file and never read its decoded content before writing it to
disk, it is never decoded. In fact, its encoding attribute is never
initialized… but you might be surprised by how often your files' decoded
content is read. For example, do you have a script that selects files by
checking the shebang line? You just decoded the content.
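The shape of the arrangement, reduced to a sketch — this is an illustration of
the idea, not Dist::Zilla's actual code, and Sketch::File is a made-up name:

```perl
use strict;
use warnings;
use Encode ();

package Sketch::File;

sub new {
  my ($class, %arg) = @_;
  my $self = bless { encoding => $arg{encoding} || 'UTF-8' }, $class;
  $self->{encoded_content} = $arg{encoded_content};
  return $self;
}

# Reading ->content decodes the bytes on demand, according to ->encoding.
sub content {
  my ($self) = @_;
  $self->{content} = Encode::decode($self->{encoding}, $self->{encoded_content})
    unless defined $self->{content};
  return $self->{content};
}

# Writing to disk always uses the encoded form.
sub encoded_content { $_[0]{encoded_content} }

package main;

my $file = Sketch::File->new(encoded_content => "caf\xc3\xa9\n");
print $file->content;   # the decoded text; decoding happened only just now
```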

This led to some pretty good bugs in later tests, hitting a file like
t/lib/Latin1.pm. This was a file intentionally written in Latin-1. When a
test tried to read it, it threw an exception: it couldn't decode the file!
Fortunately, we'd already planned a solution for this, and it was just fifteen
minutes' work to implement.

There is a way to declare the encoding of files.

We've added a new plugin role, EncodingProvider, and a new plugin,
Encoding, to deal with this. EncodingProvider plugins have their
set_file_encodings method called between file gathering and file munging, and
they can set the encoding attribute of a file before its contents are likely
to be read. For example, to fix my Latin-1 test file, I added this to my
dist.ini:

[Encoding]
filename = t/lib/Latin1.pm
encoding = Latin-1

The Encoding plugin takes the same file-specifying arguments as PruneFiles. It
would be easy for someone to write a plugin that will check magic numbers,
or file extensions, or whatever else. I think the above example is all that
the core will be providing for now.

You can set a file's encoding to bytes to say that it can't be decoded and
nothing should try. If something does try to get the decoded content, an
exception is raised. That's useful for, say, shipped tarballs or images.

Pod::Weaver now tries to force an =encoding on you by @Default

The @Default pluginbundle for Pod::Weaver now includes a new Pod::Weaver
plugin, SingleEncoding. If your input has any =encoding directives,
they're consolidated into a single directive at the top of the document… unless
they disagree, in which case an exception is raised. If no directives are
found, a declaration of UTF-8 is added.

For sanity's sake, UTF-8 and utf8 are treated as equivalent… but you'll end
up with UTF-8 in the output.

You can probably stop using Keedi Kim's Encoding Pod::Weaver plugin now.
If you don't, the worst case is that you might end up with two mismatched
encoding directives.

Your dist (or plugin) might be fixed!

If you had been experiencing double-encoded or wrongly-encoded content, things
might just be fixed. We (almost entirely David) did a survey of dists on the
CPAN and we think that most things will be fixed, rather than broken by this
change. You should test with the trial release!

Your dist (or plugin) might be broken!

...then again, maybe your code was relying, in some way, on weird text/byte
interactions or raw file slurping to set content. Now that we think we've
fixed these in the general case, we may have broken your code specifically.
You should test with the trial release!

The important things to consider when trying to fix any problems are:

files read from disk are assumed to be encoded UTF-8

the value given as content in InMemory file constructors is expected to be
text

FromCode files are, by default, expected to have code that returns text;
you can set (code_return_type => 'bytes') to change that

your dist.ini and config.ini files must be UTF-8 encoded

DATA content used by InlineFiles must be UTF-8 encoded

if you want to munge a file's content like a string, you need to use
content

if you want to munge a file's content as bytes, you need to use
encoded_content

If you stick to those rules, you should have no problems… I think! You should
also report your experiences to me or, better yet, to the Dist::Zilla mailing
list.

Most importantly, though, you should test with the trial release!

The Trial Release

Thanks!

I'd like to thank everyone who kept using Dist::Zilla without constantly
telling me how awful the encoding situation was. It was awful, and I never got
more than a few little nudges. Everyone was patient well beyond reason.
Thanks!

Also, thanks to David Golden for helping me block out the time to get the work
done, and for doing so much work on this. When he arrived on Friday, I was
caught up in a hardware failure at the office and was mostly limited to
offering suggestions and criticisms while he actually wrote code. Thanks,
David!

I've written a bunch of code that deals with APIs behind OAuth before. I wrote
code for the Twitter API and for GitHub and for others. I knew roughly what
happened when using OAuth, but in general everything was taken care of behind
the scenes. Now as I work on furthering the control of my programmatic day
planner, I need to deal with web services that don't have pre-built Perl
libraries, and that means dealing with OAuth. So far, it's been a big pain,
but I think it's been a pain that's helped me understand what I'm doing, so I
won't have to flail around as much next time.

I wanted to tackle Instapaper first. I knew just what my goal automation would
look like, and I'd spent enough time bugging their support to get my API keys.
It seemed like the right place to start. Unfortunately, I think it wasn't the
best service to start with. It felt a bit like this:

Hi! Welcome to the Instapaper API! For authentication and authorization,
we use OAuth. OAuth can be daunting, but don't worry! There are a lot of
libraries to help, because OAuth is a popular standard!

By the way, we've made our own changes to OAuth so that it isn't quite
standard anymore!

For one thing, they require xAuth. Why? I don't know, but they do. I futzed
around trying to figure out how to use
Net::OAuth. It didn't work.
Part of the problem seemed to be that no matter what I did, the xAuth
parameters ended up
in the HTTP headers instead of the post body, and it wasn't easy to change the
request body because of the various layers in play. I searched and searched
and found what seemed like it would be a big help:
LWP::Authen::OAuth.

It looked like just what I wanted. It would let me work with normal web
requests using an API that I knew, but it would sign things transparently. I
bodged together this program:

Great! With this done, I can get my list of bookmarks and give myself points
for reading stuff that I wanted to read, and that's a big success right there.
I mentioned my happiness about this in #net-twitter, where the OAuth experts
I know hang out. Marc Mims said, basically, "That looks fine, except that it's
got a big glaring bug in how it handles requests." URIs and OAuth encode
things differently, so once you're outside of ASCII (and maybe before then),
things break down. I also think there might be other issues you run into,
based on later experience. I'm not sure LWP::Authen::OAuth can be entirely
salvaged for general use, but I haven't tried much, and I'd be the wrong person
to figure it out, anyway.

Still, I was feeling pretty good! It was time, I decided, to go for my next
target. Unfortunately, my next target was Feedly, and they've been sitting on
my API key request for quite a while. They seem to be doing this for just
about everybody. Why do they need to scrutinize my API key anyway? I'm a
paid lifetime account. Just give me the darn keys!

Well, fine. I couldn't write my Feedly automation, so I moved on to my third
and, currently, final target: Withings. I
wanted code to get my last few weight measurements from my Withings scale. I
pulled up their API and got to work.

The first roadblock I hit was that I needed to know my numeric user id, which
they really don't put anyplace you can find it. I had to dig for about half an
hour before I found it embedded in a URL on one of their legacy UI pages.
Yeesh!

After that, though, things went from tedious to confusing. I was getting
directed to a URL that returned a bodyless 500 response. I'd get complaints
about bogus signatures. I couldn't figure out how to get token data out of
LWP::Authen::OAuth. I decided to bite the bullet and figure out what to do
with Net::OAuth::Client.

As a side note: Net::OAuth says "you should probably use Net::OAuth::Client,"
and is documented in terms of it. Net::OAuth::Client says, "Net::OAuth::Client
is alpha code. The rest of Net::OAuth is quite stable but this particular
module is new, and is under-documented and under-tested." The other module I
ended up needing to use directly, Net::OAuth::AccessToken, has the same
warning. It was a little worrying.

This is how OAuth works: first, I'd need to make a client and use it to get a
request token; second, I'd need to get the token approved by the user (me) and
turned into an access token; finally, I'd use that token to make my actual
requests. While at first, writing for Instapaper, I found Net::OAuth to feel
overwhelming and weird, I ended up liking it much better when working on the
Withings stuff. First, code to get the token:
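It came out roughly like this. A sketch: the consumer key and secret are
placeholders, and the Withings endpoint paths are from memory, so check them
against the service docs:

```perl
use strict;
use warnings;
use Net::OAuth::Client;

my %token_store;          # values saved by the session callback
my $session = sub {       # behaves like CGI's param: get or set a value
  my $key = shift;
  $token_store{$key} = shift if @_;
  return $token_store{$key};
};

my $client = Net::OAuth::Client->new(
  'CONSUMER-KEY',
  'CONSUMER-SECRET',
  site               => 'https://oauth.withings.com',
  request_token_path => '/account/request_token',
  authorize_path     => '/account/authorize',
  access_token_path  => '/account/access_token',
  callback           => 'oob',
  session            => $session,
);

# Generates a request token behind the scenes and builds the URL to visit.
print $client->authorize_url, "\n";

# Paste in the token and verifier from the redirected URL (no prompt printed).
chomp(my $token    = <STDIN>);
chomp(my $verifier = <STDIN>);

my $access = $client->get_access_token($token, $verifier);
printf "token: %s\nsecret: %s\n", $access->token, $access->token_secret;
```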

The thing that had me confused the longest was that coderef in $session. Why
do I need it? Under the hood, it looks optional, and it can be, but it's
easier to just provide it. I'll come back to that. Here's how you use the
program.

When you run the program, authorize_url generates a new URL that can be
visited to authorize a token to be used for future requests. The URL is
printed to the screen, and the user can open the URL in a browser. From there,
the user should be prompted to authorize access for the requesting application
(as authenticated by the consumer id and secret). The website then redirects
the user to the callback URL. I gave "oob", which is obviously junk. That's
okay because the URL will sit in my browser's address bar and I can copy out
two of its query parameters: the token and the verifier. I paste these into
the silently waiting Perl program. (I could've printed a prompt, but I
didn't.)

Now that the token is approved for access, we can get an "access token." What?
Well, the get_access_token method returns a Net::OAuth::AccessToken, which
we'll use something like an LWP::UserAgent to perform requests against the API.
I'll come back to how to use that a little later. For now, let's get back to
the $session callback!

To use a token, you need to have both the token itself and the token secret.
They're both generated during the call to authorize_url, but only the token's
value is exposed. The secret is never shared. It is available, though, if
you've set up a session callback to save and retrieve values. (The session
callback is expected to behave sort of like CGI's venerable param routine.)
This is one of those places where the API seems tortured to me, but I'm putting
my doubts aside because (a) I don't want to rewrite this library and (b) I
don't know enough about the problem space to know whether my feeling is
warranted.

Anyway, at the end of this program we spit out the token and token secret and
we exit. We could instead start making requests, but I always wanted to have
two programs for this. It helps me ensure that I've saved the right data for
future use, rather than lucking out by getting the program into the right
state. After all, I'm only going to get a fresh auth token the first time.
Every other time, I'll be running from my saved credentials.

This starts to look good, to me. I make an OAuth client (the code here is
identical to that in the previous program) and then make an AccessToken.
Remember, that's the thing that I use like a LWP::UserAgent. Here, once I've
got the AccessToken, I get a resource and from there it's just decoding JSON
and mucking about with the result. (The data provided from the Withings
measurements API is a bit weird, but not bad. It's certainly not as weird as
much of the data I've been given by other APIs!)

I may even go back to update my Instapaper code to use Net::OAuth, if I get a
burst of energy. After all, the thing that gave me trouble was dealing with
xAuth using Net::OAuth. Now that I have my token, it should just work… right?
We'll see.

A few years ago I heard about the game Microscope and
it sounded way cool. In summary: it is.

It is in some ways like a role-playing game, but in other ways it's something
else entirely. When you play Microscope, you're not telling the story of a few
characters, and you're not trying to solve a puzzle. You're building a history
on a large scale. It's meant for building stories on the scale of decades,
centuries, or millennia.

The game starts with a few things being decided upon before play really begins:

what's the general theme of the history being built?

what things are out of bounds

what things are explicitly allowed

From the start, play rotates. It is a game, although it's a game without
victory conditions. Each round of the game, each player makes one or two moves
from the short list of possible moves. The possible moves, though, are all of
great importance to the final outcome. Basically, each player may:

declare the occurrence of a player-described epoch anywhere within the timeline

add a major event to an existing epoch

invite the rest of the table to narrate a specific scene within the timeline

As with many other story-building games, once a fact is established, it cannot
be contradicted. Since there's not really any challenge to getting your facts
onto the table, the game is entirely co-operative. There is no fighting over
the story allowed. Instead, there's a rule for suggesting "wait, before you
write that down, maybe it would be cooler if…"

I only managed to play Microscope once, but it went pretty well. I think after
two or three more games, it would be great fun.

I had originally wanted to start a regular set of Microscope games. Whoever
committed to each round would show up first for a game of Microscope,
establishing a setting. At the end of the session, the players could pick a
point (or points) within the history where they'd like to play a traditional
RPG, and then we'd have three sessions of that. It struck me as likely to be a
ton of fun, but I'm not sure I can really wrangle up players for it. Here's
the pitch I wrote myself:

Monthly Microscopy

Microscope is a game of fractal history building. When you play Microscope,
you start with a big picture and you end with a complex history spanning
decades or centuries. Microscope is a world-building game.

My plan is to play Microscope over and over, building new worlds, and then
running traditional tabletop games in those worlds.

Every month, we'll play a game of Microscope. The big picture will be
determined before we play, so everyone who shows up will have at least some
idea what to expect. (Knowing the big picture only gets you so far in
Microscope, though!)

At the end of the game, we'll have our setting described by a set of genre
boundaries and specific facts about the world. We'll have to figure out, now,
what kind of RPG we want to play in that world. When during the timeline does
it take place? Who are the characters? These are answered by bidding.

At the end of each session, each player in attendance gets five points. If
it's your first game, you get twenty. At the end of every Microscope game,
players can suggest scenarios for the month's RPG, and then bid on a winning
suggestion using their points. Each player may bid as many of his or her
points across as many of the suggestions as he or she would like. The bids are
made in secret, and all bid points are used up.

There will be three post-Microscope sessions each month. They might form a
mini-campaign, or they might be three unrelated groups of characters, as
determined by the winner of the plot auction.

Each game will be scheduled at least a week in advance, but won't have a fixed
schedule. Times and days will move around to be friendly to different time
zones and schedules. Microscope games will be played with G+ Hangouts and
Docs. RPG sessions will be played on Roll20 — but we might use Skype for voice
chat if their voice chat remains as problematic as it's been.

In Perl 5.10, the idea of a lexical topic was introduced. The topic is $_,
also known as "the default variable." If you use a built-in routine that
really requires a parameter, but don't give it one, the odds are good that it
will use $_. For example:

s/[“”]/"/g;
chomp;
say;

These three operations, all of which really need a parameter, will use $_.
The topic will be substituted-in by s///, chomped by chomp, and said by
say. Lots of things use the topic to make the language a little easier to
write. In constrained contexts, we can know what we're doing without being
explicit about every little thing, because our conversation with the language
has been topicalized.

Often, this leads to clear, concise code. Other times, it leads to horrible,
hateful action at a distance. Those times are the worst.
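Imagine code shaped like this. It's a reconstruction of the kind of example
under discussion; utter and log_event are illustrative names:

```perl
use strict;
use warnings;

sub utter     { $_ = "LOG: $_[0]" }   # assigns to the global $_ -- no local!
sub log_event { utter($_[0]) }

my @files = ('input.txt', 'notes.txt');

for (@files) {
  log_event("checking $_");
  next unless -f && -r;   # $_ now holds a log string, not a filename!
  # ... investigate_file() would go here, but it's never reached ...
}

print "$_\n" for @files;  # the filenames themselves have been clobbered
```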

Somewhere down the call stack, log_event sometimes calls utter. utter
assigns to the topic without localizing it, and if nothing between your code
and utter localized $_, then utter assigns to your topic, which happens to
be aliased to an element in @files. The filename gets replaced with a
logging string, the string fails the (-f && -r) test, and so the file isn't
investigated. This is a bug, but it's not a bug in perl; it's a bug in your
code. Is it a bug that this bug is so easy to write?

Well, that's hard to say. I don't think so. It's quite a bit of rope, though,
that we're giving you with a default, global variable that often gets
aliased by default!

If the variable wasn't global, though, this problem would be cleared up.
We'd have a topic just for the piece of code you're looking at, and you could
hold the whole thing in your head, and you'd be okay. We already have a kind
of variable for that: lexical variables! So, Perl 5.10 introduced my $_.

So, to avoid having your topic clobbered, you could rewrite that loop with
for my $_ (@files) in place of the plain for (@files).

When log_event is entered, it has no way to see your lexical topic — the one
with a filename in it. It can't alter it, either. It's like you've finally
graduated to a language with proper scoping! The built-in filetest operators
know to look at the lexical topic, if it's in effect, so they just work. What
about investigate_file? It's a user-defined subroutine, and it wants to be
able to default to $_ if no argument was passed.
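
Here's a runnable sketch of that underscore prototype. The sub name shouty is a made-up stand-in; the post's investigate_file would be declared the same way:

```perl
use strict;
use warnings;

# With the (_) prototype, a call with no arguments gets the caller's $_
# aliased in as $_[0].
sub shouty (_) {
  my ($text) = @_;
  return uc $text;
}

my @out;
for ('hello', 'world') {
  push @out, shouty();        # no argument, so the topic is used
}
push @out, shouty('bye');     # an explicit argument still wins
print "@out\n";
```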

That underscore in the prototype says "if I get no arguments, alias $_[0] to
whichever topic is in effect where I was called." That's great and does just
what we want, but there's another problem. We put a (_) prototype on our
function. We actually needed (_@), because we take more than one argument.
Or stated more simply: the other problem is that now we're thinking about
prototypes, which is almost always a road to depression.

Anyway, what we've seen so far is that to gain much benefit from the lexical
topic, we also need to update any topic-handling subroutine that's called while
the topic is lexicalized. This starts to mean that you're auditing the code
you call to make sure that it will work. This is a bummer, but it's only one
layer deep that you need to worry about, because your lexical topic ends up in
the subroutine's @_. It does not, for example, end up in a
similarly-lexicalized topic in that subroutine. Phew!

The story doesn't end here, though. There's another wrinkle, and it's a pretty
wrinkly one.

One of the cool things we can do with lexical variables is build closures over
them. Behold, the canonical example:

sub counter {
  my $i = 0;
  return sub { $i++ };
}

Once $_ is a lexical variable, we can close over it, too. Is this a problem?
Maybe not. Maybe this is really cool:

for my $_ (@messages) {
  push @callbacks, sub { chomp; say };
}

Those nice compact callbacks use the default variable, but they have closed
over the lexical topic as their default variable. Nice!

Even though they look like blocks, the things between squiggly braces after
try and catch (Try::Tiny's, say) are subroutines, so there's a calling
boundary there. When the
sub passed to catch is going to get called, the exception that was thrown has
been put into $_. It's been put into the global topic, because otherwise
it just couldn't work. It can't communicate its lexical topic into a
subroutine that wasn't defined within its lexical environment. Subroutines
only close over lexicals in their defining environment.

Speaking of which, there's a lexical $_ in the environment in which the catch
sub is defined. In case you're on the edge of your seat wondering: yes, it
will close over that topic. The $_ in the catch block won't match a regex
against the $_ that has the exception in it, it will match against the
lexical topic established way back up at the top of the for loop. What about
log_exception? Well, it will get one topic or the other, depending on its
subroutine prototype.

And, hey, that's one of the two ways we can fix the catch block above: give
log_exception an underscore prototype, or copy the global $_ into a private
lexical at the top of the catch sub.

…and that's why my $_ became experimental in Perl 5.18.0. It seems like it
just didn't work. It was a good idea to start with, and it solves a real
problem, and it seems like it could make the whole language make more sense.
In practice, though, it leads to confusing action-at-a-distance-y problems,
because it pits the language's fundamentals against each other. If we fix the
lexical topic, it will almost certainly change how it works or is used, so
relying heavily on its current behavior would be a bad idea. If we can't fix
the lexical topic, we'll remove it. That makes relying on its behavior just as
bad. When relying on a feature's current behavior is a bad idea, we mark it
experimental and issue warnings, and that's just what we've done in v5.18.0.

I must have done something right when I attended YAPC::Asia 2011, because they
invited me back this year. I was *delighted* to accept the invitation, and I'm
glad I did.

I said I'd give a talk on the state of things in Perl
5, which I'd done
at YAPC::NA and OSCON, and which had gone well. It seemed like the topic to
cover, given that I was presumably being invited over in large part due to my
current work as pumpking. I only realized at the last minute that I was giving
the talk as a keynote to a plenary session. This is probably good. If I'd
known further in advance, I might have been tempted to do more editing, which
would likely have been a mistake.

Closer to the conference, I was asked whether I could pick up an empty slot and
do something, and of course I agreed. I had some pipe dreams of making a new
talk on the spot, but cooler heads prevailed and I did a long-shelved talk
about Dist::Zilla.

Both talks went acceptably, although I was unhappy with the Dist::Zilla talk.
I think there's probably a reason I shelved it. If I do talk about Dist::Zilla
again, I'll write new material. The keynote went very well, although it wasn't
quite a full house. I wasn't up against any other speaker, sure, but I was
competing
with the iPhone 5s/5c launch. Ah, well! I got laughs and questions, both of
which were not guaranteed. I also think I got played off about ten minutes
early, so I rushed through the end when I didn't need to.

This wouldn't have happened, if I'd stuck to my usual practices. Normally when
I give a talk, I put my iPhone on the podium with a clock or timer on it, and I
time myself. I'd been using an iOS app called Night Stand for this for the
last few years, but on Friday I couldn't. I had, for no very good reason,
decided
to upgrade my iPhone and iPad to iOS 7 on the morning before the conference.
Despite briefly bricking both devices, I only encountered one real problem:
Night Stand was no longer installing. After my keynote, I went and installed a
replacement app, and chastised myself for not sticking to my usual routine.

By the time I was giving that keynote, I'd been in town for four days. A lot
of activity tends to follow YAPCs, so it would've been nice to stick around
afterward instead, but I was concerned about getting my body at least
somewhat onto Tokyo time beforehand. Showing up to give a presentation half
dead didn't seem like a good plan.

The trip wasn't great. I left home around 5:30 in the morning and headed to
the bus stop. Even though it was going to be 80°F most of my time in Tokyo, it
was only 40°F that morning, and I traveled in long pants. I agonized over
this, and had thought about wearing sweat pants over shorts, or changing once I
got to the airport. I decided this was ridiculous, though. It turned out,
later, that I was wrong.

I flew out of Newark, which was just the way it always is. I avoided eating
anything much because prices there are insane, but when my flight was delayed
for three hours, I broke down and had a slice of pizza and an orangina. I also
used the time to complete my "learn a new game each week" goal by learning
backgammon. I killed a lot of time over the next few days with that app. It
didn't take long to get bored of my AI opponent, but I haven't yet played
against a human being.

The flight was pretty lousy. I'd been unable to get an aisle seat, so I wasn't
able to get up and move around as much as I wanted. Worse, the plane was
hot. I've always found planes to be a little too warm on the ground and a
little too cool in the air. The sun was constantly baking my side of the
plane, though, so it was nearly hot to the touch. I was sweating and gross,
and I wished I had switched to shorts. The food was below average. I chose a
bad movie to watch. When we finally landed, immigration took about an hour. I
began to despair. It would be 24 hours of travel by the time I reached the
Pauleys', where I would stay. Was I really going to endure another awful 24
hours in just six days?

My spirits were lifted once I got out of the airport. (Isn't that always the
way?) I changed my dollars to yen, bought a tiny bottle of some form of Pepsi,
and went to squint at a subway map.

On my previous trip, I had been utterly defeated by the map of the subway at
Narita. It looks a lot like any other subway map, but at each station are two
numbers, each 3-4 digits. Were they time? Station numbers? Did I need to
specify these to buy a ticket? The ticketing machines, though they spoke
English, were also baffling. I was lost and finally just asked the station
agent for help getting to Ueno Station.

This time, I felt like an old hand. I had forgotten all about the sign, but
its meaning was immediately clear. They were prices for travel to each
station, in yen, for each of the two lines that serviced the route. I fed
¥1200 into a ticket machine, bought a ticket, and got on the Keisei line toward
Ueno. I probably could've done it with the machine's Japanese interface! I
felt like a champ. Later, of course, I'd realize that the Keisei line takes a
lot longer than the Skyliner, so maybe it wasn't the best choice… but I still
felt good. Also, that long ride gave me time to finally finish reading It
Can't Happen Here. Good
riddance to that book!

My sense of accomplishment continued as I remembered the way to Marty and
Karen's place. When I got in, I called home and confirmed that I was alive. I
said that before we did anything else, I needed a shower. Then we chatted
about this and that for a few hours and I decided that I didn't need to eat,
just sleep. When I woke up, the sun was already up! It was a great victory
over jet lag! Then I realized that it was 5:30 a.m., and the sun just gets up
quite early in Tokyo. Land of the rising sun, indeed!

I got some work done and called home again. (Every time I travel, FaceTime
grows more excellent!) Eventually, Karen and I headed out to check out the
things I'd put onto my "check out while in Tokyo" list. First up, the Meiji
Shrine!

We went to shops, did some wandering, and did not eat at Joël Robuchon's
place in Roppongi. (Drat!) We got soba and retired for the night. The next
day, we met up with Shawn Moore for more adventures. We went to Yoyogi Park,
got izakaya with Keith Bawden, Daisuke Maki, et al., and
Shawn and I ended our night with our Japanese Perl Monger hosts. We had a
variety of izakaya food, but nothing compared, for me, to a plate of sauteed
cabbage and anchovy. I could've eaten that all night. I also learned, I
think, that I don't like uni. Good to know!

The next day, Shawn, Karen, and I headed down to Yokohama. Shawn and I had to
get checked into our hotel. We planned to get to Kamakura to see the statue of
the Amida Buddha, but got too late of a start. They both shrugged it off, but
I felt I was to blame: we had to wait while I un-bricked my iPhone after my
first attempt to upgrade its OS. Sorry! (Of course, they got to go later,
so I'm not that sorry!)

Before leaving Minami-Senju, though, we got curry. Shawn had been very excited
for CoCo Curry on our 2011 trip, and I was excited for it this time. Their
curry comes in ten levels of hotness. I'd gotten level five, last time, and
this time got six. In theory, you have to provide proof that you've had level
five before (and, you know, lived) in order to get level six. I didn't have my
proof, though, and I thought I might need Shawn to badger the waitress for me.
Nope! I got served without being carded. I had found level five to be fairly
bland, and so I expected six to be just a bit spicy. It was hot! I didn't get
a photo! I really enjoyed it, and would definitely order it regularly if we
had a CoCo Curry place in Pennsylvania.

If I go back to Tokyo, I will eat level seven CoCo Curry. This is my promise
to you, future YAPC::Asia organizer. Yes, you may watch to see if I cry.

Our hotel was just fine. The only room I could get was a smoking room (yuck)
but that was the only complaint I had, and I knew what I was getting into
there. For some reason we turned on the television, and sumo was on. We
stared at this for a while, transfixed. It didn't last long, though. The
spectacle was interesting, but the sport much less so, at least to me. Karen
hit the road, Shawn and I worked on slides in earnest, and then we headed out
to look for food. I put Shawn in charge (this was a common theme of my trip)
and he found an excellent yakiniku
place. We ordered a bunch of stuff with no idea what it was, except the
tongue, and were not disappointed. (Shawn warned me at the outset: "I don't
know a lot of food words.")

After some more slide wrangling, we crashed and, the next morning, were off to
the conference.

YAPC::Asia is a strange conference for me. On both of my trips there, I've
been an invited speaker, and felt very welcome… but feeling welcome isn't the
same as feeling like a part of things. The language barrier is very
difficult to get past. It's frustrating, because you find yourself in a room
full of brilliant, funny, interesting people, but you can't quite
participate. It's sort of like being a young child again.

Of course, that's what happens when the room is full of Japanese-speakers
listening to another Japanese-speaker. It certainly need not be the case in
one-on-one conversation. I chatted with Daisuke Maki, Kenichi Ishigaki,
Hiroaki Kobayashi, and some others, but it was far too few and too infrequent.
It was much easier to stick to talking to the people I already knew. In
retrospect, this was pretty stupid. While it's true that I don't see (say)
Paul and Shawn and Karen very often, I can talk to them whenever I want, and I
know what topics to ask them about and so on.

This year, YAPC::Asia had eleven hundred people. So, that's something like a
dozen that I knew and 1088 that I didn't. Heck, there were even a few
westerners I didn't go pester, where there'd be no language issue. I wanted to
try to convince more of the amazing talent in the Japanese Perl community to
come hack on perl5.git, and for the most part, I did not do this outside of my
in-talk exhortation. In that sense, my YAPC::Asia was a failure of my own
making, and I regret my timidity.

In every other aspect, the conference was an amazing success as far as I could
tell. It was extremely friendly, professional, energetic, and informative. I
sat through a number of talks in Japanese, and they were really interesting.
People sometimes talk about how there's "CPAN" and "Darkpan" and that's that.
You're either working with "the community" or you're not. The reality is that
there are multiple groups. Of course "we" know that in "the" community. How
much crossover is there between the Dancer community and the Perl 5 Porters?
Some. Well, the Japanese Perl community — or, rather, the community in Japan
that made YAPC::Asia happen — has some crossover with the community that
makes YAPC::NA happen, but there are large disjunct segments, and they're
solving problems differently, and it's ridiculous to imagine that we can't
learn from each other. Even if it wasn't self-evident, it was evident in the
presentations that were given.

After attending the largest YAPC ever, by quite a lot (at 1100 people!) it was
also sad to learn that this may be the last YAPC::Asia in Tokyo for some time.
The organizers, Daisuke Maki and the enigmatic "941" have been doing it for
years, and have declared that they're done with it. It seems unlikely that
anyone will step in and run the conference in their stead, at least in Tokyo.
There may be, they suggested, a change to regional Perl workshops: one in
Sapporo, one in Osaka, and so on. Perl workshops are great, but will I make it
to the Osaka Perl Workshop? Well, we'll see.

If I do, though, I'm going to do my best Paul Fenwick impression and force
everyone there to talk to me all the time.

When the conference was over, Karen, Marty, Paul, Shawn and I headed to dinner
(with Marcel, Reini, and Mirjam) and then to… karaoke! At first, Marty was
reluctant, and not sure he'd stick around. Paul's opening
number changed his mind, though,
and we sang ridiculous songs for ninety minutes. I drank a Zima. A Zima! I
thought this was pretty ridiculous, but Paul one-upped me, or perhaps
million-upped me, by ordering a cocktail made with pig placenta. I declined
to sample it.

The next day, after a final FaceTime chat with Gloria and a final high five for
Paul, I headed out to the airport. In 2011, I cut it incredibly close and
nearly missed my plane, and I wasn't going to do that this time. Miyagawa
pointed me toward the Musashikosugi JR line and warned me that the ticket
terminals there were confusing. He was right, too. I wasted ten minutes
trying to figure them out before finally asking the station agent for help. If
I'd just started there, I would've made an earlier train and not ended up
sitting on a bench for forty minutes. So I ended my last train ride in Tokyo
much as I began my first one: baffled by the system, reduced to pleading for
help. I didn't mind, really. I'd just finished an excellent trip and was
feeling great. (I also felt pretty good about blaming the computer and not
myself, but that's another matter.)

Narita was fine. Great, even! The airline staff treated me like a king. I
got moved to an aisle seat with nobody beside me! I killed time in the
United lounge, had a few free beers, and transferred some movies to my iPad.
In short order, we were aboard and headed home. The flight was only eleven
hours, customs was quick, and soon (finally!) I was reunited with my family and
off to Cracker Barrel for a "welcome back to America" dinner.

It was a great YAPC, and the most important thing I learned was the same as
always: I'm there to talk to the people, not listen to the talks. I'll do
better next time!

These two examples highlight cases where lexical references to anonymous
subroutines would not have worked. The first argument to sort must be a
block or a subroutine name, which leads to awful code like this:

sort { $subref->($a, $b) } @list

With our greppy, above, we get to benefit from the parser-affecting behaviors
of subroutine prototypes. Although you can write sub (&@) { ... }, it has
no effect unless you install that into a named subroutine, and it needs to be
done early enough.
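
The greppy referred to above was elided; something like this sketch (the body is my guess) shows the (&@) prototype doing its parser trick:

```perl
use strict;
use warnings;

# The (&@) prototype lets callers pass a bare block, grep-style, with no
# 'sub' keyword.  The sub must be declared before it's called this way.
sub greppy (&@) {
  my ($code, @list) = @_;
  my @kept;
  for (@list) {
    push @kept, $_ if $code->($_);
  }
  return @kept;
}

my @odds = greppy { $_ % 2 } 1 .. 6;   # called with a bare block
print "@odds\n";
```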

On the other hand, lexical subroutines aren't just drop-in replacements for
code refs. You can't pass them around and have them retain their named-sub
behavior, because you'll still just have a reference to them. They won't be
"really named." So if you can't use them as parameters, what are their
benefits over named subs?

First of all, privacy. Sometimes, I see code like this:

package Abulafia;
our $Counter = 0;
...

Why isn't $Counter lexical? Is it part of the interface? Is it useful to
have it shared? Would my code be safer if that was lexical, and thus hidden
from casual accidents or stupid ideas? In general, I make all those sorts of
variables lexical, just to make myself think harder before messing around with
their values. If I need to be able to change them, after all, it's only a one
word diff!

Well, named subroutines are, like our variables, global in scope. If you
think you should be using lexical variables for things that aren't API, maybe
you should be using lexical subroutines, too. Then again, you may have to be
careful in thinking about what "aren't API" means. Consider this:

package Service::Client;
sub _ua { LWP::UserAgent->new(...) }

In testing, you've been making a subclass of Service::Client that overrides
_ua to use a test UA. If you make that subroutine lexical, you can't
override it in the subclass. In fact, if it's lexical, it won't participate in
method dispatch at all, which means you're probably breaking your main class,
too! After all, method dispatch starts in the package on which a method was
invoked, then works its way up the packages in @ISA. Well, package means
package variables, and that excludes lexical subroutines.

So, it may be worth doing, but it means more thinking (about whether or not to
lexicalize each non-public sub), which is something I try to avoid when coding.

So when is it useful? I see two scenarios.

The first is when you want to build a closure that's only used in one
subroutine. You could make a big stretch, here, and talk about creating a DSL
within your subroutine. I wouldn't, though.
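
The elided example was presumably something like this sketch (the helper body and the second package are invented; the point is the failed lookup):

```perl
use strict;
use warnings;

sub logger { return "LOG: $_[0]" }   # lives in main::

package Some::Module;                # a made-up second package

sub do_thing {
  return logger('did a thing');      # resolved as Some::Module::logger!
}

package main;

my $ok = eval { Some::Module::do_thing(); 1 };
print $ok ? "worked\n" : "failed: $@";
```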

Well… I might write it like that, but it won't work. logger is defined in
one package (presumably main::) and then called from two different packages.
Subroutine lookup is per-package, so you won't find logger. What you need is
a name lookup that isn't package based, but, well, what's the word?
Lexical!

So, you could make that a lexical subroutine by sticking my in front of the
subroutine declaration (and adding use feature 'lexical_subs' and, for now,
no warnings 'experimental::lexical_subs'). There are problems, though, like
the fact that caller doesn't give great answers, yet. And we can't really
monkeypatch that subroutine, if we wanted, which we might. (Strangely abusing
stuff is more acceptable in tests than in the production code, in my book.)
What we might want instead is a lexical name for a package subroutine. We
have that already! We just write this:

our sub logger { ... }
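
Either spelling gives you a lexically resolved name. Here's a my sub sketch crossing a package boundary (the package name is invented), which a plain named sub can't do:

```perl
use strict;
use warnings;
use experimental 'lexical_subs';   # harmless on perls where this is stable

# The lexically declared sub resolves across the package boundary,
# because lookup is lexical rather than per-package.
my sub logger { return "LOG: $_[0]" };

package Some::Module;

sub do_thing {
  return logger('did a thing');    # found lexically; no Some::Module::logger needed
}

package main;

print Some::Module::do_thing(), "\n";
```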

I'm not using lexical subs much, yet, but I'm pretty sure I will use them a
good bit more in the future!

Having finished the Zork trilogy, it was time for me to continue on into the
great post-Zork canon. I was excited for this, because it means lots of games
that I haven't played yet. First up: Starcross. I was especially excited for
Starcross! It's the first of Infocom's sci-fi games, and I only remembered
hearing good things. I'd meant to get started on the flight to YAPC::Asia, but
didn't manage until I'd begun coming home. On the train to Narita, things got
off to a weird start.

First, I realized I needed to consult the game's manual to get started. I'm
not sure if this was done for fun or as copy protection, but fortunately I had
a scan of the file I needed. After getting into the meat of the game, it was
time to get mapping. Mapping Starcross took a while to get right, but it was
fun. The game takes place on a huge space station, a rotating cylinder, in
which some of the hallways are endless rings. I liked the idea, but I think
that up/down, port/starboard, and fore/aft were used in a pretty confusing way.
I'm not sure the map really made sense, but it was a nice change of pace
without being totally incomprehensible.

The game's puzzles had a lot going for them. It was clear when there was a
puzzle to solve, and it was often clear what had to be done, but not quite how.
Some objects had multiple uses, and some puzzles had multiple solutions.
Unfortunately, it has a ton of the classic text adventure problems, and they
drained the fun from the game at nearly every turn.

The game can silently enter an unwinnable state, which you don't work out until
you can't solve the next puzzle. (It occurs to me that an interpreter with its
own UNDO would be a big help here, since I don't save enough.)

There are tasks that need to be repeated, despite appearances.

There are guess-the-verb puzzles, which far too often have as the "right" verb
a really strange option. For example, there's a long-dead spaceman, now just a
skeleton in a space suit.

> LOOK IN SUIT
It's a space suit with a dead alien in it.
> SEARCH SKELETON
You don't see anything special.
> EXAMINE SKELETON
It sure is dead.
> TOUCH SKELETON
Something falls out of the sleeve of the suit!

Argh!

There's a "thief" character that picks up objects and moves them around. It's
used to good effect (as was the thief in Zork Ⅰ) but it wastes time. Wasting
time wouldn't be a problem if there weren't a time limit built into part of
the game. The time limit can be worked around, but it means you need to play
the game in the right order, which might mean going back to an early save once
you work that out. (Why is it that I love figuring out the best play order in
Suspended, but not anything else?) Even that wouldn't be so bad, in part
because, happily, I had started by solving a number of puzzles that can be
solved in any order, but there was a problem. Most of the game's puzzles
center around collecting keys, so by the end of the game you're carrying a
bunch of keys, not to mention a few objects key to getting the remaining keys…
and there's an inventory limit. It's not even a good inventory limit, where
the game just says "you can't carry anything more." Instead, it's the kind
where, when you're carrying too much, you start dropping random things.

Argh!

It did lead to one amusing thing, at least, when I tried to pick up a key and
accidentally dropped the space suit I was wearing.

Still, the game is good. I particularly like the representational puzzles,
like the solar system and repair room. Its prose is good, but neither as
economical as earlier games nor as rich as later ones, making it inferior to
both. As in earlier games, I'm frustrated by the number of things mentioned
but not examinable. Getting "I don't know that word [which I just used]" is
worse than "you won't need to refer to that." I'm hoping that the larger
dictionaries of v5 games will allow for better messages like that. I've got a
good dozen games until I get to those, though.

Next up will be Suspended. I'm not sure how that will go, since I've played
that game many times every year for the past decade or so. After that, The
Witness, about which I know nearly nothing!

I always feel a little amazed when I realize how many of the things that really
interest me, today, are things that I was introduced to by my father. Often,
they're not even things that I think he's passionate about. They're just
things we did together, and that was enough.

One of the things I really enjoyed doing with him was playing text adventures.
It's strange, because I think we only did three (the Zork trilogy) and I was
not very good at them. I got in trouble for sneaking out the Invisi-Clues hint
book at one point and looking up answers for problems we hadn't seen yet. What
was I thinking?

Still, it's stuck with me, and I'm glad, because I still enjoy replaying those
games,
trying to write my own, and reading about the
craft.
Most of my (lousy, unfinished) attempts to make good text adventures have been
about making the game using existing tools. (Generally, Inform
6. Inform 7 looks amazing, but also
like it's not for me.) Sometimes, though, I've felt like dabbling in the
technical side of things, and that usually means playing around with the
Z-Machine.

Most recently, I was thinking about writing an assembler to build Z-Machine
code, and my thinking was that I'd write it in Perl 6. It didn't go too badly,
at first. I wrote a Perl 6 program that built a very simple Z-Machine
executable, I learned more Perl 6, and I even got my first
commit
into the Rakudo project. The very simple program was basically "Hello, World!"
but it was just a bit more complicated than it might sound, because the
Z-Machine has its own text encoding format called ZSCII, the Zork Standard Code
for Information Exchange, and dealing with ZSCII took up about a third of my
program. Almost all the rest was boilerplate to output required fields of the
output binary, so really the ZSCII code was most of the significant code in
this program. I wanted to write about ZSCII, how it works, and my experience
writing (in Perl 5)
ZMachine::ZSCII.

First, a quick refresher on some terminology, at least as I'll be using it:

a character set maps abstract characters to numbers (called code points)
and back

an encoding maps from those numbers to octets and back, making it possible
to store them in memory

We often hear people talking about how Latin-1 is both of these things, but in
Unicode they are distinct. That is: there are fewer than 256 characters in
Latin-1, so we can always store a character's code point in a single octet.
In Unicode, there are vastly more than 256 characters, so we must use a
non-identity encoding scheme. UTF-8 is very common, and uses variable-length
sequences of bytes. UTF-16 is also common, and uses different variable-length
byte sequences. There are plenty of other encodings for Unicode characters,
too.
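
A quick Perl illustration of the distinction, using the core Encode module (my example, not from the original):

```perl
use strict;
use warnings;
use Encode qw(encode);

# The same eleven characters under two encodings.  Latin-1's code points
# fit in one octet each; UTF-8 spends two octets on U+00FF.
my $str    = "Queensr\x{FF}che";         # "Queensrÿche"
my $latin1 = encode('ISO-8859-1', $str); # 11 octets
my $utf8   = encode('UTF-8', $str);      # 12 octets
printf "latin1: %d octets, utf8: %d octets\n",
  length $latin1, length $utf8;
```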

The Z-Machine's text representation has distinct character set and encoding
layers, and they are weird.

The Z-Machine Character Set

Let's start with the character set. The Z-Machine character set is not one
character set, but a per-program set. The basic mapping looks something like
this (abridged; the full table is in the Z-Machine spec):

  0x00          null
  0x0D          newline
  0x20 - 0x7E   the printable ASCII characters
  0x9B - 0xFB   the "extra characters"
  (the rest)    unassigned, or input-only control codes

The next thing to note is the "extra characters," which is where you'll be
headed if you're not just speaking English. Those 96 code points can be
defined by the programmer. Most of the time, they basically extend the
character repertoire to cover Latin-1. When that's not useful, though, the
Z-Machine executable may provide its own mapping of these extra characters by
providing an array of words called the Unicode translation table. Each
position in the array maps to one extra character, and each value maps to a
Unicode codepoint in the basic multilingual
plane. In other
words, the Z-Machine does not support Emoji.

So: ZSCII is not actually a character set, but a vast family of many possible
user-defined character sets.

Finally, you may have noticed that the basic mapping table gave (unassigned)
code points from 0x0FF to 0x3FF. Why's that? Well, the encoding mechanism,
which we'll get to soon, lets you decode to 10-bit codepoints. My
understanding, though, is that the only possible uses for this would be
extremely esoteric. They can't form useful sentinel values because, as best
as I can tell, there is no way to read a sequence of decoded codepoints from
memory. Instead, they're always printed, and presumably the best output you'll
get from one of these codepoints will be �.

Here's a string of text: Queensrÿche

Assuming the default Unicode translation table, here are the code points (the
ASCII range carries over unchanged into ZSCII, and ÿ lands among the extra
characters at 0xA6):

  Q    u    e    e    n    s    r    ÿ    c    h    e
  0x51 0x75 0x65 0x65 0x6E 0x73 0x72 0xA6 0x63 0x68 0x65

This all seems pretty simple so far, I think. The per-program table of extra
characters is a bit weird, and the set of control characters (which I didn't
discuss) is sometimes a bit weird. Mostly, though, it's all simple and
reasonable. That's good, because things will get weirder as we try putting
this into octets.

Z-Machine Character Encoding

The first thing you need to know is that we encode in two layers to get to
octets. We're starting with ZSCII text. Any given piece of text is a sequence
of ZSCII code points, each between 0 and 1023 (really 255) inclusive. Before
we can get to octets, we first build pentets. I just made that word up. I
hope you like it. It's a five-bit value, meaning it ranges from 0 to 31,
inclusive.

What we actually talk about in Z-Machine jargon isn't pentets, but
Z-characters. Keep that in mind: a character in ZSCII is distinct from a
Z-character!

Obviously, we can't fit a ZSCII character, which ranges over 255 points, into a
Z-character. We can't even fit the range of the ZSCII/ASCII intersection into
five bits. What's going on?

In all cases, the value at the bottom is a ZSCII character, so you can
represent a space (␠) with ZSCII character 0x020, and encode that to the
Z-character 0x00. So, where's everything else? It's got to be in that range
from 0x00 to 0x1F, somehow! The answer lies with those funny little "shift in"
glyphs under 0x04 and 0x05. The table above was incomplete. It is only the
first of the three "alphabets" of available Z-characters. The full table would
look like this:

Strings always begin in alphabet 0. Z-characters 0x04 and 0x05 mark the next
character as being in alphabet 1 or alphabet 2, respectively. After that
character, the shift is over, so there's no shift character to get to alphabet
0. You won't need it.

So, this gets us all the ZSCII/ASCII intersection characters… almost. The
percent sign, for example, is missing. Beyond that, there's no sign of the
"extra characters." Now what?

We get to the next layer of mapping via A2-06, represented above as an
ellipsis. When we encounter A2-06, we read two more Z-characters, join the two
pentets, and interpret the resulting dectet as a 10-bit integer, with the
first pentet as the top five bits; that's the ZSCII character being
represented. So, in a given string of
Z-characters, any given ZSCII character might take up:

one Z-character (a lowercase ASCII letter)

two Z-characters (an uppercase ASCII letter or one of the symbols in A2)

four Z-characters (anything else, encoded through the A2-06 escape)
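The encoding side of that cost breakdown fits in a few lines. A sketch in Python (my code is Perl), again assuming the standard, non-overridden alphabet tables:

```python
# Standard alphabet tables; A2 position 6 (the escape) and 7 (newline)
# are handled structurally, so the string starts at Z-character 8.
A0 = "abcdefghijklmnopqrstuvwxyz"
A1 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
A2 = "0123456789.,!?_#'\"/\\-:()"

def encode_char(ch):
    """Encode one ZSCII character as one, two, or four Z-characters."""
    if ch == " ":
        return [0]                         # space is Z-character 0
    if ch in A0:
        return [A0.index(ch) + 6]          # one Z-character
    if ch in A1:
        return [4, A1.index(ch) + 6]       # shift to A1, then the character
    if ch in A2:
        return [5, A2.index(ch) + 8]       # shift to A2, then the character
    code = ord(ch)                         # assume it's a ZSCII code point
    return [5, 6, code >> 5, code & 0x1F]  # shift, escape, two pentets
```

So `encode_char("a")` is one Z-character, `encode_char("A")` is two, and anything that needs the escape, like ZSCII 163, is four.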

We start off with a four Z-character sequence, then a two Z-character sequence,
then a few single Z-characters. The whole string of Z-characters should be
fairly straightforward. We could just encode each Z-character as an octet,
but that would be pretty wasteful. We'd have three unused bits per
Z-character, and in 1979 every byte of memory was (in theory) precious.
Instead, we'll pack three Z-characters into every word, saving the word's high
bit for later. That means we can fit "!«" into two words like so:

!  05 14        0b00101 0b01110
«  05 06 05 03  0b00101 0b00110 0b00101 0b00011

…so…

0001 0101 1100 0101  1001 1000 1010 0011

The five-bit runs are the bits of our Z-characters; you can see that each word
holds three complete Z-characters. The leading bit of each word is the
per-word high bit. This bit is always zero, except for the last word in a
packed string. If we're
given a pointer to a packed string in memory (this, for example, is the
argument to the print_addr opcode in the Z-Machine instruction set) we know
when to stop reading from memory because we encounter a word with the high bit
set.
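The packing itself can be sketched like so (Python here, Perl in my actual code); the specification has the encoder pad short strings out with Z-character 5s, which I follow:

```python
def pack(zchars):
    """Pack Z-characters three to a 16-bit word, setting the high bit
    of the last word.  Pads with Z-character 5, per the Z-Machine spec."""
    zchars = list(zchars) or [5]
    while len(zchars) % 3:
        zchars.append(5)
    words = []
    for i in range(0, len(zchars), 3):
        a, b, c = zchars[i:i + 3]
        words.append((a << 10) | (b << 5) | c)  # three pentets, high to low
    words[-1] |= 0x8000                         # "stop reading here" bit
    return words
```

Feeding it the six Z-characters for "!«" from the table above reproduces the two words shown.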

Okay! Now we can take a string of text, represent it as ZSCII characters,
convert those to Z-characters, and then pack the whole thing into pairs of
octets. Are we done?

Not quite. There are just two things I think are still worth mentioning.

The first is that the three alphabet tables that I named above are not
constant. Just like the Unicode translation table, they can be overridden on
a per-program basis. Some things are constant, like shift bits and the use of
A2-06 as the leader for a four Z-character sequence, but most of the alphabet
is up for grabs. The alphabet tables are stored as 78 bytes in memory, with
each byte referring to a ZSCII code point. (Once again we see code points
between 0x100 and 0x3FF getting snubbed!)

The other thing is abbreviations.

Abbreviations make use of the Z-characters I ignored above: 0x01 through 0x03.
When one of these Z-characters (call its value z) is seen, the next
Z-character (call it x) is read. Then this happens:

offset = 32 × (z − 1) + x

offset is the offset into the "abbreviations table." Values in that table
are pointers to memory locations of strings. When the Z-Machine is printing a
string of Z-characters and encounters an abbreviation, it looks up the memory
address and prints the string there before continuing on with the original
string. Abbreviation expansion does not recurse. This can save you a lot of
storage if you keep referring to the "localized chronosynclastic infundibulum"
throughout your program.
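If the abbreviation Z-character has value z (1 through 3) and the next Z-character is x (0 through 31), the arithmetic works out, by my reading of the spec, to an index into a 96-entry table. As a sketch:

```python
def abbreviation_index(z, x):
    """Index into the 96-entry abbreviations table (32 entries per
    abbreviation Z-character) for Z-character z (1..3) followed by x (0..31)."""
    return 32 * (z - 1) + x
```

The word at that index is the address of the string to print in place of the abbreviation.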

First we fix up newlines. Then we map the Unicode string's characters to a
string of ZSCII characters. Then we map the ZSCII characters into a sequence
of Z-characters. Then we pack the Z-characters into words.

At every point, we're dealing with Perl strings, which are just sequences of
code points. That is, they're like arrays of non-negative integers. It
doesn't matter that $zscii is neither a string of Unicode text nor a string
of octets to be printed or stored. After all, if someone has figured out that
esoteric use of Z+03FF, then $zscii will contain what Perl calls "wide
characters." Printing it will print the internal ("utf8") representation,
which won't do anybody a lick of good. Nonetheless, using Perl strings keeps
the code simple. Everything uses one abstraction (strings) instead of two
(strings and arrays).

Originally, I wrote my ZSCII code in Perl 6, but the Perl 6 implementation was
very crude, barely supporting the basics of ASCII-only ZSCII. I'm looking
forward to (someday) bringing all the features in my Perl 5 code to the Perl
6 implementation, where I'll get to use distinct types (Str and Buf) for the
text and non-text strings, sharing some, but not all, of the abstractions as
appropriate.

Until then, I'm not sure what, if anything, I'll use this library for. Writing
more of that Z-Machine assembler is tempting, or I might just add abbreviation
support. First, though, I think it's time for me to make some more progress on
my Great Infocom Replay…

This post is tagged both programming and dnd. I don't get to do that often,
and I am pleased.

For quite a while, I've been using random tables to avoid responsibility for
the things that happen in my D&D games. Instead of deciding on the events that
occur at every turn, I create tables that describe the general feeling of a
region and then let the dice decide what aspects are visible at any given
moment. It has been extremely freeing. There's definitely a different kind of
skill needed to get things right and to deal with what the random number gods
decide, but I really enjoy it. Among other things, it means that I can do more
planning well in advance and have more options at any moment. I don't need to
plan a specific adventure or module each week, but instead prepare general
ideas of regions on different scales, depending on the amount of time likely
to be spent in each place.

I was happy with some stupid little gimmicks. I color-coded tables to remind
me which dice they'd need. The color codes matched up to colored boxes that
showed me the distribution of probability on those dice, so I could build the
tables with a bit more confidence. It was easy, but I found myself wanting to
be able to drill further and further down. What would happen is this: I'd
start with an encounter table with 19 entries, using 1d12+1d8 as the number
generator. This would do pretty well for a while, but after you've gotten
"goblin" a few times, you need more variety. So, next up "goblin" would stop
being a result and would start being a redirection. "Go roll on the goblin
encounter table."

As these tables multiplied, they became impossible to deal with in Numbers.
Beyond that, I wanted more detail to be readily available. The encounter entry
might have originally been "2d4 goblins," but now I wanted it to pick between
twelve possible kinds of goblin encounters, each with their own number
appearing, hit dice, treasure types, reaction modifiers, and so on. I'd be
flipping through pages like a lunatic. It would have been possible to inch a
bit closer to planning the adventure by pre-rolling all the tables to set up
the encounter beforehand and fleshing it out with time to spare, but I wasn't
interested in that. Even if I had been, it would have been a lot of boring
rolling of dice. That's not what I want out of a D&D game. I want exciting
rolling of dice!

I started a program for random encounters in the simplest way I could. A table
might look something like this:

type: list
pick: 1
items:
- Cat
- Dog
- Wolf

When that table is consulted, one of its entries is picked at random, all with
equal probability. If I wanted to stack the odds, I could put an entry in
there multiple times. If I wanted to add new options, I'd just add them to the
list. If I wanted to make the table more dice-like, I'd write this:

This rolls a d4 to get a result, then rolls it again for another result, and
gives both. If either of the results is a 3, then it rolls 1-4 more times for
additional options. The output looks like this:

Why are some of those things indented? Because the whole presentation of
results stinks; it's just good enough to get the point across. Oh well.
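Roland itself is Perl and YAML, but purely to illustrate the behavior described above (roll a d4 twice; any 3 among those first two rolls triggers one to four extra rolls), a toy sketch, with the exact re-roll rules being my guess:

```python
import random

def consult_d4_table(table):
    """Toy model of the dice table described above, NOT Roland's code:
    roll a d4 twice for two results; each 3 among the first two rolls
    triggers 1d4 additional rolls."""
    rolls = [random.randint(1, 4), random.randint(1, 4)]
    for r in rolls[:2]:
        if r == 3:
            rolls.extend(random.randint(1, 4)
                         for _ in range(random.randint(1, 4)))
    return [table[r - 1] for r in rolls]
```

Every call returns at least two entries, and sometimes a small pile of them.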

In the end, in the above examples, the final result is always a string. This
isn't really all that useful. There are a bunch of other kinds of results that
would be useful. For example, when rolling for an encounter on the first level
of a dungeon, it's nice to have a result that says "actually, go roll on the
second level, because something decided to come upstairs and look around."
It's also great to be able to say, "the encounter is goblins; go use the goblin
encounter generator."

(No, this is not from an actual campaign. "Instant death" is a bit much, even
for me.)

Here, we see a few of Roland's other features. The mapping with file in it
tells us to go roll the table found in another file, sometimes (as in the case
of the first result under result 5) with extra parameters. We can mix table
types. The top-level table is a die-rolling table, but result 5 is not. It's
a list table, meaning we get each thing it includes. One of those things is a
list table with a pick option, meaning we get that many things picked
randomly from the list. Result 7 says "roll again on this table two more times
and keep both results." Result 8 says, "nothing happens after all."

Result 6 under result 6 is one I've used pretty rarely. It returns a
hash of data. In this case, the encounter is with a spy, but he has a cover
job, found by consulting the job table.

Sometimes, in a table like this, I know that I need to force a given result. If
I haven't factored all the tables into their own results, I can pass -m to
Roland to tell it to let me manually pick the die results, but to let each
result have a default-random value. If I want to force result six on the above
table, but want its details to be random, I can enter 6 manually and then hit
enter until it's done:

In other words, it's basically a YAML-ified version of a
Basic D&D monster block. There are a few additional fields that can be put on
here, and we see some of them. For example, per-unit can decorate each unit.
(We're expecting 2d4 men, because of the num field, but if you look up at the
previous encounter table, you'll see that we can override this to do things
like force an encounter with a single creature.) In this case, we'll get a
bunch of men, some of whom may be infected or zombified.

Not every value is treated the same way. The number encountered is rolled and
used to generate units, and the hd value is used to produce hit points for
each one. Even though it looks like a dice specification, damage is left
verbatim, since it will get rolled during combat. It's all a bit too
special-casey for my tastes, but it works, and that's what matters.

Here, one time out of ten, roboelfs are encountered with a Monolith. That
could've been a redirect to describe a monolith, but for now I've just used a
string. Later, I can write up a monolith table using whatever form I want.
(Most likely, this kind of thing would become a dict with different
properties all having embedded subtables.)

Right now, I'm really happy with Roland. Even though it's sort of a mess on
many levels, it's good enough to let me get the job done. I think the problem
I'm trying to solve is inherently wobbly, and trying to have an extremely
robust model for it is going to be a big pain. Even though it goes against my
impulses, I'm trying to leave things sort of a mess so that I can keep up with
my real goal: making cool random tables.

Earlier this year, I lamented the state of "workspaces" in
Chrome. I said that I'd settled on
using Tabs Outliner, but that I basically didn't like it. The author of the
plugin asked me to elaborate, and I said I would. It has been sitting in my
todo list for months and I have felt bad about that. Today, Gregory Meyers
commented on that blog post, and it's gotten me motivated enough to want to
elaborate.

I agree with everything Gregory said, although not everything he said
is very important to me. For example, the non-reuse of windows doesn't bother
me all that much. On the other hand, this nails it:

Panorama is intuitive. I didn't have to read a manual to understand how to
use it. TO comes with an extensive list of instructions... because it is
not intuitive. Now, supplying good instructions is better than leaving me
totally lost. But it's better to not need instructions at all. I have to work
much harder to use TO.

I wanted to use Tabs Outliner as a way to file tabs into folders or groups and
then bring up those groups wholesale. For example, any time I was looking at
some blog post about D&D that I couldn't read now, I'd put it in a D&D tab
group.
It's not just about topical read-it-later, though. If I was doing research on
implementations of some standard, I might build a tab group with many pages
about them, along with a link to edit my own notes, and so on. The difference
is that for "read it later," traditional bookmarks are enough. I'd likely only
bring them back up one at a time. I could use Instapaper for this, too. For a
group of research tabs (or other similar things), though, I want to bring them
all up at once and have changes to the group saved together.

This just doesn't seem like something that Tabs Outliner is good at.

Let's look at how it works. This is a capture of the Tabs Outliner window:

It's an outliner, just like its name implies. Each of the second-level
elements is a window, and the third level elements are tabs.

At the top, you can see the topical tab groups I had created to act like
workgroups that I could restore and save. I can double-click on, say, Pobox
and have that window re-appear with its six tabs. If I open or close tabs in
the window, then close the whole window, the outliner will be up to date. If
this was all that Tabs Outliner did, it might be okay. Unfortunately, there are
problems.

First, and least of all, when I open a tab group that had been closed, the
window is created at a totally unworkable size. I think it's based on the
amount of my screen not taken up by the Tabs Outliner window, but whatever the
case, it's way, way too small. The first thing I do upon restoring a group is
to fix the window size. There's an option to remember the window's original
size, but it doesn't seem to work. Or, at least, it only seems to work on tab
groups you've created after setting the preference, which means that to fix
your old tab groups, you have to create a new one and move all the tabs over
by hand, or something like that. It's a pain.

Also in the screenshot, you'll see a bunch of items like "Window (crashed Aug
17)". What are those? They're all the windows I had open the last time I
quit. Any time you quit Chrome, all your open windows, as near as I can tell,
stay in Tabs Outliner, as "crashes." Meanwhile, Chrome re-opens your previous
session's windows, which will become "crashes" again next time you quit. If
you have three open windows, then every time you quit and restart, you have
three more bogus entries in Tabs Outliner. How do you clean these up? Your
instinct may be to click the trash can icon on the tab group, but don't! If
you do that, it will delete the container, but not the contents, and the tabs
will now all have to be deleted individually from the outliner. Instead, first
collapse the group and then delete it with the trash icon.

Every once in a while, I do a big cleanup of these items.

Here's what I really want from a minimal plugin: I want to be able to give a
window a name. All of its tabs get saved into a list. Any time I update them
by adding a tab, closing a tab, or re-ordering tabs, the list is updated
immediately. If I close the window, the name is still somewhere I can click to
restore the whole window. If I have a named window open when I quit Chrome,
then when I restart Chrome, it's still open and still named. Other open
unnamed windows are still open, and still unnamed.

This would get me almost everything I want from tab groups, even if they don't
get the really nice interface of Panorama / Tab Exposé.

I've been slowly switching all my code projects to use GitHub's bug tracking
(GitHub Issues) in addition to their code hosting. So far I'm pretty happy
with it. It's not perfect, but it's good enough. It's got a tagging system so
that you can categorize your issues according to whatever set of tags you want.
The tags are called labels.

Once you figure out what set of labels you want, you realize that you then have
to go set the labels up over and over on all your repos. Okay, I guess. How
long could that take, right?

Well, when you have a few hundred repositories, it can take long enough.

And what happens when you decide that blazing red was a stupid color for
"non-critical bugs," and that maybe you shouldn't have spelled "tests" with
three t's?

Fortunately, GitHub has a really good API, and
Pithub seems, so far, to be a very nice
library for dealing with the GitHub API!
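The sync itself is just set arithmetic. My code uses Pithub and Perl, but as a hedged sketch of the logic in Python: given the labels a repo has and the labels you want, work out what to create, re-color, and delete; each name then becomes one call to the GitHub labels API.

```python
def label_ops(existing, desired):
    """existing/desired: dicts mapping label name -> hex color for one
    repo.  Returns (create, update, delete) as sorted name lists."""
    create = sorted(set(desired) - set(existing))
    delete = sorted(set(existing) - set(desired))
    update = sorted(name for name in set(desired) & set(existing)
                    if existing[name] != desired[name])
    return create, update, delete
```

Run this once per repository and you've turned "a few hundred repos of clicking" into a loop.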

I've finally finished (for now, anyway) another hunk of code in my ever-growing
suite of half-baked productivity tools. I'm tentatively calling the whole mess
"Ywar", and would paste the dictionary entry here, but all it says is "obsolete
form of 'aware'". So, there you go. I may change that later. Anyway, a rose
by any other name, or something, right?

Every morning, I get two sets of notices. The Daily
Practice sends me an overview of my calendar,
which makes it easy to see my streaks and when they end. Remember the
Milk sends me a list of all the
tasks due that day. These messages can both be tweaked in a few ways, but
only within certain parameters. Even if I could really tweak the heck out of
them, I'd still be getting two messages. Bah!

Fortunately, both TDP and RTM let me connect and query my data, and that's just
what I do in my new cronjob. I get a listing of all my goals in TDP and figure
out which ones are expiring on what day, then group them by date. Anything
that isn't currently safe shows up as something to do today. I also get all my
RTM tasks for the next month and put them in the same listing. That means that
each email is a summary of the days of the upcoming month, along with the
things that I need to get done on or before that date. That means I can (and
should) start with the tasks listed first and, when I finish them, keep working
my way down the list.
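The grouping step above can be sketched in a few lines. This is Python rather than my actual cronjob code, and the "not currently safe shows up today" rule is modeled simply by clamping overdue dates to today:

```python
from collections import defaultdict
from datetime import date

def build_agenda(items, today):
    """items: (due_date, description) pairs pulled from anywhere (TDP
    goals, RTM tasks).  Anything overdue or due now lands on today; the
    rest is grouped under its due date, in order."""
    by_day = defaultdict(list)
    for due, desc in items:
        by_day[max(due, today)].append(desc)  # unsafe/overdue -> today
    return [(day, sorted(by_day[day])) for day in sorted(by_day)]
```

The email is then just this structure rendered one day at a time, tasks first.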

In theory, I could tweak the order in which I worked based on time estimates
and priorities on my tasks. In practice, that's not going to happen. This
whole system is held together by bubblegum, paperclips, and laziness.

Finally, the emails implement a nagging feature. If I've tagged an RTM task
with "nag," it will show up on today's agenda if I haven't made any notes on it
in the last two weeks. I think I'm better off doing this than using RTM's
own recurring task system, but I'm not sure yet. This way, all my notes are on
one task, anyway. I wanted this "automatic nagging" feature for tasks that I
can't complete myself, but have to remind others about, and this was actually
the first thing I implemented. In fact, it was only after I got my autonag
program done that I saw how easily I could throw the rest of the behavior onto
the program.

Here's what one of the messages looks like:

Right now it's plaintext only. Eventually, I might make a nice HTML version,
but I'm in no rush. I do most of my mail reading in mutt, and the email
looks okay in Apple Mail (although I think I found a bug in their
implementation of quoted-printable!). The only annoyance, so far, is the goofy
line wrapping of my per-day boundary lines. In HTML, those'd look a lot
better.

I'll post the source code for this and some of my other TDP/RTM code at some
point in the future. I'm not ashamed of the sloppy code, but right now my API
keys are just sitting right there in the source!

As with the other Zork games, my enjoyment of Zork Ⅲ was affected by the fact
that I played it when I was young. Quite a few of the puzzles stuck with me,
and it helped me work out an answer quickly in cases where I might have
remained stumped for too long. I'm not sure whether I should read anything
into this, so I won't.

I liked the general feel of the game. It was just a bit elegiac, but not
pretentiously so. The prose is still (mostly) very spare, which is something I
want to try to improve in my next attempt to make a text game. There's still
some good humor, too. The writing is good.

I liked most of the puzzles, too. Most especially, I liked that the game
subverts, several times, the idea that The Adventurer in Zork is a murder
hobo. Sure, you can kill and steal, but
you'll never become Dungeon Master if you do. The game makes it pretty clear,
too, that you're a horrible person if you act like you did in Zork Ⅰ:

The hooded figure, fatally wounded, slumps to the ground. It gazes up at you
once, and you catch a brief glimpse of deep and sorrowful eyes. Before you can
react, the figure vanishes in a cloud of fetid vapor.

I'm hoping that I'll find the Enchanter trilogy to be a good follow-up to the
Zork games, because I never played more than a few turns of those, and I'll be
forced to pay more attention to detail and give more thought to solving
puzzles.

Of the puzzles in Zork Ⅲ, I think that the Scenic View puzzle and the mirror
box may be my favorites. They were interesting, unusual, and solving them made
me feel clever. A few of the puzzles were not so great. The cliff puzzle is
well known for being annoying: why would you think to just hang around in a
room for no reason? You wouldn't. The Royal Museum puzzle is just great, but
how are you supposed to tell that the gold machine moves? Or why would LOOK
UNDER SEAT differ from EXAMINE SEAT? It's these little details that remind
you that the Infocom games were still figuring out how to stump the user
without annoying the user.

The Dungeon Master was a good ending for the Zork trilogy. I'm not sure
whether it's the best of the three games, but I think that they form a nice
set. After feeling sort of let down by Deadline, Zork Ⅲ has me feeling
reinvigorated. Next up: Starcross!

THAC0, armor class, saving throws, skill, and other applications of the d20

More than a few times, when I've told people that I play an older version of
D&D, I've gotten a slightly horrified look and the question, "Is that the one
with THAC0?" What's so awful about THAC0? I ask, but the answers are vague.
"It doesn't make any sense! It's bizarre!"

I think that most of the time, "the one with THAC0" means the second edition of
AD&D, but pretty much every D&D before third edition had THAC0. It just means
that your character has a certain target number to hit something with armor
class zero: To Hit Armor Class 0.

So, you roll a twenty-sided die, add your target's armor class, and see whether
your result is the target or better. This is basically how every single roll
works in third edition, so why is it weird?

d20 + modifier >= target

I think people get confused because it's tied to descending armor class, where
it's better to have a low armor class. Of course it is! If your AC is a
bonus to your enemy's to-hit roll, you want it to be low. The third edition
system is mathematically identical. It just swaps the modifier and target.
The enemy now determines the target, not the modifier, because the target is
the enemy's armor class. The character's modifier is now (basically) constant
based on his level.
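You can check the equivalence mechanically. In the sketch below, the conversion constant C = 20 is just an illustration (the exact constants differ between editions); the point is that for any fixed C, "d20 + AC ≥ THAC0" and "d20 + (C − THAC0) ≥ (C − AC)" agree on every roll:

```python
import itertools

def hits_thac0(roll, thac0, ac_desc):
    # old-school: d20 roll plus the target's descending AC, against THAC0
    return roll + ac_desc >= thac0

def hits_modern(roll, attack_bonus, ac_asc):
    # third-edition style: d20 + modifier >= target
    return roll + attack_bonus >= ac_asc

# Convert with any constant C: attack bonus = C - THAC0, ascending AC = C - AC.
C = 20
assert all(
    hits_thac0(r, t, ac) == hits_modern(r, C - t, C - ac)
    for r, t, ac in itertools.product(range(1, 21), range(1, 25), range(-10, 11))
)
```

Same inequality, same dice; only the bookkeeping moved.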

So stop complaining that THAC0 is confusing!

One thing I like about THAC0 is that it is a little different than what
people are used to. I think later editions of D&D try to boil everything down
to one universal mechanic, which makes it harder to simply drop one optional
bolt-on (like the seafaring rules from Cook's expert rules) or add other
optional rules (like psionics). If there's a universal mechanic, everything
needs to fit into it. If everything has its own simple, self-contained
set of rules, you can monkey around without breaking the whole game.

I'm still thinking about non-combat challenges in this context. (This is where
a "skill system" would often come into play, but I don't like the implications
of enumerating all a character's skills.) A common mechanic is to try to roll
at or under the relevant attribute. So, if you've got a 14 Charisma and you're
trying to intimidate the town guard, you've got to roll under a 14. A natural
20 is the worst you can do, so it's a critical failure. A natural 1 is a
great success. On a tie, the character with the higher attribute wins.

Building from that, some systems say that instead of 1 being perfect, you shoot
for your attribute's value. That way, in a Strength contest between two
characters, the winner is the one who rolls highest without exceeding his or
her score. Attribute scores don't need to be revealed or
compared. Zak S. wrote about this, and points out the obvious (if silly)
problem: it's not very exciting for everyone else at the table to see the die
roll and stop on 13, even if that is the critical success value. Everybody
wants to make a big noise at a 1 or a 20.

I've been thinking about this a lot lately as I prepare to switch my 4E
campaign to a hacked B/X. I'm thinking about extending my list of saving
throws to add a few more categories
and just using that. I'm also tempted to just say "we're gonna use FUDGE dice"
and bring in FAE-style Approaches, because those seem pretty awesome.

As usual, what I really need is more table time to just test a bunch of bad
ideas and see which is the least worst.

This post has been sort of rambling and pointless. Please allow me to pay the
Joesky
tax:

The high priests of Boccob are granted knowledge of secrets and portents, but
often at great price. Some of these powers (initially for 4E) are granted to
the highest orders while they undertake holy quests:

Boundless Knowledge. Can learn any fact through a turn of meditation.
The cost:

I replaced "remember where I was in the list" with "keep the list in a text
file."

I used The Daily Practice to keep track of
whether I was actually getting work done on the list regularly.

About a month later, I automated step
2. I just had my cron job keep
track of the SHA1 of the file in git. If it changed, I must have done some
work.
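My cron job just asks git for the file's SHA1, but computing the same fingerprint directly is easy, and shows why it works as a did-anything-change check. A sketch (Python, not my actual Perl):

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """The SHA-1 git itself stores for a file's contents: the hash of a
    "blob <length>\\0" header followed by the bytes (git hash-object)."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()
```

If today's hash differs from yesterday's, the file changed, so I must have done some work.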

Yesterday, as I started month three of the regimen, I invested a bit more time
in improving it, and I expect this to pay big dividends.

My process is something like this, if you don't want to go read old posts:

keep a list of all my projects

group them into "bug queue cleared" and "project probably complete" and
"work still to do"

sort the lists by last-review date, descending; fall back to alphabetical
order

every time I sit down to work on projects, start with the first project on
the "work to do" list, which has been touched least recently

when new bugs show up for projects on the other two lists, put those
projects back into the "work to do" list at the right position

This was not a big problem. I kept the data in a Markdown table, and when I'd
finish a review, I'd delete a line from the top, add it to the bottom, and add
today's date. The step that looked like it would be irritating was #5. I'd
have to keep an eye on incoming bug reports, reorder lists, and do stupid
maintenance work. Clearly this is something a computer should be doing for me.

So, the first question was: can I get the count of open issues in GitHub?
Answer: yes, trivially. That wasn't
enough, though. Sometimes, I have older projects with their tickets still in
rt.cpan.org. Could I find out which projects used which bugtracker? Yes,
trivially. What if the project
uses GitHub Issues, but has tickets left in its RT queues? Yes, I can get
that.

Those are the big things, but once you pick up the data you need for figuring
them out, there are other things that you can check almost for free: is my
GitHub repo case-flattened? If so, I want to fix it. Is the project a CPAN
dist, but not built by Dist::Zilla? Did I forget to enable Issues at GitHub?
Am I missing any "Kwalitee point" on the CPANTS game
scoreboard?

Writing the whole program took an hour, or maybe two, and it will clearly save
me a fair bit of time whenever I do project review. I even added a
--project switch so that I can say "I just did a round of work on
Some::Project, please update my last reviewed date." It rebuilds the YAML file
and commits the change to the repo. Since it's making a commit, I also added
-m so I can specify my own commit message, in case there's something more to
say than "I did some work."

This leaves my Markdown file in the lurch. That wouldn't bother me, really,
except that I've been pointing people at the Markdown file to keep track of
when I might get to that not-very-urgent bug report they filed. (I work on
urgent stuff immediately, but not much is urgent.) Well, no problem here: I
just have the program also regenerate the Markdown file. This eliminates the
grouping of projects into those three groups, above. This is good! I only
did that so I could avoid wasting time checking whether there were any bugs to
review. Now that my program checks for me, there's no cost, so I might as
well check every time it comes up in the queue. (Right now, it will still
prompt me to review things with absolutely no improvements to be made. I doubt
this will actually happen, but if it does, I'll deal with it then.)

The only part of the list that mattered to me was the list of "stuff I don't
really plan to look at at all." With the automation done, the list shrinks
from "a bunch of inherited or Acme modules" into one thing: Email-Store. I
just marked it as "never review" and I'm done.

So, finally, this is my new routine:

If The Daily Practice tells me that I have to do a code review session…

…or I just feel like doing one…

…I ask code-review what to work on next.

It tells me what to work on, and what work to do.

I go and do that work.

When I'm done, I run code-review --project That-Project and push to github.

Note that the only part of this where I have to make any decisions is #5, where
I'm actually doing work. My code-review program (a mere 200 lines) is doing
the same thing for me that Dist::Zilla did. It's taking care of all the stuff
that doesn't actually engage my brain, so that I can focus on things that are
interesting and useful applications of my brain!

It prints every line of the input, along with how long it had to wait to get
it. It can be useful for tailing a log file, for example. I wanted to write
something similar, but to just tell me how long each line of my super-simple
program took to run. I decided it would be fun to do this with a Devel module
that would get loaded by the -d switch to perl.

I wrote one, and it's pretty dumb, but it was useful and it did, in the end, do
the job I wanted.

When you pass -d, perl sets $^P to a certain value (on my perl it's 0x073F)
and loads perl5db.pl. That library is the default perl debugger. You can
replace it with your own "debugger," though, by providing an argument to -d
like this:

$ perl -d:SomeThing ...

When you do that, perl loads Devel::SomeThing instead of perl5db.pl. That
module can do all kinds of weird stuff, but the simplest thing for it to do is
define a subroutine in the DB package called DB. &DB::DB is then called just
before each statement runs, and can get information about just what is being
run by looking at caller's return values.

One of the bits set on $^P tells perl to make the contents of each loaded file
available in a global array with a funky name. For example, the contents of
foo.pl are in @{"::_<foo.pl"}. Woah.

My stupid timer keeps track of the amount of time taken between statements and
prints your program back at you, telling you how long was spent on each line,
without measuring the breakdown of time spent calling subroutines loaded from
elsewhere. It expects an incredibly simple program. If you execute code on
any line more than once, it will screw up.

Still, it was a fun little exercise, and maybe demonstrative of how things
work. The code documentation for this stuff is a bit lacking, and I hope to
fix that.

It is amazing how bad Yahoo!'s "family account" experience is. I want to make
an account for my six year old daughter to use to upload her photos to Flickr.
Googling for Yahoo! family accounts and flickr finds this text:

Yahoo! Family Accounts allow a parent or legal guardian to give
consent before their child under 13 creates an account with Yahoo!. A
child is someone who indicates to us that they are under the age of
13.
[...]
Your child may have access to and use of all of Yahoo!'s products
and services, including Mail, Messenger, Answers, mobile apps, Flickr,
Search, Groups, Games, and others. To learn more about our privacy
practices for specific products, please visit the Products page of our
Privacy Policy.

(Emphasis mine, of course.)

I had to search around to find where to sign up. I had to use the
normal signup page. I filled the form out and kept getting rejected.
"The alternate email address you provided is invalid." After trying
and trying, I finally realized that they have proscribed the use of
"yahoo" inside the local part. So, com.yahoo.mykid@mydomain.com was
not going to work. Fine, I replaced yahoo with ybang. Idiotic,
but fine.

After I hit submit, I was, utterly without explanation, given a sign
in page. I tried to sign in several times with the account I just
requested, but was told "no such account exists."

Instead, I tried to log in with my own account, and that worked. I
was taken to a page saying, "Do you want to create a Family Account
for your child?" Yes, I do! Unfortunately, the CAPTCHA test that
Yahoo! uses is utterly awful. It took me half a dozen tries to get
one right, and I've been human since birth. Worse, the form lost data
when I'd resubmit. It lost my credit card number — which is excusable
— but also my state. Actually, it was worse: it kept my state but
said "this data is required!" next to it. I had to change my country
to UK, then back to USA, then re-pick my state. Then it was
satisfied.

Finally, I got the account set up. I was dropped to the Yahoo! home
page, mostly showing me the daily news. (To Yahoo!'s credit, none of
this was horrible scandal rag stuff for my six year old. Less to
their credit, the sidebar offered me Dating.) I verified my email
address and went to log her in to Flickr. Result?

We're sorry, you need to be at least 13 years of age to share your
photos and videos on Flickr.

So, what now? Now I create a second account as an adult, upload all
her photos there, and give her the account when she's older, I guess.
Or maybe I'll use something other than Flickr, since right now I'm
pretty sick of the many ways that Yahoo! has continued to make Flickr
worse.

I've got nearly every goal on my big board lit up. So, now I'm getting into
a routine of getting all the regular things done. Next up, I'm going to try to
get better at doing the one-off tasks I have to do, like file my expenses,
arrange a piano tuning, and that sort of thing. For this, I'm going to try
using Remember the Milk. I've used it in
the past and liked it fine, but I didn't stick with it. I think that if I
integrate it into my new routine, it'll work.

I've put in a few tasks already, and gotten some done. Today, I added some
tasks with the "nag" tag, telling me that they're things I need to bug other
people about until they get done. Other tasks, I'm creating with due dates.
Yet others are just general tasks.

My next step will be to use the Remember the Milk API (with
WebService::RTMAgent,
probably) to help deal with these in three ways:

require that I never have any task more than one day overdue (I'm cutting
myself a little slack, okay?)

require a new note on any "nag" task every once in a while

require that ... well, I'm not sure

That lousy #3 needs to be something about getting tasks done. I think it will
be something like "one task in the oldest 25% of tasks has to get done every
week." I think I won't know how to tweak it until I get more tasks into RTM.
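The overdue check is the easy one to sketch. The date logic below is mine
(too_overdue is a made-up helper name, as is the "one day of slack" cutoff);
the actual WebService::RTMAgent calls are left as comments, since I don't want
to guess at the response structure here:

```perl
use strict;
use warnings;
use Time::Local qw(timegm);

# Decide whether a "YYYY-MM-DD" due date is more than one day overdue
# as of $now.  (too_overdue is a made-up name; the one-day allowance is
# the bit of slack I'm cutting myself.)
sub too_overdue {
    my ($due_ymd, $now) = @_;
    my ($y, $m, $d) = split /-/, $due_ymd;
    my $due_end = timegm(0, 0, 0, $d, $m - 1, $y) + 86_400; # end of due day
    return ($now - $due_end) > 86_400;                      # one day of slack
}

# The fetch itself would go something like this with WebService::RTMAgent,
# whose AUTOLOAD turns tasks_getList into rtm.tasks.getList.  The filter
# syntax is RTM's own; key, secret, and response parsing are elided:
#
#   my $rtm = WebService::RTMAgent->new;
#   $rtm->api_key($key);
#   $rtm->api_secret($secret);
#   $rtm->init;
#   my $res = $rtm->tasks_getList(filter => 'status:incomplete')
#       or die $rtm->error;
```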

Maybe I'll do that on vacation next week. That sounds relaxing, right?

I have no doubt that automating a bunch of my
goals on The Daily
Practice has helped me keep up with doing them. As I keep
working to stay on top of my goals, though, I'm finding that the effects of TDP
on my activity are more complex and subtle than I had anticipated.

The goals that are getting the most activity are:

automated

already started

achievable with a small amount of work performed frequently

My best streak, for example, is "review p5p commits." All I have to do, each
day, is not have any unread commit notifications more than a week old. Every
day, we have under two dozen notices, generally, so I can just read the ones
that come in each day and I'm okay. If I miss a day, I'm still good for a
while. After that comes "catch up with p5p," which is the same.

The next goals are in the form "do work on things which you will then record by
making commits in git." For example, I try to keep more on top of bug
reports lately. So far, so good.
These goals are still going strong, and have been going strong for as long as
my other automated goals. The score is lower, though, because they don't show
up as done each day, but only on days I do the work. Despite that, the
structure of the goals is the same: make sure the work is done before each safe
period is over. This suggests an improvement to TDP: I'd like my goals'
scores to be their streak lengths, in days, rather than the number of times
I've performed something. This seems obvious to me, in retrospect.

The goal that trails all of these is "spend an hour on technical reading." I
didn't get started on that immediately. Once I did, though, I've been
motivated to keep the chain going. My strong suspicion, though, is that I
only felt motivated because I had already established streaks with my easier to
perform, automatically-measured goals. Still, my intuition here is that it's
much easier to get going once at least a single instance is on the big board.
Until there's a streak at all, there's no streak to break. This suggests
another improvement, though a more minor one. Right now, scores are only
displayed for streaks with more than one completion. You don't see a score
until you've done something twice. I think it would be better to keep the
streaks looking visually similar, to give them all equal value. After all, the
value isn't that I did something 100 times in a row, but that for 100 days, it
was getting done.

Then come the goals that I haven't started at all. These goals are just
sitting there, waiting for me to start a streak. Once I do start, I think I'll
probably stick to it, but I have to overcome my initial inertia. Once I get
it started, I get my nice solid line, and then I have a reason to keep it
going. On the other hand, if I have no streak, there is no incentive to get
started. I think this is a place to make improvements: just like I'd
rather see scoring mean "every day in the streak is worth one point," I'd like
to see "every day that a goal is not safe counts as a cumulative negative
point." Now I can't just put in goals that I might start eventually. Leaving
a goal undone for a long time costs me. I think there's something more to be
found here, around the idea that something done irregularly, even if not
meeting the goal, is better than something utterly ignored. Right now, that
isn't reflected. Maybe that's for the best, though.

These aren't the real "dangers" to my productivity that I've seen in using TDP.
There are two main things that I've worried about.

First, TDP sometimes squashes the value in doing more than one's goal. For
example, my bug-fixing task says I have to do an hour of work every three days.
On a certain day, I might feel motivated to do more than one hour of work. I
may feel like I'm on a roll. I will not be rewarded for doing so. In theory,
I could get two points for the day instead of one, but it won't actually extend
my streak, which is what really counts. That is: if my streak is extended, I'm
earning a day off from the task, so I have more time to do other work. This is
what should happen if I do extra work today. It isn't what happens, though,
which makes it a strange economy.

A related phenomenon is that if I were to write two journal entries today, I
would benefit from saving one to publish later, because then the streak would
extend from that day. It feels like a disincentive to actually do the extra
work today, although this may be a problem with me that I need to work out on
my own. In fact, there is a flip-side to this problem: if I do extra work now
to extend my streak beyond its usual length, I'm breaking the regularity of my
schedule, which might not fit in with the idea of getting into a schedule.

I don't really buy that, though.

The other problem is that once you buy into the idea that you must keep your
streaks going — which is a pretty motivating idea — you're prioritizing things
for which goals have been created over things for which they have not.
Possibly you're heavily prioritizing them. It's important to remain aware of
this fact, because there's a danger that any other work will be neglected only
because you haven't thought to put it on the big board.

There are categories of tasks, too, that I've been struggling not to
unconsciously deprioritize because they can't be usefully made into long-term
goals. I'm trying to learn new Decktet games, to
make plans to see friends more often, to work on spare time code projects, and
so on. These are more "to do" items, and TDP is not a todo list. I think I'm
going to end up having to write automation to connect it to a todo list
manager, much as I did for my code review todo. Otherwise, I'll chug along
with my current routine, but will stagnate by never doing new things.

These are good problems to have. They're the problems I get to have once I'm
making reliable progress at keeping up with clear responsibilities or promises.
Nonetheless, they are problems, and I need to recognize them, keep them in
mind, and figure out how to overcome them.

1. Race (Elf, Dwarf, Halfling) as a class? Yes or no?

Yes. I like the idea of matching classes up with monster manual entries. I
have a Fighter class for now, but will probably break it up to Bandit and
Soldier, eventually, to match my game's monster manual. So, I match elf or
dwarf up with the monster manual entry. Goblins, in my manual, break down into
a number of very distinct groups. If someone were to play a goblin, I'd make a
per-group class, or at least have rules for specialization within the goblin
class, the same way I customize clerics per-church.

2. Do demi-humans have souls?

The nature of the personal life force of a sentient creature is sort of
blurry in my game, but the rough answer is, "Yes, but some demi-humans have
different kinds of souls." Elves are often unable to interact with the
technology of the ancient empire, for example, because it doesn't consider them
to be alive at all.

3. Ascending or descending armor class?

Descending. I really like using THAC0; I find it very easy to do the math in
my head. In fact, so does everyone I've played with, once I get them to stop
reacting violently to "this stupid thing I hate from 2E" and see how simple the
rule is.
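For anyone who hasn't seen it, the whole rule fits in one comparison. (The
sub name is mine, not anything from a rulebook.)

```perl
use strict;
use warnings;

# You hit if your d20 roll is at least your THAC0 minus the target's
# descending AC.  A fighter with THAC0 16 attacking AC 4 needs a 12.
sub hits {
    my ($thac0, $target_ac, $roll) = @_;
    return $roll >= $thac0 - $target_ac;
}
```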

4. Demi-human level limits?

5. Should Thief be a class?

Yes. Actually, a few classes. When I get around to it, I want to break Thief
into Assassin, Burglar, and Dungeoneer. Or, the other way to put this is: I
do like the idea of classes for skill-based archetypes, but I think that
Thief, as written, is not a very good such class. I'm not sure who it best
represents in the fiction. With its d4 hit dice, it's neither the Grey Mouser
nor Conan, both of whom would otherwise be decent candidates.

6. Do characters get non-weapon skills?

Kinda. I should really codify it. Basically, I assume that characters are
good at the stuff related to their archetype. (This is part of why I like more
specialized classes than "Fighter.") If the player wants to declare that his
or her character has an unusual skill for some reason, I'll allow it at least a
few times.

I don't like skill lists.

7. Are magic-users more powerful than fighters (and, if yes, what level do they take the lead)?

We're using pretty basic fighter and magic-user classes, most of the time.
Even the tweaks I'd like to make won't change the balance much, I think. So, at
low levels, the fighters are more powerful. So far, we haven't seen any
magic-user survive long enough to overtake the fighters.

I've been slowly tweaking the rules to try to change the balance just a little.

8. Do you use alignment languages?

No.

I have publicly stated my bafflement at alignment
languages
before, and although I was glad to get a pretty clear answer as to why they
existed, I didn't think they were really justified. When different cults have
secret languages, they're just secret languages.

10. Which is the best edition?

Heh.

Right now, I use the Moldvay Basic Set as the go-to reference, with plenty of
stuff from Cook's Expert Set. I'd like to read Holmes, as I have read good
things, and it looks like at least I should steal some of its rules for stuff,
but I don't have a copy. I stole some of the psionics rules from 2E, and 1E
has tons of tables and stuff to steal. I'm hacking in something like Action
Points when I backport my 4E campaign to Basic. They're all fun, but I think
Moldvay is a great framework from which to start hacking, and that's what I've
done.