Second, when my old WordPress site got thoroughly hacked, I had the opportunity during the recovery process to reread every post I've ever made here at Gibberish. There are a few common threads, but the predominant one jumped out at me because posts I made about it as much as fourteen years ago are still current: why is it still so much damned work to do certain bits of web development that literally every site does? Why hasn't any layer of the tech infrastructure grown to handle them, or at least help?

Take as an example my number one nemesis, user authentication. Whether you're doing OAuth for logging in via Twitter and Facebook, and therefore doing intense multistep tangos of cryptographic token trading, or just trying to let people recover their forgotten passwords without creating security holes, getting auth right is so hard that you can't even trust it to a library. Half the time, the libraries for your chosen platform are incomplete or have major security bugs; the rest of the time, their authors throw up their hands and say (...not really wrongly) that individuals should take responsibility for understanding such sensitive areas of their code.

But apps of any seriousness need to have logins... right?

Sometimes, when something is a hard problem to solve, that's because it's the wrong problem to solve. The web wasn't designed to support single-page applications, and it certainly wasn't particularly designed for what are essentially single-user applications. Canva is the example that springs to mind right now, but there are plenty of others: tools that are centered on individuals creating things, that happen to run in a web browser. Collaboration features sometimes come in, but those are only salient for a handful of apps; features that support discovering and sharing content from other users are often kind of a joke and always a bolt-on that could be entirely separate. Apart from those, these apps could have been written as native apps. Many argue that they should be.

The first single-user web app was the web itself - making a page of your own, before anyone dreamed that a service would ever do it for you. Everyone talked about how great and revolutionary "view source" was; it democratized creation, and so forth. But then things got a lot more complicated, and despite code-sharing services like CodePen and (on a lower level) StackOverflow, the web's default openness became increasingly meaningless. When so much of the meaning in the web was in its connections, why not do your publishing through connections? Why have anything of your own?

"View source" is not enough, hasn't been enough for a long time, was never really enough. The lack of a full, secure set of in-browser tools for trading and sharing code, and the resulting need for server-based services to pick up the slack, is possibly the fundamental flaw that led us to our current state of total cultural capture by a handful of huge corporations, mainly Facebook in my country. The CEO of Facebook is widely expected to run for president sometime soon.

Well, I'm working on a single-user web application, because the languages we build the web with are languages I speak, and because it's still so powerful to just turn up in the user's browser without their having to download and install anything. But I'm building it against APIs that are only available in Beaker, an experimental web browser that adds support for the peer-to-peer protocol known as Dat. Dat implies a whole new way of doing multi-user web apps, as distributed networks of compatible files on various Dat sites, brought together via JavaScript and augmented by more traditional web services. But that stuff's not even what I care about: when you're browsing a site via Dat, Beaker gives you a "fork this site" button.

If you find something you like, you can clone it, and just start making changes to your own copy. Like it's a HyperCard stack you got off a BBS in 1989.

That leaves out the importance of the web's connections - Beaker doesn't track or share these forkings, the way GitHub does - and the multi-user side of Beaker's capabilities is definitely going to be its main selling point. But I think this simple, SneakerNet-style brand of sharing could be more important than anyone guesses, perhaps even Beaker's own creators, for the simple reason that nobody has to log into anything. Sharing and collaboration can still be done, but those are separate applications, finally snapped clean. And no, Beaker isn't the first browser to do things like this - it's just modern, insightfully designed, and built on an IPFS-like base that does plenty of cool tricks. Since it's built on Chromium (via Electron), you could make it your go-to web browser and scarcely notice you were doing it (It Happened To Me!). Unless you're on Windows - they're still working on that - I strongly recommend picking it up and exploring.

I mean, that’s a thing, right? Us devs are all talking all the time
about the tool chain and the libraries and the npms and just how hard
building the web has become. But we’re talking about it wrong. The
problem is not that writing JavaScript today is hard; the problem is we
still have to write so much of it.

There isn’t a front-end development crisis; there is a browser
development crisis.

Think about how many rich text editors there are. I don’t mean how many
repositories come up when you search for “wysiwyg” or whatever on
GitHub; I mean how many individuals out there had to include a script to
put a rich text editor in their page. And for how long now? Ten years?
Sure, we got contenteditable, but how much human suffering did that
bring us?

Where is <textarea rich="true" />? Where, for the Medium-editor fans,
is <textarea rich="true" controls="popup" />?

Believe me, I have already thought of your cynical response. There are
9,999 reasons to call this a pipe dream. I don’t have time for them,
thanks to all the Webpack docs I have to read. I’m not talking about
things that’d break the web here – I don’t want us to try to build
jQuery or React into the JS engine. I’m talking about things that are
eminently polyfillable, no matter how people are deploying them now. And
do I want to start another browser war? Yes – if that’s the only way my
time can be won back.

Lots of web pages have modal content – stuff that comes up over the top
of other stuff, blocking interaction until it’s dismissed. It was
pointed out to me on Twitter that there have already been not one, but
two standards for modals in JS, both of which have been abandoned. But
they tried to reach toward actual, application-level modals, which
already constitutes a UX disaster even before you add the security
problems. By contrast, the web modals you see in use today are just
elements in the page; a <modal> element, one you can inspect and
delete in Dev Tools if you want, makes perfect sense. It might not
replace a ton of code, but every little bit helps.
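A <modal> element wouldn’t even need much machinery behind it. As a sketch (all names here are my invention; the DOM wiring a real polyfill needs – overlay rendering, focus trapping, Esc handling – is omitted so the logic stands alone), the interaction-blocking behavior comes down to a dismissal stack:

```javascript
// Hypothetical sketch only: the interaction-blocking state a <modal>
// element would manage. Names are invented, and the actual DOM work
// (rendering the overlay, trapping focus) is deliberately left out.
function createModalStack() {
  const stack = [];
  return {
    // a <modal> has been shown; it now sits on top
    open(id) { stack.push(id); },
    // the topmost modal is dismissed; returns which one closed
    dismiss() { return stack.length ? stack.pop() : null; },
    // which modal currently blocks interaction, if any
    active() { return stack.length ? stack[stack.length - 1] : null; },
    // should the rest of the page be inert right now?
    blocking() { return stack.length > 0; },
  };
}
```

That’s the whole “standard.” Everything else is presentation, which is exactly the part browsers are good at shipping.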

It doesn’t stop with obvious individual elements, although that may be
the best initial leverage point (reference the <web-map /> initiative
and its Web Components-based polyfill, the better to slot neatly into a
standards proposal). There are plenty of cowpaths to pave.
We need to start looking at anything that gets built over and over again
in JS as a polyfill… even if for a standard that might not have been
proposed yet.

You know what a lot of web sites have? Annotations on some element or
another, that users can create. They have some sort of style, probably;
that’s already handled. They might send data to a URL when you create
them; that could be handled by nothing more than an optional attribute
or two. While you’re at it, I want my Ajax-data-backed typeahead
combobox. But now that we’re talking to servers…

You know what a lot of web sites have? Users. I’m not the first to point
out that certificates have been a thing in browsers for pretty much the
entire history of the web, but have always had the worst UX on all the
civilized planets. There is no reason a browser vendor couldn’t do a
little rethinking of that design, and establish a world in which
identity lives in the browser. People who want to serve different
content to different humans should be able to do it with 20% of the code
it takes now, tops. (Web Access Control is on a standards track. Might
some of it require code to be running on the server? Okay – Apache and
Nginx are extensible, and polyfills aren’t just for JS; they’re for PHP
too.)
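And the core of Web Access Control really is small. Here’s a simplified sketch – real WAC expresses its rules as RDF (acl:agent, acl:mode, acl:accessTo), which I’ve flattened into plain objects so the logic can stand on its own:

```javascript
// Simplified Web Access Control check. Real WAC encodes these rules
// as RDF triples (acl:agent, acl:mode, acl:accessTo); each rule here
// is a plain object standing in for one authorization record.
function canAccess(rules, agent, resource, mode) {
  return rules.some(rule =>
    // the rule names this agent, or applies to everyone
    (rule.agent === agent || rule.agentClass === 'public') &&
    // ...for this resource...
    rule.accessTo === resource &&
    // ...and grants the requested mode (e.g. 'Read', 'Write')
    rule.modes.includes(mode)
  );
}
```

If identity lived in the browser, this check (plus a list of rules fetched alongside the page) is roughly all a server would need to serve different content to different humans.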

And all of that implies: you know what a lot of web sites have? ReST
APIs. Can our browser APIs know more about that, and use it to make Ajax
communication way more declarative without any large JS library having
to reinvent HTML? Again, it’s been like ten years. ReST is a thing.
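To make that concrete, here’s a hedged sketch of the typeahead-combobox case. The attribute names (list-src, list-param) are entirely my invention, not any standard; the point is how little logic sits between declarative markup and the request a browser could issue on its own:

```javascript
// Hypothetical: attribute names like list-src and list-param are
// invented for illustration. Given a typeahead input's declared
// attributes and the user's current text, build the GET request a
// browser could make without any page JavaScript.
function buildTypeaheadRequest(attrs, typed, base = 'https://example.test') {
  const url = new URL(attrs['list-src'], base);        // resolve relative URL
  url.searchParams.set(attrs['list-param'] || 'q', typed);
  return {
    method: 'GET',
    url: url.toString(),
    headers: { Accept: 'application/json' },           // a REST-ish default
  };
}
```

Two attributes in the markup, one convention in the browser, and every site gets its Ajax-backed combobox.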

While we’re talking reinvention, remember the little orange
satellite-dish icons that nobody could figure out? Well, if we didn’t
want to reinvent RSS, maybe we shouldn’t have de-invented it to begin
with. In the time since we failed to build adequate feed-reading tools
into browsers and the orange icons faded away, nearly all of the value
of the interconnected web has been captured for profit by about three
large companies, the largest being Facebook. For all practical purposes
in America, you can no longer simply point to a thing on the web and
expect people who read you to see it. Nor can you count on them seeing
any update you make, unless you click Boost Post and kick down some
cash.

Users voted with their feet for a connected web, which had to be built
on one company or another’s own servers – centralized. It had to be
centralized because we weren’t pushing forward on the strength of the
web’s connective tissue, making it easy enough to get the connections
users wanted. And credit where it’s due to Facebook and Twitter (and
Flickr before them) for doing the hard work of making the non-obvious
obvious – now we know, for example, that instead of inscrutable little
orange squares in the location bar, we should put a Follow button in the
toolbar whenever a page has an h-feed microformat in it. Or a bunch of
FOAF resources marked out in RDFa, for that matter.
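The detection side of that Follow button is genuinely small. A sketch, working on a simplified stand-in for parsed DOM nodes rather than a real document so it can stand alone: walk the tree and collect anything carrying the h-feed microformat class.

```javascript
// Sketch of the h-feed detection a browser's Follow button would need.
// Nodes here are a simplified stand-in for real DOM elements:
// { classes: [...], children: [...] }.
function findFeeds(node, found = []) {
  if (node.classes && node.classes.includes('h-feed')) {
    found.push(node);                       // this subtree is a feed
  }
  for (const child of node.children || []) {
    findFeeds(child, found);                // recurse into the page
  }
  return found;
}
```

A real implementation would also check for rel="alternate" feed links and the RDFa case, but this is toolbar-button territory, not library territory.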

Speaking of microformats and RDF and bears oh my[1], it might be
time to stop laughing at the Semantic Web people, now known as the
Linked Data people. While we’ve been (justifiably) mocking their
triplestores, they’ve quietly finished building a bunch of really
robust data-schema stuff that happens to be useful for a clear and
present problem: that of marking things up for half a dozen different
browser-window sizes. Starting with structured data is great for that.
Structured data may also be helpful for the project of making browsers
help us do things to data by default, instead of having to build
incredibly similar web applications, over and over and over again.

But Mike, you’re thinking, if the browsers build all these things
we’ve been building in JS as in-browser elements, then everything will
look the same! To which I say, yes – and users will stand up and
applaud. They love Facebook, after all, and there ain’t no custom CSS on
my status updates. It’s not worth it. Look, I don’t want to live in a
visual world defined by Bootstrap any more than you do, but it’s time
for the pendulum to swing back for a little while. We need to spend some
time getting right about how the web works. Then we can go back to
sweating its looks. And it’s not as if I’m asking for existing browser
functionality to go away.

But Mike, you’ve now thought hard enough that you’re furiously typing
it into a response box already, you have no idea. Seriously, you have
no idea how hard it would be to do all this. Well, you don’t spend 20
years building the web, as I have, without getting at least some idea
of how hard some of this will be. But you’re right, it will be stupid
hard. And I’ve never been a browser engineer, so I have no real idea how
hard.

And you, I counter, have no idea how worth it all the hard work will
be. To break Facebook’s chokehold on all the linking and most of the
journalism, or, if that doesn’t move you, to just see what would
happen, what new fields would open up to us if connection were free
instead of free-to-play. To bring users some more power and consistency
whether individual web builders lift a finger or not. And yes, to bring
front-end web development back a little bit towards the realm of the
possible and practical.

Flash is dead; that is good. Apple may have dealt the decisive blow, but
browser vendors did most of the legwork, and now as a direct result we
have browser APIs for everything from peer-to-peer networking to 3D, VR,
and sound synthesis. All of that is legitimately awesome. But for all
the talk about defending the open web, that stuff only got done because
a software-platform vendor (or three – Google and Microsoft’s browser
teams helped a bunch) detected a mortal threat to the market of its
product. When Mozilla threw its first View Source conference in Portland
last year, that was my biggest takeaway: Mozilla is a software
platform vendor, first and foremost, and will make decisions like one.
It happens to be a nonprofit, which is great, but which may also
contribute
to its proclivity to protect itself first. That self-interest is what
will drive it to do things.

So. Dear Mozilla: there is a new mortal threat to the market of your
product. It is the sheer weight of all this code, not in terms of load
time, although that’s bad enough, but development time. The teacups of
abstraction are wobbling something awful, and we need you to have our
backs. You employ people who are way smarter than me, and they can
probably think of way better things to put into place than the examples
I’ve got here. That isn’t the point. The point is there has to be less
code to write. Pave some cowpaths. Make Browsers Great Again. Or
something. Please. Thank you.

[1] Because I know they’ll get all in my mentions, I hasten
to add that microformats were created by an entirely different tribe of
developers than RDF-and-such, and were in fact created as a direct
response to how awful RDF was to deal with at the time. And yeah, it was
pretty awful to deal with… at the time. Now it’s better, and I kind of
think team Linked Data has regained the edge. I tried really hard not to
make this piece into a Semantic Web/IndieWeb beef tape. I’m sorry.

One of my favorite memories of childhood is lying on the floor of my
Dad’s office at Northbrae Community Church – he was the minister, about
which I have a great story that has been cut for time – in Berkeley,
California, reading Peanuts strips, of which my Dad had several
collections shelved alongside Bibles, commentaries, philosophy and
whatnot. I want to say those comic strip collections had pride of place
on that bookshelf, but the truth is I was lying on the floor and that’s
the only reason I spotted them there on the lowest shelf. So who knows?

Somehow I got to reading some about Charles Schulz and his approach to
his work. Maybe there was an interview in the back of one of the books.
Early on, I took in his opinion that the reason we all love Charlie
Brown so much is that “he keeps on trying.” To kick the football, to
talk to the little red-haired girl, to win a baseball game, to belong.

I didn’t connect with that sentiment. It’s not clear what I did do with
it, but I never felt like that was a reason to love Charlie Brown. At
the time I just loved him instinctually. Here was this kid who, like me,
didn’t really fit in, and got a lot of shit thrown at him for no reason
that he could see – maybe just because others needed some entertainment.
Just like him, I didn’t have the tools to deal with that harassment, not
without poisoning myself a little bit inside every time, and just to mix
metaphors and switch over to the Peanuts animated cartoons, none of
the adults seemed to be speaking my language when I asked them what to
do. Their advice – just ignore them when they pick on you! – might as
well have been a series of muted trumpet sounds.

I didn’t love Charlie Brown because he kept on trying – I loved him
because the alternative was loving a world that thinks some people are
just better than others, and that those people who don’t seem to have
the world’s favor should certainly never ask why or why not. They should
just keep on trying. (Charles Schulz, by the way, was a lifelong
Republican donor.)

Now, I’m notorious for reading literature a bit shallowly (and yes,
Peanuts is literature, up there with The Great Gatsby as some of the
greatest and most iconically American of the 20th century, but that’s
another post), and I miss layers of meaning sometimes. My dad pointed
out as I was writing this that reading Charlie Brown more generally as
hope, and specifically as a tragic hero defined by his inability to give
up hope, is a pretty strong reading that also supports that Schulz
quote. Personally, I could see Schulz connecting with Charlie Brown more
on the level of commitment to one’s job; the fact that Schulz could do
the same gags with Charlie Brown for 50+ years and never have to deal
with him changing is something he could feel good about (n.b. his own
career as a cartoonist, and the occasional strips about Brown’s father,
a barber, and his connection to that craft). Charlie Brown kept showing
up for work, which Schulz and others could admire and enjoy on more than
one level.

But permit me an indulgence. Lately I’ve been nursing this crackpot
theory that the American Civil War actually started in England in the
1600’s. I have another theory on the side, more straightforwardly
supportable, that said war is also ongoing. To get at my case for its
beginning, though, I’ve gone to Albion’s Seed: Four British Folkways in
America by historian David Hackett Fischer. One of the so-called
folkways – a “normative structure of values, customs and meanings” –
Fischer chronicles is that of the Royalist side of the English Civil War
that became known as the Cavaliers.

The Cavaliers were, as you might guess, known for having horses when
their opponents more often didn’t, but also for mostly being wealthy and
interested in letting you know they were wealthy, and for their interest
in having big estates with really, really big fuck-off lawns; a
particular style of being landed as well as moneyed. The English Civil
War separated the monarchy from political power – if not quite for good,
and as it turns out, Puritans make lousy rulers – but it didn’t
separate the Cavaliers from the kind of power that they had. And when
England got cold for them in the 1640’s, a lot of them moved to more
receptive territory in the colonies, namely in Virginia and points
south. Fischer draws a strong correlation between this migration and the
“Southern Strategy” that put conservatism back into its current power in
America.

In the English Civil War, the King and the Cavaliers were opposed by a
bunch of factions which, thanks in part to the close-cropped Puritan
hairstyle, became collectively known as Roundheads. I was so happy when
I heard that. I imagined that round-headed kid, good ol’ Charlie Brown,
in peasant clothes holding up a pike, demanding an end to the divine
right of kings. Permit me that.

I allow that Charlie Brown is an awkward symbol for forces aligned
against conservatism. He doesn’t win much, for starters. There’s also
the uncomfortable invitation to misogyny in the relationship between
failed jock Charlie Brown and frequent football holder Lucy Van Pelt,
which a certain flavor of person will accept wholeheartedly. Speaking of
which, one facet of Charlie’s woes is a major contributor to the
entitlement we now see in certain nerd cultures gone sour. (There was a
point when it could easily have done that in me. I’m still not entirely
sure how I avoided this.)

Instead, I ask you to respond to Charlie-Brown-the-symbol the way I did
as a child, but couldn’t articulate until recently: negatively. I want
you to tell him to stop being who he is, to grow out of his
perhaps-essential nature and start making demands. But stay his friend,
by demanding that the forces that make his world step into the frame and
be seen, lose the muted trumpets this time, and name their reasons for
letting this world exist. Charlie Brown has hope,
but he shouldn’t need it.

This is obviously personal for me. I didn’t become tough and wise by
virtue of recreational abuse at the hands of my peers; any wisdom I have
I was able to get in spite of their best efforts. Any strength is left
over from what they sapped. Some kids might respond to abuse and
interpersonal adversity by getting stronger, but if you’re writing off
the ones who don’t as losers, or trying the same methods over and over
of teaching them to cope, you’re indulging yourself in a toxic,
convenient fantasy. Making others feel small to feel bigger yourself is
no more inevitable a part of human life than humans killing one another
for sport. Polite society eliminated one of those; it can lose its taste
for the other.

When people become identified with a power they take for granted, they
go halfway into bloodlust when you threaten to mitigate that power in
even the smallest way. In the end, that’s the basis of conservatism. But
the power to take a shit on someone, at some point, when we’ve decided
it’s okay, might be one that we all identify with. So I don’t have a lot
of hope that we’ll change this in my lifetime, or even make a dent. But
I want to stop kicking the football. I want to start asking the
question.

In late October I declared November to be NaNoTwiMo – National No
Twitter Month – and took the month off of Twitter. I pledged neither to
read posts nor to make them, except in emergencies. I declared an
emergency for the day I finally got user creation working for
theha.us, my multi-user instance of the up-and-coming “distributed
social network” tool Known. (I say “up-and-coming” when I ought to say
“coming someday,” since the distributed part is still unimplemented, but
uh, I’ll get into that later.) And I decided not to count the occasional
trip to the profile page of a tech person who’d recently announced
something – the public nature of Twitter often makes it more useful than
email for open-source-related communications. And I cheated a few times.

Why do this when Twitter is more or less where I live online these days?
Because Twitter, corporately speaking, is steadily becoming less
committed to letting me direct my own attention. I can turn off the
display of retweets, but not globally – just one friend at a time – and
Twitter now also occasionally offers me something from someone a
friend follows, apropos of nothing. I can use a list, for those times
that I
only want updates from the people dearest to me, but lists now ignore my
no-retweets settings. Without that ability to turn down the noise when I
want, I find that using Twitter makes me less happy. And this is all to
say nothing of Twitter’s then-ongoing refusal to do anything systemic to
manage its abuse problem and protect my most vulnerable friends.
(Things have since gotten a hair better on that front.)

In a post on Ello that’s no longer visible to the public, net analyst
Clay Shirky wrote, “really, the only first-order feature that anyone
cares about on a social network is ‘Where my dogs @?’” It is
devastatingly, sublimely true. It is astonishing how much people will
put up with to be where their people are.

For November, when I had something to say I generally put it on Ello.
My account, like Shirky’s, is set only to be visible to other registered
Ello users (I have invites if you’re curious). I’m not sure why I’m
doing that, as it doesn’t make things private per se; Shirky is also
aware of this and thoughtful about how different levels of privacy
influence a piece of writing. It feels right sometimes to talk this way
in a different room, even if the door isn’t closed. The most surprising
thing about the last month is how many people – how many of my friends –
not only came over to Ello when I raised it as an option, but stayed.
They didn’t burn their Twitter accounts down behind them, and they
didn’t show up a lot; I’m often the only voice I can see above the fold
in my Ello Friends stream. But there were Monica and Jesse and Jenny and
Megan, showing up now and then, posting things that are longer than 140
characters, the way we thought we would (and did for a while!) at
Google+.

But that’s not a movement. It’s a pleasant day trip, and it might be
over.

It’s an article of faith in the tech community that a social network can
always hollow out the way MySpace did when a new competitor reaches a
certain level. But that was a different world. Almost ten years ago,
right? Getting all the kids to move is a whole other ballgame from
moving the kids, plus their parents, plus the brands and photo albums
and invitations and who knows what else. Not to be too
specific; I’m just citing Facebook as an example – my beef isn’t with
them in particular. (Facebook also beat MySpace in part by being
perceived as high status, and what’s higher status than every celebrity
you could name having an @-name?)

The last ten years have made us awfully demanding in some ways. If you
ship social software to the web, it had better have every feature that
people might want and have it immediately, because it will be taken for
always-and-forever being what it is when the first wave of hype hits. No
minimum viable product is going to win over the mass. Even more
frustrating is the IndieWeb movement: I may be about to display myself
here as one of those who give up hope when a feature is missing, but I’m
also in a position to know that the rate of progress of open-source
distributed social networks has been ludicrously slow. We finally have
an almost-viable open-source product, analogous to WordPress – that’s
the aforementioned Known – but it still has no interface for following
people, whether on the same site or elsewhere. The code infrastructure
is there, but there’s no way to use it yet. I guess all its hardcore
users are still using standalone RSS readers like good Web citizens or
something, but the mainstream was never interested in fiddling with
that. (Nor will standalone RSS readers support private posts.) Given
the, er, known impatience of the mass for anything that doesn’t do all
of the things already, I’m starting to worry that the indie web won’t
have what it needs to get traction when the time is ripe (that is, when
Twitter finally falls over).

Maybe I’m only running a Known instance, or caring at all, out of
nostalgia. I’m old enough to remember the web we lost. On the other
hand, there’s an important sense in which we got what we (I) wanted –
we’re all together, all connected… and it’s terrible. Clay Shirky has
an idea – a whole book in fact – about the cognitive surplus of a
population having been liberated by the 40-hour work week and creating a
kind of crisis where we didn’t know what to do with ourselves, until
television stepped in. Like the gin pushcarts on the streets of London
after the industrial revolution, television stopped us from having to
figure out what was wrong and fix it. In (Shirky’s) theory, the internet
is our equivalent to the parks and urban reforms that made gin pushcarts
obsolete – but what if all that connection is actually a crisis of its
own? I think a lot about something Brian Eno wrote in 1995 in his book
A Year With Swollen Appendices (he was writing about terrorism, but it
applies): “the Utopian techie vision of a richly connected future will
not happen – not because we can’t (technically) do it, but because we
will recognize its vulnerability and shy away from it.”

We may be shying away already, by using mass-blocking lists and tools
and the like. Maybe that’s not so bad, provided that Twitter’s
infrastructure can keep up. But then, we’re usually willing to do as
little as we can to stay comfortable instead of getting to the root of
the problem. I’m back on Twitter now, using a second account in place of
a list, which isn’t ideal (lists can be private). But where else am I
going to tell my friends when I’ve found something better?

It’s happening again as I write this, with tilde.club: at first people
were excited about the stripped-down, back-to-basics user experience of
a plain UNIX server designed for serving web pages, and the aspect where
logged-in users could chat at the command line gave the place the
feeling of an actual social network. But now the initial excitement is
spinning down and people are updating their pages less often;
whether the chat is still hopping, I couldn’t say – I don’t have an
account – but I guarantee you it’s changing.

What do we need from the social network that’s next, the one that we
actually own? (You could argue as to whether it’s coming, but no need
for that right now.) I propose that the moment we get bored is the most
important moment for the designer of an app to consider. Right? Because
what’ll people do with whatever revolutionary new web thing you put in
front of them? If my experience on both sides of the transaction is any
guide, they’ll probably get sick of it, and fast.

There are so many kinds of boredom, though. There’s the smug
disappointment of paddling your surfboard over to what looks like the
next wave, only to find that it “never” crests. A more common pair,
though: there’s the comedown – when something was legit exciting but
then the magic leaves – and then there’s the letdown, when something
seems exciting at first blush but you investigate and find the glamour
was only skin deep. Most systems have more to fear from the latter. New
systems that are any good, though, don’t often have a plan for the
former. Distributed social networking needs one.

What do people need at first, and then what do they need later?

At first:

Co-presence (hanging out)

Discovery (more and more users!)

Things to play/show off with (hashtags, what have you)

Later:

Messaging (purpose-driven – I need to get hold of *that* person)

Defense (from spam, griefing, and attention drains of various kinds
– generally, but not entirely, from the public)

Things to use and enjoy (tools and toys that aren’t purely social)

One’s needs from the first list never go away, exactly. You’ll always
want to bring something up to the group now and then (where “the group”
is whoever you’re actually personally invested in conversation with),
and play and discovery don’t die. But we see so much more design for
that first list – probably because a commercial social network needs to
privilege user acquisition over user retention… or thinks it does. And
as a whole culture we are only now coming around to the importance of
designing for defense, despite the evidence having been here for 35 years.

It’s hard to keep coding when the bloom is off the rose of a project.
One way to keep yourself motivated, when the work is unpaid, is to take
the perspective of that un-jaded, excited new user, discovering and
fooling around. This naturally leads to features that appeal to that
mindset. A major obstacle we face in developing the decentralized,
user-owned permanent social network is making faster progress while
maintaining the mindset that will result in a grownup network for
grownups.

There’s this story that you hear people tell, of a lost glorious age
taken away by those with no right to it, and its last, struggling few
defenders. This lost age is a time when there was no challenge to, by
which I mean not even the smallest noticeable difference from, a
standard hierarchy of power. All difference is challenge, you see,
because this person, this storyteller who values this lost age, is so
closely identified with their own power that any possible attack on it
might be an attack on their very selves. It ends up that the most
important job of conservatism is to protect “the private life of power”:
the intimate insults, whether in the home or on the nightly news, that
stop masters (or those who think of themselves as masters in training)
from feeling like masters. “Every great political blast – the storming
of the Bastille, the taking of the Winter Palace, the March on
Washington – is set off by a private fuse: the contest for rights and
standing in the family, the factory, and the field. […] That is why
our political arguments – not only about the family but also the welfare
state, civil rights, and much else – can be so explosive: they touch
upon the most personal relations of power.”

This analysis, like the quotes above, comes from Corey Robin’s The
Reactionary Mind, a polarizing book for people on both sides of the
ideological fence. Lots of folks on the American left believe that the
red-meat culture-war side of right-wing politics is just a cover story,
a theatrical shell over their real, merely corporatist agenda. Robin
proposes not only that the two conservative agendas are really one, but
that the people who espouse them are not crazy; instead, they have a
large and well-constructed body of philosophy behind them – they just
see no problem with its being built on an idea as sick as “some are fit,
and thus ought, to rule others.” This possibility frightens a lot of
middle-class progressives, because it means that we will have to fight
after all, and fight hard. The liberal middle class hates fighting. We
hate the thought that we can’t all just get along if we finally explain
the facts well enough.

This anxious aversion to conflict, I have to admit, is probably what has
driven a lot of my online research into roleplaying. You’d think that
when it comes to games, the stakes would be so low that there wouldn’t
be much fighting, and certainly not much anxiety over it. But many
people in online RPG-discussion circles seem to have a permanent hate-on
for new-style gaming, to the point where some have made that hatred a
banner of online identity. It’s confusing, at first blush; I mean, how
can people not grasp that they can just go on playing whatever they
like? Why react not just so strongly, but so persistently? It doesn’t
stop online, either. Many folks in the real world who’ve tried to
introduce new games or gaming techniques to traditional roleplayers have
been rebuffed with accusations that seem out of all proportion.

My anxiousness has declined a great deal since I’ve realized what’s
going on: roleplaying games, to date, have generally embodied a number
of power relationships – between players and the fiction, and between
players and the GM. For the last forty years, the roleplaying hobby –
the loosey-goosiest game ever invented – has invested most of its hopes
for any feeling of fairness in the role of the game master, or most
often the Dungeon Master. The GM/DM has been invested not only with
the final say over any matter that comes up for adjudication, but with
control over the game’s opposing forces. Players venerate the people who
manage this conflict of interest well, while anyone who can’t – while
being given precious little systemic support for doing so – has, over
the life of the hobby to date, mostly just been shamed.

It’s long been traditional, as well, for the GM to be the social host
of the game, and to decide who is invited to be part
of the game and who’s not. Since one incompatible player can ruin
everyone’s fun and a lot of players regard play opportunities as a
scarce, valuable commodity, the GM role can be a massive source of
social power.

On top of these, there’s the power of the storyteller. In some RPG
subcultures, the GM is expected to be the main driver of the narrative.
If players want to do anything of great consequence to the plot, they
can’t just up and do it – they either need to cooperate deliberately
with the GM, or they simply understand that what they’re at the table to
actively do is something else (perhaps fighting the monsters that have
been placed in the encounter, perhaps just being a bystander to a good
story). All fine, and all perhaps necessary when the rules don’t much
help all players get a satisfying story simply through their play
actions, but all certainly adding to the social power of the GM role.
Great storytellers are respected across cultures.

The GM-and-players relationship is not the only power relationship in
RPGs. A player who has mastered the rules, or other skills required to
play well, successfully enough to get whatever he or she wants out of
the game, gets many forms of power, including some social ones. In a
collaborative game like a traditional RPG, do you help the other players
when they struggle with rules? When, and on what terms? Do you help them
with strategy, or do you deride them as dragging the group down? What if
you don’t have that mastery and your contributions to the game are
getting blocked by people who do – do you then build a relationship with
the GM, such that you depend on her to keep that blocking player in
check so you can contribute?

These are all power relationships that invite personal identification.
How often do we hear GMs identify as such, almost like it’s an
ethnicity? How often do they talk about “their players” in a vaguely or
explicitly paternal way? And in the end, what identification could be
more personal than one’s role in a game full of stuff made up by
oneself and one’s friends? Especially a game that’s not essentially
different from the game you played for countless hours in your
childhood?

So, you have people who for whatever reason are closely, personally
identified with their position of power at the gaming table – no matter
whether that position is high or low. Non-RPG story games upset these
positions. They become a threat.

I am not saying that people who defend traditional RPGs necessarily hold
conservative politics in other arenas, although [as I’ve said
elsewhere], it shouldn’t be forgotten that D&D was born amongst
Midwestern armchair generals who didn’t like hippies much. RPGs also
quickly found cultural footing in the science fiction and fantasy
fandoms, which have their own strong currents of conservatism to this
day. But conservatism can also be quite compartmentalized; you might
have no beliefs about a natural status order of economic roles, but
strong ones about an order of genders, as one example. (Not to forget
liberal activists who end up showing off, and defending, their privilege
– nor people who identify destructively with a permanent role of outcast
or spoiler.)

I’m also not trying in general to make the problems with our
conversations about RPGs out to be a bigger or more important problem
than they really are. It’s enough, to me, that RPG conservatism poses
problems for anyone who wants to work towards a better hobby-wide
conversation, find players for new games, or even just search on Google
for more information about them. Not even coming up with the new term
“story games” can help us with that one forever.

(By the way, all of the above also explains why D&D edition wars will
continue, despite almost every edition of D&D currently being back in
print.)

Entries are due by midnight PDT on Friday, April 19. Send me some mail
and either attach the game or give me a link. Put [Twine] in the
subject line of all entry emails.

I will be judging all entries and selecting a winner. Judgment criteria
include innovation in use of Twine mechanics, replay value, and
expressiveness/awesomeness/tendency to make milk come out of my nose.*
Bonus points for incorporating something I’ll recognize from story gaming but not being too hammy about it.

There will be a prize, valued at approximately $40 and not very useful.
I haven’t selected it yet.

I’ll be updating this post as needed with further news. Send email or
come find me on G+ if you’re dying to discuss something.

Someday, one of these crazy web things you make will catch on – yes, it
will! I believe in you! – and when you aren’t busy freaking out about
scaling it up, you will maybe want to spend a couple minutes thinking
about writing it so the rest of the world can read and use it. Here are
some ways to do your future self a favor when you do your markup and
styles.

Give descriptive IDs to all the things. Any JavaScript that’s gonna
translate the non-core text on your page is gonna need to find it first.
Even if you’re normally into using fewer IDs, consider the way you’d
want to work if you had to write a translation tool (or a translation).
No, lengthy CSS selectors are probably not that way. Nice,
human-readable ID attributes that succinctly describe the text contents
are the way to go.
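To make that concrete, here’s a minimal sketch of what a client-side translation pass might look like when your IDs are descriptive – the IDs and French strings are entirely hypothetical, made up for illustration:

```javascript
// A minimal sketch of a translation pass over descriptively ID'd
// elements. The IDs and French strings are hypothetical; the point is
// that human-readable IDs make the lookup table self-documenting.
const frenchStrings = {
  "signup-call-to-action": "Inscrivez-vous !",
  "forgot-password-link": "Mot de passe oublié ?",
  "site-tagline": "Des outils pour les gens qui fabriquent des choses"
};

function translatePage(doc, strings) {
  // For each known ID, swap that element's text for the translation.
  for (const [id, text] of Object.entries(strings)) {
    const el = doc.getElementById(id);
    if (el) el.textContent = text; // quietly skip IDs not on this page
  }
}
```

Compare maintaining that table to maintaining a pile of positional CSS selectors that break every time someone reorders a div.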

Charsets, punk. UTF-8 declarations are part of HTML 5 Boilerplate
and other templates more often than not, but just to make sure, check
that you have <meta charset="utf-8"> in your <head>. If you’re
rocking a proper HTML5 doctype, that should be how to write the
<meta> tag.

Watch margin-left and margin-right. Either avoid calling out
these specific properties in your styles, or make sure your special-case
classes or alternate sheets for RTL will have no trouble overriding
them. Don’t go around sticking !importants on things that won’t be
important in Mandarin. Bear in mind that in the future, margin-start
(currently -moz-margin-start and -webkit-margin-start, and similarly
named -end properties) will automatically apply to the right thing in
RTL or LTR situations. But right now it’s good for impressing people in
job interviews and that’s about it.
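The override pattern might look something like this – the class name is hypothetical, and the idea is just that the base rule leaves room for an RTL sheet to flip it cleanly:

```css
/* Base sheet: indent the sidebar from the left. No !important, so an
   RTL sheet can override it. The class name is hypothetical. */
.sidebar {
  margin-left: 2em;
}

/* RTL overrides, scoped to RTL documents: zero out the left margin
   and apply it on the right instead. */
[dir="rtl"] .sidebar {
  margin-left: 0;
  margin-right: 2em;
}
```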

How will those fancy new CSS properties know when things are RTL, you
ask? (I had to ask this, so don’t feel bad. Or feel bad, but do it for
me.) CSS has a direction property that takes rtl but defaults to
ltr. Also there’s the dir HTML attribute that takes the same values,
which has been around a while but is now (in HTML5) kosher to use on
absolutely any tag you like. Look ’em up for more.
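A quick sketch of the markup side – the content here is purely illustrative:

```html
<!-- Sketch: in HTML5, dir is valid on any element and cascades to its
     children, so you can mix directions within one page. -->
<body dir="ltr">
  <p>An English paragraph flows left to right as usual.</p>
  <blockquote dir="rtl" lang="ar">هذا نص بالعربية</blockquote>
</body>
```

The CSS direction property does the equivalent job from a stylesheet (e.g. a rule setting direction: rtl on a selector) when you’d rather not touch the markup.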

There’s a thing psychologists call the fundamental attribution error.
You could summarize it (when married with its cousin, self-serving bias)
as “I Messed Up For A Good Reason, You Messed Up Because You Just Suck.”
Specifically, the reasons we give when we mess up tend to be external
factors, rather than some internal quality we identify with, whereas the
reasons we assume for other people’s mistakes or offenses are internal
rather than external – inherent to who they are. We make this mistake in
part because we have access to our own subjective experience but not
other people’s; if we did have that access, it would tell us a lot about
what’s really going on with them.

Now, when you play a roleplaying game, there’s an other person: your
character. To the degree that you aren’t just treating your character
like a pawn, you have to do some thinking about the reasons for what
they do, because you’re deciding what they do. But they don’t exist;
they’re only in your head.

It’s okay; you can be just as wrong.

My theory, for which I have some backup, is that we don’t have nearly
as much access to our roleplaying characters’ subjective experiences as
we think we do. See, the external conditions that surface into our
conscious awareness when we make decisions aren’t the only conditions
that apply. There are dozens (this may be low by an order of magnitude
or two) of involuntary chemical responses our brains have to things,
especially stressful things. They are far enough outside of conscious
control that you might as well call them external – they’re certainly
external to the conscious volition of everyone but the very highly
trained – and they are often the most salient conditions to the kinds of
judgments we make when we make the FAE. I am talking especially about
fear and embarrassment.

These chemical responses can turn into conscious feelings in unexpected,
and unexpectedly changeable, ways. I’ve been seeing the story make the
rounds lately of a 1974 study (the same year as the first publication of
D&D!) in which men who had just walked across a famously frightening suspension bridge were asked a set of questions just afterwards by an
attractive female interviewer who offered her phone number to the men
for follow-up questions. A control group was given time to recover
normal heart rate and such before being approached. Men in the control
group were significantly less likely to call the number and ask for a
date. The slight shift in context – hey, attractive woman! – took the
neurological arousal of fear and put it to an entirely different conscious purpose. That’s just one example.

None of this would have any implications for roleplayers, if it weren’t
for the way we often check in to our characters and imagine what they
would do: by stepping into our characters’ heads, and trying to see
through their eyes, at least metaphorically. Now, you do find the
occasional “immersive” roleplayer who claims to have a trance-like
ability to feel what their characters feel. But these claims are
unverifiable, and the idea that a master immersor’s brain chemistry
would reproduce a natural response with the necessary kind and depth
of neurochemical accuracy strikes me as an extraordinary claim
requiring extraordinary proof. At any rate, most of us keep a little more mental
distance from our characters while we play.

Suppose you’re “roleplaying out” an in-character debate, in one of those
free-form roleplaying sessions where vague, grandly scoped political
debates take forever and never resolve. Apply our jokey summary this
way: “I caved in the argument because my feelings ganged up on me.
You caved in the argument because you just suck.” Except the “you” is
your own character, right? They don’t suck! They’re pretty awesome, in
fact! So why would they cave in? And so the argument goes on.

Well, actually, they’d cave for the same reason you would: involuntary
emotional responses. Maybe not under the exact same terms or at the
exact same time, but they won’t be free from those forces, unless their
neurology is substantially non-human.

If you want to make realistic decisions on your character’s behalf
when your character is in one of many kinds of stressful situations,
you must either apply some kind of external constraint (e.g. system),
or step away mentally from imagining your character’s conscious volition
(that is, think more authorially).

Already, though, I should remind you that the first word in that there
commandment is “if.” Realism isn’t the one true yardstick by which
story, play, or our weird story/play amalgam called roleplaying must be
judged. But (and here we go back to doing aesthetic theory), I think
roleplaying needs more realism in this particular neurological arena,
for two reasons.

The first is that content wherein the heroes never feel self-doubt,
fear, or a single moment’s weakness is trite. It’s fine for kids and, I
hasten to add, for games in which you aren’t there for the content –
that is, in which the main point is clearly the gamey business of
bashing monsters and thinking tactically. (However, the recent history
of indie video games shows us that tactical, traditionally heroic
gameplay need not conflict with other modes of gameplay that question or
even undercut them.) But my primary interest is in those games that
value story more highly.

The second is that lack of realistic judgment about our characters’
mental states contributes to many of the classic social problems in
gaming that split up groups and drive gamers out, as well as stopping
new participants from coming in. I don’t just mean the two-dimensional
stories and interminable arguments; I mean things like the rampant
sociopathy on characters’ part that tends to creep into many games, due
to the human inability to feel involuntary shame on behalf of their
made-up characters. (The other side of that coin is the ineffectiveness
of peer pressure on a fictional character; RPG lore is also full of
tales of the one player who insisted on making their character follow a
rigid moral code, screwing up the other players’ fun. In reality, a holy
knight who went around adventuring with a bunch of miscreants would find
themselves acquiescing to all but the most horrible crimes in pretty
short order.) Basically, besides all the other problems with saying “but that’s what my guy would do,” when someone says those words, you
should take a hard look at whether it really is.

There’s no limit to the number of ways system could possibly deal with
the problem. One is to break the one-to-one relationship of player
decisions to character decisions, by allowing more than one real brain a
shot at driving the fictional brain. If the incentives are lined up
properly, this could do the trick, but I don’t know of a good specific
example. Another possibility is to enforce a fiction change that lines
up with the desired player experience – a recent conversation between
Vincent Baker and E6 author Ryan Stoughton speculates on a game in which players’ characters are recast as robots who have certain rigid
programs that take over for their free will. I’m unconvinced that
particular game would put the players’ felt experience in the exact
right place, but it’s an elegant attempt.

The most historically popular option, though, is so-called “social
mechanics,” meant to handle things like mental stress and manipulation.
Social mechanics have a reputation amongst RPGers for not doing these
things all that well (often because they’re modeled closely on the main
historical lineage of RPG mechanics, those derived from wargaming). The
last ten or so years of design have produced systems that do the job
better, as well as systems that dodge the question entirely by operating
on a much less character-viewpoint-identified level – asking players to
think from time to time like authors or directors, as well as like their
characters. Entrenched roleplayers are famously resistant to either
approach, often advocating instead that we “just roleplay it out.”

One thing’s for sure, though: these players are not wrong when they call
social mechanics “mind control.” They’re just wrong that their own
minds aren’t being involuntarily controlled all the time.

But if you don’t like the two choices in my boldfaced rule above,
there’s actually a totally viable third one: simply accepting that some
of the character decisions in your game are going to cause problems, and
that it isn’t such a big deal to go on with your game knowing the
problems are there. Being wrong isn’t the worst thing that can happen to
you, and if your game is fun overall, then you should enjoy it.
Accounting for taste is the job of aesthetic theory, but that doesn’t
mean it’s, you know, possible.

It might not be clear to some readers of my series on defining story
games (in three parts!) just how it is that the rules of a
game, of all things, are supposed to interact with an ongoing fiction. I
mean, what do you do? Do you just flip a coin and say, “heads, my guy
beats your guy, and tails, your guy beats my guy”? And that doesn’t even
answer anything, because when do you do that, and under what
circumstances? Finally, just: what’s the point of using this rule, on
this story, when we could just freely make stuff up instead?

Starting with the first question: every game does it differently, that’s
part of the point of having different ones. And while not all games are
structured in a way that makes this plain, you could think of a story
game’s rules as a set of inputs and outputs – and indeed we already
have, in our loop diagrams. In the fat-green-loop variant of the
diagram, input comes from the fiction-y bits, into the rules, and the
rules put some specific addition or restriction back out into the
fiction. (This is leaving aside a certain level of rules, implicit for
long-time roleplayers, that govern the way we make stuff up: players
say what their characters do, one player per character in most games, et
cetera. Those rules and other implicit rules are always on. When I talk
about made-up stuff and rules being separate, assume for now that I mean
the diegetic content of the game versus explicit, procedural
mechanical interactions.)

Helpfully for the purpose of giving you an example, a recent design
trend has been back towards rules interactions that are brief, focused,
and very specific about when to apply them and what goes back into the
story. This trend was crystallized neatly by Apocalypse World, a game
by Vincent Baker, which puts the bulk of the rule interactions players
make into what it calls Moves. Here’s a sample move, from the Gunlugger
character’s playbook:

Fuck this shit: name your escape route and roll+hard. On a 10+,
sweet, you’re gone. On a 7–9, you can go or stay, but if you go it
costs you: leave something behind, or take something with you, the MC
will tell you what. On a miss, you’re caught vulnerable, half in and
half out.

Now, if your game’s loop is more black than green, you want rules that
let made-up stuff change play-by-the-rules in such a way that your
experience of play-by-the-rules is enhanced, not diminished. This opens
all sorts of questions of balance and fairness that remain challenging
for designers to this day. Our primary interest here, though, in case
you haven’t noticed, is fat-green-loop games. In mostly-making-stuff-up
games, you (predictably) want rules that let play-by-the-rules change
made-up stuff in such a way that your experience of made-up stuff is
enhanced, not diminished. That’s what the above is an example of. It
triggers when the character is in a specific situation (in this case,
wanting out of somewhere dangerous), and it complicates that situation
in certain known but flexible ways.

However, in making-stuff-up-oriented games we face the challenge that
our own process of collaboration, the just-talking-to-each-other part,
is in competition with the rules. More rules-oriented games don’t have
this problem; when they get more and more rules-y, they just trend
towards not being story games anymore. Until then, they remain story games because
no matter how small the green loop gets, it’s still there; the things
you get from it can’t be gotten any other way. In a fat-green-loop game,
you can similarly argue that at least one tacit rule will always remain
(the one that says “we’re making up a story”), but every rule that
actually makes one designed story game different from another could
conceivably fall away. To put it Vincent Baker’s way, if a given rule
doesn’t get a given group better results for their story than “vigorous
creative agreement” does, then there’s no reason for that group to use
that rule. Story games have to keep justifying their existence by
bringing players things that they didn’t already know they wanted.

The trick to that – that is, the aesthetic value in a given piece of
game design – lies in when you decide to make the input, what the rules
put back out, and in how it feels to use the rules to make that
transformation. All three of those things should support the goal of
play – there’s that weaselly phrase again! – to the
satisfaction of the designer and the players.

So can we put this together into a nice, concise package? Here’s Baker
again, who along with fellow habitual-RPG-theorist Ben Lehman has
lately been doing it like this:

A rule is something, spoken or unspoken, that serves to bring about
and structure your play.

Good rules encourage players to make interesting changes to the
game’s state.

“Interesting” is locally defined by play groups.

This is a bit of a change to the way RPG theory is heading. Some of you
may have heard, or read, about a little thing called “GNS,” and its
birthplace, a web forum called The Forge. It’s hard to separate the
two, perhaps because GNS stands for three different families of
“creative agenda” in RPGs that have been “observed to conflict with one
another at the table,” and throughout its recently-concluded life, The
Forge tended to cause conflict.

For most of the last decade-and-change, GNS – never mind what it even
stands for – has been the nearest thing story games have had to a theory
of aesthetics. As it turns out, though, conflict isn’t a great basis for
an aesthetic theory: conflict is complicated, divisive, and utterly
subject to accumulated historical accident. When you try to make it a
part of your answer to “what should rules have to say to the story?” you
end up getting an argument about something else most of the time. On top
of that, all this theoretical work was being done on web forums, which
are notoriously poor at keeping arguments under control.

(It should be said, though, that when the seeds of GNS theory were
planted, the fight was kinda necessary. It was 1995 or so, and the state
of the roleplaying art was a muddle. There was a new 800-pound gorilla
on the block, a game called Vampire: The Masquerade, that had a
bell-clear stylistic vision and did the rare trick of actually,
for-reals having an effect on the larger culture outside of gaming. It
produced, and its progeny continue to produce, a ton of fun play. But…
its actual rules-bits did little or nothing to reinforce its style and
themes, and it came right out and admitted this, pushing and
popularizing the notion that satisfying story-play and the use of rules
were mutually exclusive things. To get away with this trick, it had to
lend weight to some long-standing fallacies like the socially suspect
notion of the gamemaster as master-Svengali-storyteller. All of that was
theory that needed to be destroyed for the art to move forward, and GNS
helped to destroy a lot of it. So, good. GNS served its purpose, and now
we have a different job, that needs different tools.)

The new orientation around “interesting” is a much better foundation. In
some ways, it’s a cheat – not coincidentally, the same cheat that we
made in our definition of story gaming. It allows the same necessary
flexibility in terms. In the name of “interesting,” you can bring in
anything that shapes human attention – and if you wanted to fully
understand roleplaying, you might have to bring in everything.

But as of now, we have a definition and a basic aesthetic theory. Once
you have those, what do you do? You might start by making a few more
specific aesthetic sub-theories, such as the one I promised you last
time. After that, though, there’s also some of the more structural stuff
in the Big Model, the GNS-associated theory that we should be careful
not to throw out with the bathwater. So we might talk about that next.
And of course, you can go play.