the scripts

This section will “set the stage” (if you will) for these production
enhancements. Only a little actual script is used here. This is about
providing the mechanisms whereby specific features (defined elsewhere) can
easily and reliably get worked into the production at the appropriate times.

It will help to talk about different locations in the site as “settings.” A
setting is a set of locations in the site, usually defined as a matching
pattern.

Set changes occur when going from one setting to another. When entering a
setting, the script manager will see to the necessary setup, and when leaving a
setting, the necessary teardown. That’s it.
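The mechanism can be sketched in a few lines. All of the names here are hypothetical, not willshake’s actual API; it’s just the shape of the idea.

```javascript
// Hypothetical sketch of a set change: run teardown for settings being
// left and setup for settings being entered.  A "setting" is just a
// matching pattern paired with a setup and a teardown.
const log = [];
const settings = [{
  match: path => /^\/plays(\/|$)/.test(path),
  setup: () => log.push('setup plays'),
  teardown: () => log.push('teardown plays')
}];

function change_set(from, to) {
  for (const s of settings) {
    const was = s.match(from), is_now = s.match(to);
    if (was && !is_now) s.teardown(); // leaving the setting
    if (!was && is_now) s.setup();    // entering the setting
  }
}

change_set('/', '/plays/hamlet');      // entering: setup runs
change_set('/plays/hamlet', '/about'); // leaving: teardown runs
```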

Do not confuse preparing with doing. “Loading” a script is like preparing for a
production effect. A script should not do anything right away except register
the responses to its cues; in other words, it should associate a setup and
teardown process with a particular setting.

The teardown routine is responsible for cleaning up the effect established in
the setup. If an effect is supposed to be present everywhere, a script can
associate itself with the root path (/) and do nothing in the teardown.

Script modules—and, when possible, their setup and teardown routines—should
be idempotent; that is, it should be safe to run them multiple times.
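One simple way to get that idempotence (a sketch, not willshake’s code) is a guard flag:

```javascript
// A setup that is safe to run multiple times: the guard ensures the
// effect is only established once.
let installed = false;
let effect_count = 0;

function setup() {
  if (installed) return; // already done; calling again is harmless
  installed = true;
  effect_count += 1;     // stands in for establishing the real effect
}

setup();
setup();
setup();
// effect_count is still 1
```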

The special effects—established in the setup and teardown—can include
basically anything else that’s possible to do in a browser.

A couple of notes on performance:

Scripts are never responsible for anything essential to basic usage, so it’s
always safe to load them asynchronously. In other words, a script will never
block the user from entering.

Because scripts don’t take effect right away, it’s safe to pre-load any script
at any time. Pre-loading can be used to reduce delays when going to new
settings. (Of course, if it is supposed to be in effect now, then it should
already be loaded.)

The “show” only starts once. It proceeds through a number of scenes (the places
that the user visits).

A scope is a part of the site defined by one location and all of its
descendants.

What’s interesting about scopes? It’s common to define special behaviors that
apply only to a certain scope. In such cases, it’s necessary to know when the
user enters and exits that scope.

(Two locations are exclusive if their scopes do not overlap; as such, the
scopes can never overlap, even if new descendant locations are added. It’s also
common to consider moving between exclusive scopes.)
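Made concrete, with hypothetical helper names, the two definitions look like this:

```javascript
// A path is in the scope of `root` if it is `root` itself or a
// descendant of it.
function in_scope(root, path) {
  return path === root ||
         path.startsWith(root === '/' ? '/' : root + '/');
}

// Two locations are exclusive if neither scope contains the other.
function exclusive(a, b) {
  return !in_scope(a, b) && !in_scope(b, a);
}

in_scope('/plays', '/plays/hamlet'); // true
in_scope('/plays', '/playground');   // false: not a descendant
exclusive('/plays', '/about');       // true: the scopes can never overlap
```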

The simplest case is where you want to do something only once, and never again.
You don’t even need a special mechanism for that—just put some code in the
module.

The next simplest case is where you want to do something whenever the location
changes at all, in any way. That’s currently handled by navigating.on(change).

The next simplest case is where you want to do something whenever the user
enters a certain location. The essential bits that you’d express are a
path/context matching function, a setup function, and a teardown function.

bind({
  match: (from, to) => from.items.thing && to.items.thing,
  transition: (from, to) => {
    console.log("do stuff for going from", from, "to", to);
  }
});

Note this uses a different key, transition. A setup and teardown wouldn’t make
sense in this case.

How would you know that the matching function is expecting two arguments? Maybe
it’s always passed two arguments… but in the earlier cases (where you’re
ignoring it), you’d expect the to argument to be first. That’s a little
counter-intuitive for the transitional case, but that case is less common.

The matching function would work if you knew that thing was only defined for
the target scope. Otherwise, you’d also need to do a path pattern test.

So how could the bind function (or whatever you want to call it) be defined so
as to accomplish the above?

Basically, it’s like the register_effect function defined below.

But it assumes a context that contains the path, the path_pattern, and the items
(keys from the route match).
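That is, bind would be handed something shaped like this. This is a hypothetical example of the shape described, not what getflow actually provides:

```javascript
// The route context assumed by bind: the concrete path, the pattern
// that matched it, and the keys captured by the match.
const context = {
  path: '/plays/hamlet/1.2',
  path_pattern: '/plays/:play/:scene',
  items: {play: 'hamlet', scene: '1.2'}
};
```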

That’s really the only difference. But what would it take to get that stuff?
Getflow doesn’t expose the router, or push that information in state. The
reason for putting the onpushstate wrapper into willshake was to support
pushState calls from willshake itself (such as, I think, were done from the
audio module). That’s fair enough, but it means that you can’t count on
whatever state getflow includes when calling pushState. You’d have to be able to call
the router.

The point is that the setup function would only fire when you’re entering that
scope, and the teardown function would only fire when you’re leaving that scope.

How would bind detect a change in the case above where path_pattern is used?
After all, match will return true in both cases. It could compare the keys as
well… but how can you make the intention clear? I suppose if you match on a
path pattern, the intention is clearly to consider different routes to be
different.
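A sketch of that key comparison (names hypothetical):

```javascript
// Match the play route and capture its keys.
function match_play(place) {
  const m = /^\/plays\/(\w+)/.exec(place.pathname || '');
  return m && {play: m[1]};
}

// Two matches count as different if any captured key differs.
function keys_changed(a, b) {
  if (!a || !b) return a !== b;
  return Object.keys(a).some(k => a[k] !== b[k]);
}

const from = match_play({pathname: '/plays/hamlet'});
const to = match_play({pathname: '/plays/macbeth'});
keys_changed(from, to); // true: same route, but a different play
```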

But what are the semantics of this? Does this in fact have anything to do with
“scopes” as I’m calling them? In other words, is bind (or “register”) supposed
to be attaching to a scope as such?

And what about the audio case? This doesn’t allow you to be agnostic about the
actual route. I don’t want the audio player to attach to any particular paths
(as much as possible), so it has to use the more primitive movement hooks.
Right? The audio itself has nothing to do with the route (in principle), and
its work depends on information that’s not available from the route context.

There’s just no way to write this in terms of scopes, if you can’t be specific
about what those scopes are. So either you’d have to implement audio separately
for each place that supports it (which I don’t want to), or—again—use a
different mechanism.

I don’t see that there’s a way to generalize this. The audio setup has to be
responsible for its own “detection” of whether setup and teardown are necessary.

Well, I suppose that you could have the match function use information from the
environment. That sounds like a bad idea.

register_effect(
  place => place && (/\/plays./).test(place.pathname || ''),
  ({new_place, old_place}) => {
    console.log("setup for play", new_place);
  },
  ({new_place, old_place}) => {
    console.log("teardown for play", old_place);
  });

That’s nice in theory.

But what if you want to hook a transition between different contexts of the
same route? In other words, what if you wanted to do a play-specific setup?
Then changing from one play to another would not trigger a setup/teardown, if
the condition is just that the play route is matched. The same goes for hooking
the changes between scenes within a play.

What I actually do is just implement the “lower level” navigation hook that
register_effect assumed and tried to build on. It’s just a thing that happens
whenever the user changes location. Right now, features just listen for this and
handle their own setup and teardown.
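In other words, something like this. It’s a self-contained sketch: the real navigating PubSub is stood in for by a plain array of listeners.

```javascript
// A feature using the low-level hook: on every location change it
// decides for itself whether setup or teardown is needed.
const listeners = [];
const fire = event => listeners.forEach(fn => fn(event));

let active = false;
let setups = 0, teardowns = 0;

listeners.push(({new_place}) => {
  const wanted = /^\/plays(\/|$)/.test(new_place.pathname);
  if (wanted && !active) { active = true; setups += 1; }          // setup
  else if (!wanted && active) { active = false; teardowns += 1; } // teardown
});

fire({new_place: {pathname: '/plays/hamlet'}});  // setup fires
fire({new_place: {pathname: '/plays/macbeth'}}); // already active: no-op
fire({new_place: {pathname: '/about'}});         // teardown fires
```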

Anyway, what’s a PubSub? It’s a publication/subscription mechanism—a thing
that lets a broadcaster not know or care who’s listening. If that sounds
exceedingly useful, that’s because it is. The publisher just “fires” when
something happens, and the subscribers get dispatches.

At some point I also added that send method, apparently so that I could pass
multiple arguments and not worry about breaking any existing users of fire().
Those ought to be merged.

Yes, this is mostly just a Set where add is on (subscribe) and delete is off
(unsubscribe). But Set would need to be polyfilled, anyway.
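A minimal PubSub along those lines. This is a sketch, not willshake’s exact code; note that fire here already takes multiple arguments, which is the merge that send was added to work around.

```javascript
// on adds a subscriber, off removes it, and fire dispatches to all of
// them.  The publisher never knows who is listening.
function make_pubsub() {
  const subscribers = [];
  return {
    on: fn => { subscribers.push(fn); },
    off: fn => {
      const i = subscribers.indexOf(fn);
      if (i >= 0) subscribers.splice(i, 1);
    },
    fire: (...args) => { subscribers.forEach(fn => fn(...args)); }
  };
}

const navigating = make_pubsub();
const seen = [];
const listener = place => seen.push(place);
navigating.on(listener);
navigating.fire('/plays/hamlet'); // listener gets the dispatch
navigating.off(listener);
navigating.fire('/about');        // nobody is listening now
```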

That’s great, but how is navigating going to know when the location changes?

By hijacking history.pushState. Any piece of code that purports to change the
location has to call pushState at some point. What if pushState were actually
an informant for me, whom I’d secretly swapped with the real pushState, which
I’m holding hostage (in a closure)? Well, that’s what happens here.

<<navigation hook>>
// Returns a shallow copy of the parts of a Location within the domain.
function clone_location(_) {
  return {
    pathname: _.pathname,
    search: _.search,
    hash: _.hash
  };
}

// Add a hook to history.pushState.
function onpushstate(action) {
  const history = window.history;
  const real_pushState = history.pushState;
  // Don't make this an arrow function!  It uses `arguments`.
  history.pushState = function(state) {
    const old_location = clone_location(window.location);
    const real_result = real_pushState.apply(history, arguments);
    // Fire our hook *after* calling the native version, so that
    // window.location will reflect the new path, not the old one.
    action(state, old_location);
    return real_result;
  };
}

<<capture history>>
function that_happens_on_each_location(new_place, old_place) {
  navigating.fire({new_place, old_place});
  const more_modules = [].slice.call(
      document.head.querySelectorAll('meta[itemprop="module"]'))
    .map(meta => meta.content);
  // Load other scripts.
  require(more_modules, () => {}, function(error) {
    console.log("oops no things", arguments);
  });
}

// YOU'D HAVE to do this on popstate as well, right?
onpushstate(that_happens_on_each_location);
that_happens_on_each_location(clone_location(window.location));
window.addEventListener('popstate', event => {
  that_happens_on_each_location(clone_location(window.location));
});

The “script manager” is somewhat like a stage manager, in having to show up
before the rest of the crew, and in being responsible for making sure that all
scripts are executed on cue.1 By itself, it doesn’t do
anything worth looking at. But by coordinating the various actions, it helps
everyone else to stay focused on their parts.

This is the bootstrapper for scripts. Generally, we don’t want to care about
the order in which things happen, but this has to come first, because it’s the
setup for everything else.

It turns out that the module

The document will reference one script. That script will take care of the rest.

Regarding the z’s in that filename, I really want this script to be the last
thing in the document. It’s a hack I sometimes have to use. Like I said, the
system isn’t perfect.

The “stage manager” script (just below) will load scripts based on those meta
tags.

You can reference modules by their name; it’s not necessary to include the path.
All modules will live under /static/script, and since that’s the location of the
file indicated by data-main, this will be the default baseUrl (which is
good).2

This was moved in from the_web_server for more general use. Here and above need
update.

The prologue is the short script that comes before the module loader. It’s
important that this be kept short, but it is sometimes necessary to get
something done before the module loader. (See the web server.)

What is require.js? It’s a script that helps you get other scripts loaded in
the right order. In 2010, when RequireJS was first published (going by their
copyright, anyway), JavaScript did not have a first-class mechanism for making
and using modules. RequireJS—developed concurrently with the Asynchronous
Module Definition spec (AMD)—was designed to work with what was available at
the time. In other words, it’s written in plain old JavaScript.

And that might have been the end of the story.

But of course, the latest version of ECMAScript (2015) includes a module system,
and so theoretically provides a standard, built-in way of doing all this. As of
right now, no browsers implement it. None.3 It’s not
even listed in caniuse.com, which is generally a good place to stumble onto web
features you’ve never heard of before.4

Naturally, Babel lets you ignore this reality, by working with existing tools
like RequireJS to provide the present illusion of the future in which JavaScript
is even bigger. The rationale for using such an approach—which requires
additional build processing and dependencies—would be that you’ll be able to
seamlessly switch to the “Harmony” module system when browsers support it
natively.

That sounds good. But when will you really be able to ship code that targets a
feature which is currently non-existent? At some point Babel won’t be shimming
the future; it’ll be shimming the past. Either way, you’re stuck with
it.5

Besides, I tried all that stuff out (of course), and it’s very
wonky.6 For basic purposes, I just don’t see semantics that
aren’t provided equally well by AMD/RequireJS, which is a very small program
that solves a problem using available materials—the way good programs are
supposed to be.7

Bottom line, willshake uses require.js, and it is not considered a shim (at
this time).

We could get it from the internet, either by direct download or via npm, but we
just keep a copy in the project.

This also ships our other script dependency, jQuery. Did I mention that one day
willshake won’t depend on jQuery? Well, it’s not as painful as it might be,
since we’re using a custom build that doesn’t include Sizzle. And maybe jQuery
isn’t all that bad.

Speaking of development. In several places I ship these .src.js copies of
everything onto the site, just for easier debugging. The reason for the weird
naming scheme is that urlArgs is the easiest way to tell RequireJS to suffix
everything, and I keep the file extension so that the MIME type will be assumed
correctly. And the module name has to be munged consistently, or else you can
get two copies of it, and its initialization will run twice. That’s bad.

Of course, like all of willshake’s programs, JavaScript is written as part of
these documents. The script blocks are turned into files through the “tangling”
process (see the system). And again, JavaScript is delivered as text files.
Compared to other ways of shipping software, using scripts in a browser is easy.
It’s really just a matter of copying files, and referencing them.

But before getting to the web site, those JavaScript files go through a few
“preproduction” stages, for various reasons.

First of all, JavaScript is not really JavaScript. It’s really “ECMAScript,” a
name which, tradition holds, sounds like a skin disease. ECMAScript was the
result of people seeing that JavaScript was popular and deciding that everyone
should shake hands and make it official.

Well, the big transition right now is from ECMAScript 5 to ECMAScript 2015.

Most browsers don’t fully support the latest “official” version. That doesn’t
stop people from using it now, though. Just as we use a special-purpose
language (Stylus) that can be translated into the stylesheets understood by
browsers, we can also treat the newest version of JavaScript as another language
that can be translated into the currently-supported version. For this,
willshake uses a tool called Babel.

I stopped using minified code during development because it’s too much of a pain
with no gain. So the only benefit of generating source maps would be to debug
problems in production. Even then, you can live without source maps at this
stage, because error reports (if you bothered to send them) could be resolved
against the ES5.

That said, if you do add source maps here, note that you need to use the
--out-file option (instead of writing to stdout), or else the source maps will
be written inline.

This only works with a Babel version prior to 6. I use babel@5.8. Why?
Because Babel 6 kept getting slower at doing the same thing.

I would add that Babel 6—unlike previous versions—forces you to treat your
project as a “node” project, which willshake is not. This is because, out of
the box, Babel 6 does nothing at all, and the plugins and presets that actually
do things have to be installed “locally,” that is, in my project folder. They
argue that this is a good practice which makes your project more “portable.” I
suppose there is something to this logic. Perhaps willshake should also include
its own copy of gawk, and wget, and Graphviz, and ImageMagick, and LaTeX, and
Stylus, and all the other tools it uses to take input and produce output.

At any rate, this humble system has not reached that state of advancement in
which it has a place for “prerequisites” that must be obtained after itself, nor
for “externals” that must live inside of it.

The --pure-funcs option lets you tell the compressor about specific functions
which are known to be side-effect free. Since statements without side-effects
are dropped when the --compress option is used, this option can be used to
remove console.log statements from the product. Again, I’ve basically given up
on developing against the minified scripts, anyway, so in effect, they are only
for production.

I’m throwing up my hands and shipping the full source for development use.

Of course, just about everything in a web page is “a piece of writing,” not
least the document itself.

And yet the script in a web document is one of the very things that you don’t
see.

The script is the piece of writing that is interpreted by the browser.

“Script” is a term of art in computers, which usually refers to programs that
are interpreted from text. And notwithstanding that today’s (and tomorrow’s)
JavaScript engines no longer regard JavaScript as “interpreted” in that sense,
it retains the spirit of a scripting language since it is almost always
delivered as plain text.

In other words, a script is a plan of action. Without scripts, a web page can’t
really do anything (except be scrolled). It’s “just a document.” It’s dead.
In fact, the original name of the scripting language introduced by Netscape in
1995 was “LiveScript.” (Legend has it that they changed it to “JavaScript”
because Java (a completely different language) was the hotness at that time, but
I’m staying out of that.)

It’s scripts that make a fluid experience possible. Without any scripts at all,
willshake works like a web site from the early 1990’s. Each time you click a
link, a new page is loaded. It’s like a series of stills.

Again, willshake works without any scripts at all. So “the show goes on” even
without scripts, or script management. It’s just not quite as good a production.

For the sake of simplicity, this web site is a monolith.

All of the stylesheets and scripts are considered part of one big machine.

Many scripts and stylesheets will be relevant only in some places. But they
should not break anything in others. The only reason for breaking them up is
for the sake of efficient delivery. Any script (except maybe the first one) can
be downloaded (prefetched) and run at any time.

This is even more true with scripts, since, unlike stylesheet links, scripts
can’t really be removed after the fact. Removing a script tag won’t change the
fact that the script has already run, and whatever change it has wrought on the
state of the JavaScript VM certainly can’t be reversed (not in this world,
anyway). Therefore, it must always be safe to load any script. This view is
necessary as a consequence of the way that getflow works, but really it makes
things much simpler to think about, anyway.

console.log("um, this isn't loading, right?");

define(['jquery'], function($) {
  const STATE_KEYS = ['play', 'scene', 'anchor'];

  function get_route(location) {
    if (/^\/plays\/(\w+)(\/([\w.]+))?(\/|$)/.test(location.pathname)) {
      return {
        play: RegExp.$1,
        scene: RegExp.$3,
        anchor: location.hash.slice(1)
      };
    }
    return null;
  }

  function page_init(state, oldLocation) {
    /* Scroll to start */
    // The viewport will smoothly scroll to the location indicated by the
    // address's fragment identifier (if any).
    //
    // For new visits, this is handled automatically by the user agent
    // (although see load script).  This is only for steps within a visit.
    if (oldLocation) {
      scroll_to_anchor(window.location);
    }
  }

  // TRANSITIONAL: nominal export for use with loader.
  return {};
});

The stage manager normally “calls the show” (i.e., gives commands to execute all
cues during performance) and accepts responsibility for maintaining the artistic
integrity of the production throughout the duration of its run.

Presumably Harmony modules will solve some problems that aren’t
solvable with JavaScript alone, or else they wouldn’t do it, right? Anyway, I’m
not saying they’re no better, just that they’re no better for willshake’s
purposes.

Willshake is an experiment in literate programming—not because it’s
about literature, but because the program is written for a human audience.

Following is a visualization of
the system. Each circle represents a document that is responsible for
some part of the system. You can open the documents by touching the
circles.

Starting with the project philosophy as
a foundation, the layers are built up (or down, as it were): the programming
system, the platform, the framework, the features, and so on. Everything
that you see in the site is put there by these documents—even this message.

Again, this is an experiment. The documents contain a lot of “thinking
out loud” and a lot of old thinking. The goal is not to make it perfect,
but to maintain a reflective process that supports its own
evolution.