Saturday, December 14, 2013

Actually, before we get to dissecting anything, let me emphasize again that the most humane approach to authentication is not requiring it. If you have a system that could be exposed to anonymous users, that's what you should do. If you've decided that you absolutely must have some sort of authentication step, then this doesn't seem to be a bad way to go.

And that's it. In a production system, you'd obviously want to wire everything up to some database system or other rather than using an in-memory hash-table, but this explains the concept well enough. You'd use this module by including it, then calling (new-account! [user account data goes here]) (which will return a newly generated passphrase) and (sign-in "a-passphrase-goes-here") (which will return either nil or the account data you associated with the given passphrase) as necessary.
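For illustration, here's roughly what that interface amounts to, sketched in Python rather than Lisp. All names here are hypothetical stand-ins for the Lisp originals, not the actual raskin-auth code, and the word list is a tiny inline placeholder:

```python
import secrets

# A sketch of the module's interface, not the actual raskin-auth code.
WORDS = ["demijohn", "shoestring", "confirmed", "tweezers", "sulphur", "dive"]
ACCOUNTS = {}  # passphrase -> account data; stands in for the in-memory hash table


def new_account(account_data):
    """File the account under a freshly generated passphrase and return it."""
    passphrase = " ".join(secrets.choice(WORDS) for _ in range(3))
    while passphrase in ACCOUNTS:  # regenerate on the (unlikely) collision
        passphrase = " ".join(secrets.choice(WORDS) for _ in range(3))
    ACCOUNTS[passphrase] = account_data
    return passphrase


def sign_in(passphrase):
    """Return the account data for a known passphrase, or None."""
    return ACCOUNTS.get(passphrase)
```

Calling new_account hands back something like "sulphur dive tweezers", and sign_in with that exact string hands back the stored data.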

That's the ASD file and package. The first makes sure you can load this system using asdf or quicklisp, and the second declares your imports and exports. I'm trying something new this time and refusing to use :use or :import-from and friends. I've gotten a couple of comments to the effect that it gets a bit confusing if I import symbols directly rather than labeling them inline with the package they came from, so even though raskin-auth does use things from both ironclad and cl-ppcre, the package.lisp file is staying minimal.

random-words creates a list of count random words by picking them out of a dictionary, which is +dict+ by default. You don't necessarily want these words to be unique, so we don't check for that. +dict+ is just some slightly sanitized output from /usr/share/dict/american-english, which is where Debian keeps the default English-language dictionary. The result of that read is a vector of all words in the dict file that are composed entirely of lowercase letters. What we're doing, essentially, is shuf -n [count] /usr/share/dict/american-english, except we're filtering for some stuff, so that should really get piped through a grep or two. Use whatever method you'd like; the end goal is to get a list of count random words, drawn from a list of ~60000 different words, each with equal probability.
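A rough Python equivalent of that pipeline, with the dictionary inlined rather than read off disk (illustrative only; the real thing is Lisp over the Debian word file):

```python
import random
import re

# Stand-in for /usr/share/dict/american-english; the real list has ~60000 entries.
RAW_DICT = ["Abby", "apple", "it's", "quiet", "stone", "zebra"]

# Keep only words composed entirely of lowercase letters, as described above.
DICT = [w for w in RAW_DICT if re.fullmatch(r"[a-z]+", w)]


def random_words(count, dictionary=DICT):
    """Pick `count` words at random, with replacement, so duplicates can occur."""
    return [random.choice(dictionary) for _ in range(count)]
```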

iterated-digest takes a count, a digest-spec and a message, and applies the specified digest to the message count times sequentially. We'll take a look at how you call it in a second.
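In Python terms (illustrative only; the original sits on top of ironclad), the shape of it is:

```python
import hashlib

def iterated_digest(count, digest_name, message):
    """Apply the named digest to `message` `count` times sequentially."""
    data = message
    for _ in range(count):
        data = hashlib.new(digest_name, data).digest()
    return data
```

So iterated_digest(2, "sha256", b"foo") is sha256(sha256(b"foo")).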

*users* is a hash table that'll keep all of our user records[1], and both new-account! and sign-in are hopefully self-explanatory. Let me linger on the rest of that, though.

First, you absolutely, positively need the *random-state* initialization. Without that line, your system will generate the same sequence of passphrases each time it starts up. Maybe that's not too big a deal in general, but I'm paranoid enough that I want proper, OS-seeded randomness when I'm generating authentication tokens.

That takes a particular passphrase string and returns the result of applying the :sha256 digest to it 10000 times. I guess you could make that :sha512 if you really wanted to.

Finally, fresh-passphrase does the job of calling random-words, concatenating the result, and checking whether that result is already on record. It keeps going until it generates a passphrase that no one else is using at the moment, and returns that. You can see that it scales the passphrase length somewhat with the count of registered users, just to make sure we don't get into a situation where a particular passphrase length becomes particularly easy to guess.
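Sketched in Python, it might look like the following. The scaling rule here is my guess at the idea, not the actual formula, and all names are made up for the sketch:

```python
import secrets

WORDS = ["ash", "bolt", "cedar", "dune", "elm", "flint", "gorse", "hazel"]
TAKEN = set()  # passphrases already on record


def word_count(user_count, base=2):
    # Hypothetical scaling rule: add a word each time the user count climbs
    # through another power of the dictionary size.
    n, extra = max(1, user_count), 0
    while n >= len(WORDS):
        n //= len(WORDS)
        extra += 1
    return base + extra


def fresh_passphrase():
    """Generate passphrases until one comes up that no one is using, then claim it."""
    while True:
        candidate = " ".join(secrets.choice(WORDS) for _ in range(word_count(len(TAKEN))))
        if candidate not in TAKEN:
            TAKEN.add(candidate)
            return candidate
```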

That's it. Again, what I see here is reasonable security.

Thoughts

On the one hand, you don't get to salt passphrase hashes. Which means that if anyone manages to trick a user of this auth system into revealing their ciphertexts, they'll have a mildly easier time cracking the result. And, since every passphrase is unique, they can knock out some tiny number of possibilities as they go. You also can't easily change your hashing tactic in-flight. Hypothetically, if you chose the iterated :sha256 approach from above, and it then turned out that clever people found ways to compromise that hash, you wouldn't be able to switch your tactics on a live system easily, the way you could with a user-name-oriented system. You would be able to increase the number of hashings fairly easily; just modify your hash to do more iterations, and modify your registered users' passwords to make up the difference.
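That last trick works because iterating a digest composes: running the stored 10000-fold hash through another 5000 rounds gives exactly the 15000-fold hash of the original passphrase. A quick Python illustration:

```python
import hashlib

def iterate_sha256(data, times):
    for _ in range(times):
        data = hashlib.sha256(data).digest()
    return data

# The user table stores the 10000-fold digest of each passphrase...
stored = iterate_sha256(b"demijohn shoestring", 10000)

# ...so moving to 15000 iterations only needs the extra 5000 rounds applied
# to each stored digest; no plaintext passphrase required.
upgraded = iterate_sha256(stored, 5000)
```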

On the other hand, no one will ever have the passphrase 123 with this system. And, since they didn't pick it, they presumably won't have this same passphrase on any other service they frequent, which means a compromise here won't have to result in a mad dash to change their account passwords anywhere else for fear of exploits.

The only other downsides seem to be that you can't choose a passphrase, and that if you forget your passphrase, you must create a new account.

Footnotes

1 - [back] - Because it's a hash table, and I don't bother doing any kind of locking, the system you see specified here very likely won't do for any multi-threaded use-cases. You can either add locks, or go the whole nine and replace that hash table with an external database, but I don't need either to see the basic properties of the system, so I didn't implement them.

Tuesday, December 10, 2013

I've got an idea to peel, and for a change, it's not even mine. I'm in the middle of reading through a Raskin book entitled "The Humane Interface", in which he suggests a different take on user authentication. In section 6-4-3[1], Raskin suggests that signing on to a system can be accomplished without requiring a user name. That is, instead of ... you know what, here, this is easier:

Users are doing more work than necessary when signing on to most systems. You first state who you are -- your "handle", "online name" or "system name" -- and then you provide a password. The name presumably tells the system who you are, and the password prevents unauthorized persons from using your account.

In fact, you are telling the system who you are twice. All that is logically required is that you type a password. There is no loss of system security: The probability of guessing someone's name and password depends on how the password was chosen, its length and the like. Finding the user's online name is usually trivial; in fact, it is commonly made public so that she can be communicated with. A badly chosen password, such as your dog's name, is the most common reason for poor security.

The technical argument that typing two separate strings of characters gives more security is false. If the online name is j characters and the password is k characters, the user, to sign on, must type j+k characters, of which only k characters are unknown to a potential interloper. If the password was chosen randomly -- this is the best you can do -- from a character set with q characters, the probability of breaking into the account on a single guess is 1 / q^k.

Jef Raskin -- The Humane Interface, p183

If you've given authentication systems anywhere near as much thought as I have, the trouble you should immediately see is that in a system like the one proposed above, a password must be unique to a user. Luckily, Raskin sees that one coming.

The question arises: How can you ensure that everybody has a unique password in a password-only system? What if two or more users chose the same password? The best option is to have the system assign them. This method can result in very unmemorable passwords, such as 2534-788834-003PR7 or ty6*>fj`d%d[2]. Another technique is to use a random pair of dictionary words, such as demijohn-shoestring, confirmed-tweezers or sulphur-dive. If a dictionary of 60,000 words is used, the chance of guessing a password on the first try is one in three billion, six hundred million. Using three words puts the difficulty of guessing them beyond hacking with current technology; there are 2.16 x 10^14 such combinations, and guessing and checking a billion of these a day, beyond what can be done at present, would still take about 10^5 days, or 275 years. That's reasonably secure. User-created passwords, at least those more readily memorized by the user, are inherently less secure.

When the idea of improving the interface to a web site or a computer system by simplifying the sign-on process to require only a password is suggested, it is usually rejected on one of two grounds. Either the programmers say that that's just not the way it's done, or they say that they have no control over the sign-on procedure. But someone, of course, does have that control.

Jef Raskin -- The Humane Interface, p183,184

Before I discuss this idea with myself, I have to disagree with two points. First, the odds of guessing a correct password on the first try are not 1 in 3 600 000 000, or 1 in (* 2.16 (expt 10 14))[3]. It's n in whichever-you-picked, where n is the number of users you have. With a password-only system, an attacker is no longer trying to guess a particular user's password; they are trying to guess any password already assigned by your system. Second, I'm not entirely sure that badly chosen passwords are any longer the most common reason for poor security[4], but rather utterly, mind-fuckingly stupid security design by password DB teams[5].
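Both the book's figures and that correction check out with a little arithmetic. The million-user service below is a made-up example, not anything from the book:

```python
dict_size = 60_000
two_word_space = dict_size ** 2      # 3.6 * 10^9 possible pairs
three_word_space = dict_size ** 3    # 2.16 * 10^14 possible triples


def break_in_probability(assigned_accounts, passphrase_space):
    """Chance that a single uniform random guess hits *some* assigned passphrase."""
    return assigned_accounts / passphrase_space


# Against one target account, the book's one-in-3.6-billion figure holds:
one_user = break_in_probability(1, two_word_space)

# Against a hypothetical service with a million registered users, a guess is
# a million times likelier to land on an account -- about one in 3600:
million_users = break_in_probability(1_000_000, two_word_space)
```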

With that out of the way, let's all don our white hats[6], and imagine the proposed system in enough detail to implement it.

How does a user log in?

In the context of a web application, they've got one field to fill out, "passphrase", and one button to click, "Log In". The passphrase entered is then hashed and looked up in our user database; if it matches a passphrase hash we have on file, the user ID is retrieved and used to get the specified user's program state. We then continue along letting them do what they're actually here to do. In an ideal system, this authentication step would be entirely optional, allowing it to happen at the last possible moment, when a user needed to commit some piece of data to their server-side corpus.

This is easily the biggest weakness introduced by the proposed system. Because we only have a passphrase to work with, we can only use either an unsalted hash or a per-server "salt" to keep our passphrases out of plaintext. If we salted per-user, that lookup based on the password would take a long time, scaling at O(n) with the number of users, with some fairly ridiculous constants tacked on. That's dangerous, because we're suddenly gambling that the rest of the application our auth system is embedded in won't allow any injection attacks, or leak database information any other way. Granted, because we're guaranteed to have unique passwords, such a disclosure isn't as easy to take advantage of as it might be, but it's still a concern.
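The flip side of the unsalted (or server-salted) hash is that sign-in stays a single table probe, however many users you have. A minimal sketch, with names of my own invention:

```python
import hashlib

USERS = {}  # digest -> user state; the table is keyed on the passphrase hash


def digest(passphrase):
    return hashlib.sha256(passphrase.encode("utf-8")).hexdigest()


def register(passphrase, state):
    USERS[digest(passphrase)] = state


def look_up(passphrase):
    # One hash plus one hash-table probe, regardless of user count. Per-user
    # salts would instead force a linear scan over every stored record.
    return USERS.get(digest(passphrase))
```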

What happens when the user enters a passphrase that isn't currently assigned?

There are really only two reasonable possibilities:

They get an artificial delay, followed by the above message. The standard log-in procedure also needs an equivalent delay; otherwise attackers could abort a guess before the response comes back, sidestepping the delay in practice. It doesn't have to be long; a second or two would be enough to prevent the kind of guess-hammering I've got in mind, and it wouldn't be too annoying to users provided we put in a little spinning graphic in the meantime[7].

They get a "logged in" response with the default state in place, and no other warning. Effectively, an "incorrect" passphrase entry becomes a registration. Users might get annoyed at this one, since it would seem at first that their program state is gone.

Having thought about this for a bit, it becomes clear that there's only one reasonable possibility, and it's the first one.
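A sketch of that first option, with the delay shortened for demonstration (a real system would use the second or two discussed above):

```python
import time

ASSIGNED = {"demijohn shoestring": {"id": 1}}
DELAY_SECONDS = 0.01  # a second or two in production


def attempt_sign_in(passphrase):
    # Success and failure both take the same fixed time, so an attacker can
    # neither abort early nor distinguish the outcomes by response time.
    start = time.monotonic()
    result = ASSIGNED.get(passphrase)
    remaining = DELAY_SECONDS - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
    return result
```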

How does the registration process work?

This might be context sensitive by application. For instance, Deal lets users play entirely anonymously. I can easily imagine a system wherein after 10 minutes of play time, a user just automatically got an in-game notice with a passphrase that would let them resume where they were. Because the server controls all the steps to a registration, it can happen behind the scenes with some game time effectively taking the place of a Captcha. This could be used with any system that lets you start off anonymously; wikis, bulletin boards, forums, etc.

That system, elegant as it might be from the implementation and usability side of things, wouldn't work for something like GoGet, where the only possible reason to use the application is to come back later and check what you put in the first time. In that situation, you'd want the usual up-front "Register" button that would do the Captcha thing to make sure you're not a robot[8], and hand the user an account before they start doing stuff. Really, this might be re-designed too, though; have the system start you off on a blank check-list, with an unobtrusive "Log In" form at the top of the page and an added "Save" button, which would register you and hand you a passphrase with which you could access the list you just made.

What do we do when passphrase exhaustion occurs?

Granted, 216 000 000 000 000 is a large number, but it's not infinite, which means some clever bastard out there is going to find a way to cut it in half a few times for the purposes of guessing. And it doesn't take very many halvings to get that down to a tractable level. We have to deal with this problem a good deal sooner than "passphrase exhaustion", though; if we ever get to the point where all passphrases are assigned, an attacker gets access to an account no matter what they guess. Worse, if we did something naive like hand out 2-word passphrases until they ran out, then an attacker who registers and receives a 3-word passphrase would know that any 2-word combination of our dictionary words will give them access to an existing account. We'd really want to start generating longer passphrases well before we ran out; at something like 10% exhaustion, at a guess. Or better yet, don't limit passphrase length to two words; make it n random words, where n is something like

That should give attackers less purchase, and scale naturally with additional users.
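Rough numbers behind that 10% threshold, purely as illustration (again assuming the 60,000-word dictionary):

```python
dict_size = 60_000
two_word_space = dict_size ** 2
three_word_space = dict_size ** 3

# At 10% exhaustion of the two-word space, a single random two-word guess
# already hits a live account one time in ten...
accounts = two_word_space // 10
at_threshold = accounts / two_word_space

# ...whereas switching to three-word passphrases at that point drops the
# per-guess success rate back down to one in 600,000:
after_bump = accounts / three_word_space
```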

What Have We Got?

Switching briefly over to my black hat, I can't see an attack on this system that would get you any traction above and beyond traditional password-based implementations[9]. That doesn't mean there isn't a way, of course. I'll present the idea to some discerning and devious thinkers to see what they can come up with. Otherwise, we've got some interesting properties here, mainly because the server-side is the one putting everything together. We have a passphrase system that

will not suffer the failure mode that someone will use this same passphrase everywhere[11], meaning that even if the system is compromised, all an attacker has is access to an account for one particular service

will never have to worry about a passphrase as shitty as "password" or "12345"

5 - [back] - Such as the refusal to use appropriate hashing algorithms, or inadvertently opening the door to various injection attacks.

6 - [back] - Mine's a tuque because it's cold out and I'm in Canada, but you should feel free to don your hacking fedora, trilby, stetson, what-have-you as regionally appropriate.

7 - [back] - Most authentication systems I interact with take longer anyhow.

8 - [back] - Or not, really. Depending on how much traffic your system can handle, how much you care about preserving disk space, and whether you give your users the ability to use SMTP facilities, you might get away with putting in an artificial 3 or 4 second delay before registration completes rather than trying to prevent automatic sign-ups. That's what I plan to do, in any case.

9 - [back] - Apart from the situation where our ciphertext passwords have been leaked. Which, granted, isn't a high bar, but still.

10 - [back] - As in the situation in Deal that would automatically hand the user a passphrase ~10 minutes into active use of an unregistered account.

12 - [back] - Since the passphrase acts as both a name and password, if you forget it, you just have to start a new account. Allowing the user to save as much of their data as possible locally would work to alleviate some of the pain from this.

Sunday, December 1, 2013

I've spent the past while putting together a minimal, single-threaded asynchronous server to simplify the deployment process. Almost done, and you can see the progress on this branch in this subdirectory. The remaining stuff left to do is:

Better Errors. I need to put together an appropriate assertion mechanism. Straight-up assert works fine in a multi-threaded context, but does some mean things when you've only got the one thread. Normally, it wouldn't be that big a deal, but I have to special-case my handler-case statements for SIMPLE-ERROR in order to allow shell interruption. Unfortunately, assertion failures and shell interrupts are both conditions of type simple-error, which means that if I do it naively, I either let both through or neither. What I'm planning to do is define a macro named something like http-assert, which will throw a type of error I can then safely convert into an HTTP-400 response.
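In Python terms, the http-assert idea amounts to raising a dedicated error type that only the HTTP layer converts (names hypothetical; the real thing is a Lisp macro and condition type):

```python
class HTTPAssertionError(Exception):
    """Distinct from every other error type, so a handler can catch exactly
    this and turn it into an HTTP 400 without swallowing interrupts."""


def http_assert(condition, message="Bad Request"):
    if not condition:
        raise HTTPAssertionError(message)


def handle(request):
    try:
        http_assert("user" in request, "missing 'user' parameter")
        return (200, "OK")
    except HTTPAssertionError as e:
        # Only our own assertion failures become 400s; a catch-all on the
        # general error type would also trap shell interrupts, which is
        # exactly the problem described above.
        return (400, str(e))
```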

Basic Static Files. Currently, I'm serving static files through nginx only. Which is the efficient way of doing it. However, one use-case I'm thinking of for Deal is that of a small, geographically disparate team setting up a private server for themselves. It's kind of a pain in the ass to have to set up a reverse proxy for that situation, so it would be nice if House provided a basic file handler for people to use.

That's ... going to get complicated though. A first crack at the implementation is here and here. That only works for text files so far[1], and it only works for a laughably small number of mimetypes. A more complete map can be found here or here, but I'm not going to be anywhere near as thorough; remember, this is an edge case. This lightweight server is not in the business of serving out static files in an efficient manner. That's what things like nginx are for, and I've got no doubt they're doing a better job than I could.

Touch Ups. Sessions don't expire yet. And when they do, I'll want to give them the same kind of behavior hooks that I've got going for new-session. There's also the non-trivial matter of porting the rest of the Deal system to work better with the House server, but I get the feeling I'm most of the way there already.

Famous last words, right?

Footnotes

1 - [back] - I'm still trying to iron out kinks; in particular there seems to be some kind of character encoding issue left in the way that I just can't get my head around. I'll be asking on SO shortly.

Saturday, November 30, 2013

The underlying implementation of the SVG DOM and the HTML DOM is different in current browsers you see, so the standard HTML5 drag event doesn't apply to SVG nodes. Luckily, mousedown, mousemove and mouseup are supported, so you'd think it would be a straight-forward task to implement the fucker yourself. You probably imagine, as I did initially, something that takes a selector and a list of callbacks, and implements something similar to jQuery's .draggable() in ~30 lines of code by

binding a callback to the targets' mousedown, mousemove, mouseup events

storing the initial position of the element, and its delta from the mouse cursor

preventing default on mousedown

in addition to firing the callback, moving the target element by manipulating x and y coordinates using the current mouse position, initial delta and initial position

Maybe that's possible for the simple cases, but it's the edges that'll get you. And unless an implementation happens to dull all your edges, it's not really good enough.

Minor speedbump. What if you need to drag multiple kinds of elements? If you were dealing with HTML elements, it wouldn't be such a big deal, but SVG elements have different properties to represent their coordinates. Rectangles and similar have an x and y that represent their top-left corner, circles and ellipses have cx and cy that represent their center, text elements have an x and y too, but theirs represent the bottom-left coordinate, and I'm not even getting into the path elements. Bottom line, if you want to implement something that works, you're using transform settings. Also, you're not doing it naively through setAttribute, unless you're lucky enough to have a situation where you can guarantee that no other transformations will be applied to any draggable element[1]. The snippet that handles that particular piece looks like this in my codebase

If you're one of the sad bastards who don't have macros at their disposal, I guess you're doing that or something fairly similar manually[2]. Like I said though, no big deal either way.
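For illustration, here's the transform-juggling logic sketched in Python over the raw attribute string. The real code is Lisp-generated JavaScript; this just shows the idea of replacing only the translate component while leaving other transforms alone:

```python
import re

def set_translate(transform_attr, x, y):
    """Replace (or append) the translate(...) component of an SVG transform
    string, leaving rotate/scale/etc. untouched."""
    replacement = "translate({},{})".format(x, y)
    if re.search(r"translate\([^)]*\)", transform_attr):
        return re.sub(r"translate\([^)]*\)", replacement, transform_attr, count=1)
    return (transform_attr + " " + replacement).strip()
```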

Medium-sized speedbump. If you want to do this on an arbitrarily sized element, specifically a very small one, you'll discover that moving your cursor at even moderate speeds is enough to escape from the mousemove event and leave your draggable behind. One possible solution here is to also bind the mouseleave event and hope you never need to move fast enough to escape that too. Another approach is to have your chosen mousedown set up a global mousemove event, on body or html, use that to drag your element around, and have a global mouseup waiting to cut it off as soon as you're done[3]. A bit hacky, but doable.

Slightly larger speedbump. If you want to make these bindings switchable, you're in for a bit of a harder time. Not switchable as in "different objects should be able to do mildly different things", that's a given. I mean like "it should be possible to jump into a separate interaction mode where the same object does something mildly or wildly different under certain circumstances". If you want that, you need a level of indirection in your listener tree that you can swap out with other functions, and that level of indirection is going to be calling an externally specified function on each event trigger. Basically, you'll want to be working with hooks rather than listeners at this point[4]. I'll keep you posted on how this one goes in real life.

Large speedbump. Suppose you want to be able to use your dragging events and a mouseup event on the same element. Better yet, suppose you wanted to implement drag/mousedown interactions, but let the user decide what layer to apply them on at any given time. Imagine a situation where you had the elements foo, overlapping bar, overlapping baz, and when a drag or mousedown hits, you want to let the user decide whether they want to be click/dragging foo and/or bar and/or baz. Near as I can tell, there is no way of implementing this elegantly in terms of listeners on individual elements. What you need, if you want this, is a central listener that delegates particular events out to some intermediary functions, or eats them[5] as appropriate.

Keep in mind that the last two speedbumps I hit here probably won't be felt by most people going in the same direction. Still, I went into this figuring it'd take me a half hour at the outside to implement something workable. It ended up taking me the rest of the day, and will probably cost me another hour or two when I get back in on Monday.

Such is development sometimes, I suppose.

Footnotes

1 - [back] - If you are going to have other active transformations, using the setAttribute method would overwrite those, which is why it's a bad idea.

2 - [back] - If you are doing that, I should point out that the only reason I went the try/catch route here is that both =>> and =set-attribute take either an element or a set of elements as their first argument, and I wanted =translate to do the same. Since you probably won't have the same situation, you're likely better off with if/else.

3 - [back] - You wouldn't want to do this naively either, unless you knew there'd be no other mousemove events on that top-level element. If you did have that, you'd want to set up a hook that you could change out rather than messing with event listeners every time you dragged something.

4 - [back] - It just occurred to me that you might have no idea what I mean by "hook" in this context. Basically, something like this:

If you have something that looks like that, you can change some of the behavior of your global mousemove event by assigning a new callback to the mouseMoveHook variable. I'm sure it's been used elsewhere, but I learned the term "hook" from Emacs, which provides standard event hooks in a bunch of different situations, and does it more or less this way, modulo some syntactic sugar and multiple hook support.

5 - [back] - In the case of the trailing mouseup event after a drag concludes.

Tuesday, November 26, 2013

Step one was getting a basic system running. In fact, we've got two we're dealing with concurrently[1]. Both are kind of rickety because I'm still trying to work out what the essential parts of the approach are. One deals with OS threads and the other is built on top of cl-async. I'm liking the second one better for now, but am once again reserving judgment until I see how easily they explode.

What we want, being that we're Lispers and polyglots to the last, can best be described as "Emacs for Diagrams". And that doesn't seem to exist. There are a number of general-purpose vector editors out there at various levels of readiness, but those are more like "MS Word for Diagrams". There's a "Sublime for Diagrams" floating around in the form of yEd, and one piece of fallout from the Russian space program is basically "Notepad++ for Diagrams". Microsoft's own flowcharting suite is something like "Eclipse: The Extra Shit Version for Diagrams. Also You Can't See Our Source Code, Which Is Probably For The Best Because It's Mostly Garbage You'd Laugh Out Of Your Company In Any Other Context". Finally, if we want to use Emacs for diagrams, the only thing even remotely workable seems to be artist-mode. It's nice, I guess. Certainly better than prostate cancer. But that's not quite what we're looking for.

The thing we are looking for needs to be

A visual editor. This needs to be a tool to let humans construct diagrams and related diagrammatic artifacts for a variety of media.

A diagram editor. We're not looking for a general-purpose editor. In particular, there are very, very few things we'll be rotating, nothing we'll be coloring[2], and we only need one font at a fairly small variety of font sizes and weights. We'll also never be dealing with embedded bitmaps, gradients, brush-strokes, or shapes more complex than rounded rectangles/ellipses. You'd be surprised how much junk that lets you cut out.

Fast. It shouldn't take longer to draw a diagram with this thing than it would take to describe it. This means a keyboard-oriented interface with easily re-bindable keys, reasonable performance, reasonable start-up, and no half-baked expert system getting in your way by trying to guess what you're trying to do.

Scriptable. It should be possible to simply and easily construct sequences of operations to be summoned later. It should be easy to run automated queries and transformations on the programs' output and potentially edit the result in a visual manner. It should be fairly simple to select a particular subset of elements and manipulate them in some way.

Flexible. We have no idea what the visual formalism we're finally going to adopt will look like. It might require fine placement of wires and connectors, it might involve different kinds of connections, and it will certainly involve multiple different connections between two nodes[3]. It also can't make any kind of semantic assumptions about what the content we're editing means, because that might change as we go through the discovery process of what kinds of processes and situations we want to formalize. This also means some sort of proper macro system, an end-user specifiable final/intermediate representation, and source code that we'll be able to tear apart if we need to.

The various editors I list above are split firmly into two camps. First, there are editors aimed at non-programmer humans for general vector construction. They're fairly flexible, but because they're for mouse-pushers, they don't provide a good intermediate representation for our purposes, don't particularly care about ease-of-use or keyboard-centricity, and don't make it particularly easy to script parts of your workflow. In the second camp, there are specialized flow editors aimed at programmers or technical managers, which do provide a good deal of flexibility on the keyboard binding front, and provide appropriate output, but also make all kinds of assumptions about what you're doing, why, and how you should want to go about it.

If you know of something awesome that I missed, let me know, but for the moment we're rolling our own. I've been working on it for about half a week solid now, and it's taking shape pretty quickly. I showed off the pre-alpha to some folks at the Erlang conference last week, and got reactions that didn't look like outright disgust. Once it takes some usable shape, I'm planning on posting some videos for you, and possibly showing it off a bit to the FBP group.

I was also going to talk a little bit about fact bases, and the inherent strength of simple models, but I think I'll leave that for another day. When I get permission to Free this editor, I'll have quite a bit more to say about that in any case.

Footnotes

2 - [back] - Except for UI purposes, obviously. The various selection/manipulation layers are going to have distinct visual cues that rely on color, but I don't count these as part of the diagram proper, even though they may be visually associated.

3 - [back] - Which incidentally kills any use we might get out of svg-edit's or Inkscape's connection tools. Each of them assumes a single connection between two shapes, anchored at each center-point. No idea if said connection can be directional or not, but we've already established that it won't do, so I'm not looking into it.

Second, here's J. Paul Morrison talking about the history of FBP. He's the guy who first discovered the idea, and formalized it while he was working at IBM in the early 1970s, and he talks a bit about what led to the insight. Near the end, he also demos a visual editor he's currently working on.

If you really just want the audio and slides, you can get the relevant files right here. You can probably get about 60% of the gist from the audio, but all three of us used some relevant visuals in our presentations. In the case of Paul T. and Paul M., a powerpoint presentation, and in my case some noflo code. I'll try to put up .torrent links for the HD versions of the videos when I figure out the logistics, and the third video will go up next Tuesday.

There were no videos from the second meetup[1], and there won't be any from today[2]. We're basically trying to figure out how to pay for them; one idea I'm looking into is getting a monthly kickstarter going to let the market decide if there's enough interest to produce videos for a given month. I'm not sure how much advance notice we'll need, or how far we can actually plan out talks, but I'll keep you posted.

Sunday, November 10, 2013

EDITOR'S NOTE:
This is a piece I wrote about three years ago, then sat down to copy-edit and polish up today. It was very focused on a few particular, topical anecdotes that my then-co-workers were throwing about. Having read over it equipped with three extra years of hindsight, I'm not sure I'd still be quite as vehement about my opinions as I was here, but I don't disagree on any large points.
Sun, 10 Nov, 2013

I'm writing this post because I've seen one too many comments of the sort "Well, if PHP is so shitty, then why is it so popular?". Typically this is the main claim in a rebuttal to "PHP is a shit language"[1], and the end result seems to be that a lot of people just sit back and think "Oh yeah, I guess it is popular. It must not be shit, but rugged." Substitute "manly", or "quirky" or similar if you like. As in, yeah, it has flaws, but they're endearing flaws. Like mysql_escape_string being the deprecated precursor to a different function named mysql_real_escape_string (itself also deprecated). And static functions having an odd interpretation of "this". And the complete list of deprecated functions being longer than my average blog post thanks to the fact that they never actually get removed.

How do technologies get popular?

This probably won't turn into an article where I mention Common Lisp[2], by the way. The idea of language popularity is almost completely irrelevant to what I'm discussing here; this could be a discussion of any sort of tool and how you'd go about picking one for a particular purpose. The source of popularity is obviously difficult to determine because relatively few things get there, so lets back off a little bit.

What does it mean for a tool to be popular?

Does it have to solve some specific problem better than other tools? Does it need to be better/cheaper/faster than solving that particular problem by hand? Does it need to be more fun or easier to use? Does it have better marketing/sales/promotions than the competition? Is it the first tool to solve a problem sufficiently well?

No. To all of the above.

A tool is popular when enough people have chosen it to perform a given task. Any of the above points contribute to a tool getting chosen, but for each, you can find a large number of counterexamples. Both tools that lacked it and became popular, and tools that had it but went nowhere. So no single element of that list of points is going to make or break you.

Lets look at it from the other side instead though.

What does it take for someone to choose a given tool?

That's a simpler question, but it should get us the same answer. If "popularity" is "being chosen by enough people" then figuring out "how do most people choose" should tell us "what it means to be popular".

A big reason to choose a tool[3] is that it'll get you a job. Again, this has nothing to do with language choice. Lots of people claim "I learned [x] to get a job", and [x] can be "Java" or "C#" with the same probability as "MS Word", "Photoshop", "Wordpress", "typing", "cooking" or "how to drive a bus". So one reason people choose is to get a job. Before we drill down to the next level, any other reasons?

For fun. I know about as many people who paint/design professionally as those who just do it on the weekend to relax[4], and I know plenty of people who just plain like to program. For fun. Like, on nights and weekends. Granted, not everyone works this way, and not every tool has this effect on people to the same degree[5], but it's still one possible reason.

Fitness of purpose maybe? Well, not in practice, no. Fitness of purpose is how you pick a specific class of tool. Which is to say, that's how you know you want a rotary cutter as opposed to a reciprocating saw or a pen and not a sable brush. It still doesn't tell you that you want a Dremel as opposed to a DeWalt, or a Pilot instead of a Bic[6], or ahem, a Lua instead of a PHP. It's also not as high a bar as people might think. I try to be objective about it, but from observation, most people tend to treat "fitness of purpose" as "what tool do I currently know how to use that could sort of be put to this use?" rather than "what is the most effective tool for the problem I'm solving?"

Popularity is the only other big reason I can think of that tools get chosen, but I don't want to recur just yet, so we'll leave that one alone. Back up a bit.

Tools are popular if they'll get you a job.

When will a tool get you a job? Well, when enough employers start putting it on their job listings. Until that point, it's not worth learning it just for that. Tools before that point mostly get adopters that come by because it's fun for them or the tiny minority that have performed a sufficient comparison and found that tool to be the best fit for them out of the ones they compared. In other words "I choose tools that will get me jobs" translates to "I choose tools that employers choose".

So how do employers choose tools?

Well, here, I can actually share some small amount of real-world data. Anecdotal, so take it with a grain of salt, but enough to form a theory. If anyone wants to try being the experimentalist on this one, be my guest. If you did it well enough, I'm sure it'd be publishable.

There are a few major points that impact on what an organization does in terms of technical tool choice[7].

The biggest one is "We'll keep using what we're using". Which is to say, if the previous project turned out to be successful, there will be a big push to use the same tools on the next one. Interestingly, this happens even if the success of the last project had everything to do with the team pulling constant overtime, and nothing at all to do with tool choice. The tools can be actively detrimental to the goal and still reap a rep-boost if the project succeeds. This doesn't really answer anything. How does a company choose their tools on the first project?

The first one at least has some expert input, but works oddly. You don't get choice bias towards the "best"[9] tool, but rather the most popular. If Bleeb is "better" than Blub, but only two people on a team of ten know Bleeb whereas everyone knows Blub, then the team uses Blub[10]. In other words, no help there; this criteria will get you the popular language without requiring any level of quality or rigor in its design principles.

The second one is just plain odd, and before sitting back and observing, I would have sworn that it would be a really weak reason to use a tool. Companies seem to not care though; if a given preferred vendor uses tool [x] for task [y], then the company tends to use tool [x], even if it's ridiculously awkward to actually use. The vendor is also a company, so they use this same process for picking their tools, so substitute that back once we're done.

The third one is obvious, I hope, but it also boils down to "popularity" because very few clients know the problem space. Typically, they listen to the first/best sales people that talk to them. They're a force though; if your target client wants it on MSSQL and .NET, then it'll either be that or it won't be.

The fourth one is the previous answer on a macro scale; "We'll keep using what we're using (as an industry)". In other words, if there were lots of successful companies using tool [x], we'll use it too[11].

How does the first company in an industry pick their tools?

Regardless of any other decision factors, the answer is almost by definition "before they really know what problems those tools will have to solve", and I've already discussed that one pretty thoroughly. There are no clients, so they can't pick that way, there are no other companies or vendors so there is no industry standard. They might look at what similar industries do. Would they use the best possible tool for the job though? Well, no. They're likely to go the "What do our developers know how to use?" route, which we've already discussed above.

how easy is it to hire people that know how to use this? (easier the better)

do we have existing code that we'll need to interface with? (if so, weigh whatever we used on that project favorably)

have we used any technologies in the past for similar purposes to what we're doing here? (if so, weigh those favorably)

and the big one

if I choose this, and anything blows up, will I still be able to make the case to non-technical humans that it was the right choice? (if not, weigh it very unfavorably)

1, 2 and 3 mean that the more popular the language is, the more chance it has of getting entrenched[14]. 4 means the same, but this time "popular" means overall, not just within the tech community. A non-tech has heard the name "PHP" before, enough times to associate it with "the web" and "Facebook" and "Wordpress", but probably hasn't looked into it closely enough to catch complaints from developers[15].

The end result is that, in a sufficiently large company, it's safer to use a popular tool that's poor in the technical sense than it is to use an excellent tool no one's heard of. And that's also been discussed thoroughly, and this time it wasn't even by me. The decision is made purely on the basis of popularity once again.

Shit

We just bottomed out our recursion. Just in case you haven't been keeping score, literally every single level at which a tool can be selected is likely to be filled by the most popular tool in some context, and this popularity never requires, therefore never implies, anything other than popularity. One more time: at no point in the process of selecting a toolkit do most choosers even try to see whether it's shit or not. So I don't care how popular your steaming pile of imperative, counter-intuitive security-exploits-waiting-to-happen is; it's still shit.

Don't let me stop you from eating it, but I remember what it tastes like so I won't be joining you. Or shaking your hand afterwards.

Footnotes

1 - [back] - Which I happen to agree with, actually. If you're looking for details on what languages(Spoiler warning) I'd recommend learning, you're better off reading this instead.

5 - [back] - Point of fact, only one person I know drives a bus for fun. It's been his obsession to work for the TTC since grade 8. I haven't heard from him in a while, but I still remember his room being full of papercraft Orion 3s, and I'm pretty sure he spent every internship opportunity he had on some streetcar route or another.

6 - [back] - The pen fanciers among you are probably ready to tell me that these are the worst possible examples; they're just the most popular common brand pens, not the really good stuff, where quality can make a difference. Really, I should have used foo and bar. You can go now, if you ponder that point hard enough, you pretty much got the gist of the article.

7 - [back] - I'm reigning it in a bit to software tools because that's what I have experience with, but this still seems like it might be a general trend; again, experimentalists welcome.

8 - [back] - In varying order, in my experience, but always on these things.

9 - [back] - Which I'm still quoting. In a book, that's called "foreshadowing". In a game or movie, it's called "setting up the sequel".

10 - [back] - And you get a varying amount of childish name-calling and dismissiveness towards Bleeb.

11 - [back] - Again, disregarding whether the tool had any effect at all on success.

12 - [back] - And at this point, all bets are off, I'm just theorizing, because I haven't observed the decision making process in an industry-defining company. That would be an interesting research project though, let me know if you've got one lined up.

13 - [back] - Not individual developers, but groups of corporate developers complete with leaders, technical or not, who are ultimately responsible to non-tech people further up the hierarchy.

14 - [back] - And note that both points are completely unrelated to how "good" a language is, and entirely dependent on how popular it is.

15 - [back] - Or to determine whether there's a lot of overlap between "good developers" and "developers who complain vocally about PHP".

My initial assumption was that I'd just make it the usual general purpose HTTP server, with a few pieces focused on my end goal, but now that I'm waist deep in the guts of this thing, it occurs that I could take it further if I wanted to. For my current purposes, a relatively small subset of HTTP would do just fine. Here are the points I've noticed:

Deal never runs into the situation where both a GET and a POST parameter have the same name. As a result, I can do the Hunchentoot-standard thing and mangle parameters so that get-params and post-params are actually kept in the same associative structure. That simplifies the structure of both request objects and of handler functions, at the cost of differentiating between parameter types.

Because there's never an ambiguity between GET and POST parameters, I don't even really need to differentiate between GET and POST requests. That'll simplify the class tree of incoming requests. Specifically, it'll cap its depth at 1.

There are only two handlers that allow session-less requests, the rest require that the requester have a session[1], so what I could do is just make each request start up a session if a valid session token isn't passed in. That'll occasionally burn a few milliseconds of extra work[2], but it'll remove the need to assert session presence at the application level.

Because Deal is already targeting recent browsers[3], there's no need to support any older version of HTTP.

Finally, because there's only one handler that's going to need to handle extended connections, and the rest will always return immediately, I can more or less ignore request headers. In particular, I never care about Connection[4], Accept[5] or Cache-Control[6].

Acting on each of these assumptions is going to narrow the usefulness of the server, but also significantly simplify it. The optimum is eluding me, though I suspect it might be "as simple as possible" given my end goal for this particular project. Interestingly, even if I decide to do the simple thing for each of the choices outlined above, the end result will still probably be a useful general-purpose game server.

The only really funny part is that, now that I've thought it through, implementing this system as a websocket server[7] seems like it would yield an even simpler architecture. I'll save that one for later though. Step one: simple SSE+session based engine, make sure it works, then rip out its guts again and see what the other way looks like.

Footnotes

1 - [back] - And most of those require that the session be associated with a particular, already-instantiated game table, but optimizing for that seems like it would be going too far.

2 - [back] - When someone who just wants to watch rather than play first checks out a table.

3 - [back] - We're using quite a few features of HTML5 to make the whole thing playable.

4 - [back] - That one that I mentioned will be assumed keep-alive, while the rest assume the reverse.

5 - [back] - You're expecting text/html on the main request, text/event-stream on the subscription and application/json on the rest. If you can't handle those, you won't be able to play in any case.

Wednesday, November 6, 2013

I can finally tell you what I'm doing at work. A friend has suggested that I just not mention that there are things I still can't talk about, so I won't. In any case, all the interesting stuff is fair game. Apparently, I'm allowed to talk about it in much greater detail than I can write about it because a persistent record is still frightening to some humans. Progress, I guess.

Damn it feels good to finally get that off my chest. I have no idea how Yegge stopped blogging. I'm guessing he hasn't, but rather just stopped publishing the results. I've been writing about one thing for the past little while, and the number of ideas I need to discuss with the rubber duck at some point is staggering. If my output rate were zero, I would probably have a pretty tenuous grip on my sanity.

This is heading off topic. Once more, with feeling

Flow Based Programming in Common Lisp

I'm not sure what I think about it yet. Lets just be clear about that up-front. You'll find plenty of FBP True Believers on the appropriate Google group[1], but I am not one of those. The fact that I'm willing to throw a couple years behind the idea implies curiosity, nothing more.

A matter quite apart from the underlying structure is our implementation of it, and I am certain that it's over-complicated. Granted, based on all the others I've seen, ours is the least over-complicated, but still. That's something I'll aim to fix, with a personal project as a last resort, if I can't convince anyone else about it. But I digress. Again.

Here's why I'm curious. This is what a basic web server looks like in flow-based terms:

And here's what it looks like once you add SSE capability to it:

and finally, here's what it looks like when we add sessions into the mix

The above is by far the most useful set of images I've got for understanding what's actually going on behind the scenes of a page-view. I've worked through the principles in multiple languages and spent quite a bit of time thinking about it, but until I sat down to draw it out, it didn't feel like I really understood what needed to be done. You probably don't know the same languages I do, but the above is still likely intelligible to you. So that's why I'm curious.

Flow Based Programming vs. Functional Programming

Before I go, I want to tackle this, because several people I've talked to have gotten tripped up in the comparison. Including me. I ended up deleting a few lines from this post that said

The underlying problem for my lack of "wow" reaction might actually be my usual languages. I'm used to thinking about streams moving between inter-connected, lazy processors. That's the main way I conceptualize Haskell. In fact, if you squint just a bit, it's the way you can conceptualize most functional programs, pure or not. -Inaimathi

The difference is that functional programming focuses on partial conceptual separation, whereas FBP takes the isolation concept a few steps further by enforcing complete conceptual separation as well as complete temporal separation. Here's the accompanying thought experiment, just to clarify what I mean by that.

Suppose you were writing in a functional language and wrote the following:

Firstly, notice that, while the functions are conceptually separate units[2], foo still has to know about bar and baz. That prevents total isolation. Yes, you can re-define bar and baz in-flight if your language is dynamic enough, but you need to have them both defined and you need to have them named bar and baz before foo can actually run.

Secondly, note that what's happening there is most likely a bunch of synchronous work. That is, when foo calls bar and baz, both calls complete and then foo returns the return value of baz[3]. If you wrote the equivalent in a pure-functional language, the actual work may happen in a different order than you see it written out, subject to what the compiler can prove about the behavior of the functions involved, but bar and baz will still complete before foo does. If they didn't, you'd get some unexpected behavior in any callers of foo.

Now, lets take a look at the apparently equivalent, Lisp-flavoured, FBP-style program.

First, notice that foo doesn't know anything about bar or baz, or anything about the existence of either. At some point during its execution, it outputs two messages to some implementation-dependent output structure, but critically foo itself is not responsible for delivering those outputs to their consumers. That allows for total part-agnosticism; you really can shuffle parts around with pin-equivalent parts and have the result work. In functional or actors-based systems, you can almost do the same; the exception is that since each sender/caller has to name targets in some way, you need to change small chunks of code in places where functions/actors interoperate.

Second, note that there's nothing in this system about the timing of bar and baz. This omission includes the fact that both, either or neither may run before foo completes. In this model of the world, if bar takes a while to run, neither baz nor foo, nor any of their callers are prevented from further operation. The second critical difference is, essentially, that asynchronous operation is the norm.

Monday, October 21, 2013

Just a quickie to share a tweak I had to make to my xmonad.hs. Not sure if there's a better way to do this, but hey.

The goal was to finally, actually get working hibernation on my laptop. I usually use it in short bursts, so I just got used to shutting it down between sessions. However, I recently started using a work laptop running Windows 7 and hibernation has been useful there[1], and I'll be damned if the non-free shitbox is going to have a mildly useful feature that my machine doesn't.

The way you get a Debian machine to hibernate or suspend is with the appropriately named pm-hibernate and pm-suspend commands[2], so I figured this would be a fairly easy key binding

Unfortunately, the pm-* are root user commands. And Xmonad doesn't automatically prompt for a password when you do something like su -c pm-suspend. And, unlike with sudo, you can't pass a password into su. So that approach is right out.

I googled around for alternatives for a little while, but What I ended up doing was finally adding myself to the sudo group, and defining this function for my own nefarious purposes

Footnotes

1 - [back] - Granted, because the boot time on that machine is something like 5 minutes instead of the 12 seconds I'm used to waiting, Hibernate is a goddamn necessity, but I digress.

2 - [back] - Ideally, I'd just be using hibernate, but there are some issues. I've upgraded my ram since installing the OS, which means that my swap partition isn't big enough to store a memory dump, and I can't seem to resize it with gparted, with or without swapoff/swapon magic. Luckily, I've had a larger hard drive waiting for me to crack open the box and configure it to my liking, so I'll just do that this week rather than procrastinating. In the meantime though, I'm suspending instead.

Ruby and Erlang each come with their own modes, and recent Emacs versions ship with a built-in Python mode and shell. Smalltalk uses its own environment (though GNU Smalltalk does have its own mode), and I'd really rather not talk about PHP. If you're writing in it, chances are you're using Eclipse or an IDE anyway.