Tuesday, May 20, 2014

It's been a while. Sadly/happily, I've been busy working on the house write-up for AOSA 4 and thinking pretty hard about a cl-notebook-related submission to this, so any spare writing time I've had has been getting pointed directly away from this blog. This is going to be a pretty incoherent update, mainly focused on getting various things out of my head rather than making any kind of sense. It's a quick cross-section/pressure-release related to things I've been thinking about lately, not a catalyst for learning or external discussion.

cl-notebook/fact-base

I'm noticing that there's a very common pattern in the cl-notebook use-case for fact-base. Specifically, the deletion of a record followed by the insertion of a very similar new record.

What I'm trying to express there is a change in an existing record. It's mildly annoying for two reasons. First, it means that I need to effectively store every change twice in the form of a before-and-after shot (which, granted, I kind of need to keep doing if I want easy reversibility on edits). Second, and more importantly, it means that a history interface for something being backed by a fact base is going to need to be more complex than I'd like. Instead of naively displaying states, I'll need to make sure to intelligently handle the situation where a user moves history to a point between a deletion and new insertion of a particular fact. I'm going to shortly be making such an interface for cl-notebook, so this is going to get painfully relevant very soon. This has me seriously considering adding a third token to fact-base, :modify, specifically to address this.

Memory

Whatever language you're currently using, you're ultimately managing memory, whether manually or through various schemes that do it on your behalf. The goal of all of these approaches is twofold:

make sure that a new allocation doesn't clobber a chunk of memory that's still being used by something

make sure that you never build up enough memory junk that you can't allocate a new block when you need to

The general approaches seem to be

Not

For sufficiently short-running programs, a reasonable approach is to just build up junk constantly and let it come out in the wash after the program ends. This is a pretty narrow use case, since you can't clobber memory held by another program, and you can't use more memory than exists on the machine running you, but it's sometimes an option. I'm... not actually aware of a language that takes this approach.

Manual

This is the C/C++ approach. You, the programmer, get to declare exactly what pieces are being used and when. Whether that's on a procedure-by-procedure or datastructure-by-datastructure basis, you're ultimately responsible for memory yourself directly. The upside is that there's no garbage collection overhead. The downside is that every procedure/datastructure has to manage memory acceptably, otherwise things start blowing up in various hard-to-diagnose ways.

Mark-and-sweep and variants

One of the automatic memory management approaches. The general idea here is

keep a free memory list, and keep track of all things allocated by the program

every so often (either at a time interval, or every n allocations, or maybe just when you try to allocate memory and your free list is empty), traverse the list of all things and free the ones that aren't being used any more

A variant on this, known as generational garbage collection, keeps several buckets of allocated things rather than one. You partition objects based on how long they've been around, so that you don't waste much time traversing long-lived data every time through. This is the variant that I've seen discussed most often, and I kind of get the impression that it's also the one getting the most research time thrown at it, but I'm not entirely sure why. Oh, incidentally, languages like Common Lisp and Java use this one (CPython uses it only as a backup to reference counting, for collecting cycles).
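For concreteness, here's a toy mark-and-sweep sketch. The structure names and representation are invented for illustration; a real collector works on raw memory and free lists rather than on structs:

```lisp
;; Toy mark-and-sweep. An "mcell" stands in for an allocated chunk;
;; real collectors track raw memory, not structs like this.
(defstruct mcell marked value children)

(defun mark (cell)
  "Walk everything reachable from CELL, flagging it live."
  (unless (mcell-marked cell)
    (setf (mcell-marked cell) t)
    (mapc #'mark (mcell-children cell))))

(defun sweep (all-cells)
  "Drop anything the mark phase didn't reach, then clear the
flags so the next collection starts fresh."
  (let ((live (remove-if-not #'mcell-marked all-cells)))
    (dolist (c live) (setf (mcell-marked c) nil))
    live))

(defun collect-garbage (roots all-cells)
  "Mark from the roots, then sweep the full allocation list."
  (mapc #'mark roots)
  (sweep all-cells))
```

The "every so often" part above is just whatever policy decides when to call collect-garbage.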

Reference-counting

Another automatic approach, also known as deterministic garbage collection. As far as I understand, Perl does this. The idea is to keep a special eye on language primitives that create or destroy references to objects, and to keep track of how many references to a particular object exist. Every time a relevant primitive is called, modify the reference count for the target, and collect it if that count is zero afterwards. I'm not sure what the pitfalls are in practice, but there seems to be a lot less discussion about this approach than about the previous.
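For what it's worth, the pitfall most commonly cited is cycles: two objects referencing each other never drop to a count of zero, so naive reference counting leaks them unless you bolt on a separate cycle detector. Here's a toy sketch of the counting itself; ref and unref stand in for whatever primitives the language uses to create and destroy references, and all the names are invented:

```lisp
;; Toy reference counting. In a real implementation the compiler or
;; runtime inserts the ref/unref calls; nobody writes them by hand.
(defstruct rc-object (count 0) value)

(defun ref (obj)
  "A new reference to OBJ was created; bump its count."
  (incf (rc-object-count obj))
  obj)

(defun reclaim (obj)
  ;; a real collector would return the storage to the free list here;
  ;; we just note that the object is dead
  (setf (rc-object-value obj) nil))

(defun unref (obj)
  "A reference to OBJ was destroyed; collect it at zero."
  (when (zerop (decf (rc-object-count obj)))
    (reclaim obj)))
```

The "deterministic" label comes from the fact that collection happens at the exact moment the last reference goes away, rather than at some collector-chosen pause.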

Circular Memory

I only vaguely understand this one, and from what I understand, it's fairly limited, but here goes. The situation you'd want to use this or something like it is when a particular allocation is only needed for a short time before being discarded, and can be discarded sequentially. If you're in that situation, what you can do is figure out how much memory you have, then allocate things to it in order and hop back to the beginning when you're done. You need a pointer to the next block you can use (which you update every time you need to allocate a new thing), and a pointer to the first still-relevant block (which you update every time you free something). If the two ever overlap, you know you have a problem. Shapes other than circular are possible. For instance, you could have some sort of self-referential tree structure that accomplishes the same thing for mildly different use cases.
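A minimal sketch of the circular version, assuming fixed-size blocks and strictly first-in-first-out lifetimes (both the names and the representation are invented for illustration):

```lisp
;; Toy ring allocator: NEXT chases FIRST around a fixed-size store.
;; Only safe when allocations are freed in the order they were made.
(defstruct ring
  (store (make-array 8 :initial-element nil))
  (next 0)    ; next block we can hand out
  (first 0)   ; oldest still-live block
  (live 0))   ; how many blocks are currently in use

(defun ring-alloc (ring value)
  (let ((size (length (ring-store ring))))
    ;; the "overlap" problem from the prose: next caught up with first
    (when (= (ring-live ring) size)
      (error "allocator full: next pointer caught up with first"))
    (setf (aref (ring-store ring) (ring-next ring)) value
          (ring-next ring) (mod (1+ (ring-next ring)) size))
    (incf (ring-live ring))
    value))

(defun ring-free (ring)
  "Free the oldest allocation; lifetimes must be FIFO for this to be safe."
  (let ((size (length (ring-store ring))))
    (setf (aref (ring-store ring) (ring-first ring)) nil
          (ring-first ring) (mod (1+ (ring-first ring)) size))
    (decf (ring-live ring))))
```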

Decentralization

This isn't a thing related to memory; this is separate.

I've got this pattern of

assuming problem foo is solved

going on about my business for a while

taking a close look at foo for the purposes of implementing something related to it

suddenly realizing that foo is not only unsolved, but can't possibly be solved

thinking about it really hard in an obsessive/compulsive sort of way

learning to live with a deep dissatisfaction about some small part of the universe

This has happened for identity, free will, type systems, authentication, and most recently decentralization.

More will follow, I'm sure.

I heard about the peer-to-peer connection mechanisms in development for HTML5, and figured it would be nice to put something nice and small together using it. To that end, I'd absorbed and discussed the idea of mental poker with a couple of people, and thought I had the hard parts under control. It turns out that this new set of functionality is still going to need central servers to set up the connections though, at which point I dismissed the approach only to notice a cognitive rupture. Every "distributed" protocol seems to need a central server to bootstrap. The web needs DNS, torrents need trackers, cell-phones need satellites and/or transmission stations, etc etc.

This fundamentally shifts the problem in my mind. If we're going to be burdened with some central functional cluster anyway, it looks like a better problem to solve might be "how do we perform tasks with multiple 'central' clusters" rather than "how do we have as few central structures as possible".

Fuck you if it doesn't make sense. It's late and I'm piping a stream of consciousness here. I was also going to talk about a few other things on my mind; assembler and some new revelations about Flow Based Programming in particular, but I'm about to lose coherence entirely, and nobody wants that.

I'll let you know what, if anything, comes of any of these meditations. For the moment, you'll have to tolerate some more radio silence.

Monday, April 21, 2014

No nuts and bolts this time. Here's a random collection of insights I've had while trying to put together the version 0.1 of a notebook style editor:

Being async Pays

My initial assumption was that evaluation would be an inline thing. That is, you send out a POST request, that code is evaluated synchronously, saved out to your fact base, and the HTTP response consists of that evaluated result which you can then display prettily on the front-end. Turns out that's a big fat no. Even discounting any issues with the Common Lisp return model, and the eventual attempt at making this a multi-user editor, the synchronous approach doesn't quite work. Which I found out the first time I accidentally wrote a loop without the required while. Something like

(loop for next = (pop stack) do (something-to-next))

The trouble should be obvious. That's the sort of computation that you want off in its own isolated thread. And you want to be able to kill that thread if it turns out to be as boneheaded a mistake as that was, which means that you have to notify the front-end of computations in progress, go off and perform it, then notify the front-end again when you're done. And that's not something you can do synchronously.
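For contrast, the version with the missing while presumably looked something like this (stack and something-to-next are just the stand-ins from the snippet above):

```lisp
;; with the while clause in place, the loop stops once the stack runs dry
(loop for next = (pop stack)
      while next
      do (something-to-next next))
```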

In the end, I spawn a single tracked thread to do the work of evaluation. I notify the front-ends when it begins work, and then again when it completes. At any point in between, a front-end can send a termination signal to kill that thread rather than waiting it out to completion. You can see the implementation here, here, and here. This will, entirely coincidentally, make it much easier to extend cl-notebook into a multi-user editor later.
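In outline, that looks something like the following sketch using bordeaux-threads. Note that notify-front-ends and the *evaluator-thread* convention are simplifications invented here; the actual cl-notebook implementation differs:

```lisp
;; Sketch of a single tracked evaluator thread (bordeaux-threads).
;; NOTIFY-FRONT-ENDS is a hypothetical stand-in for the event-stream
;; messages described in the prose.
(defvar *evaluator-thread* nil)

(defun eval-cell (form)
  "Kick off evaluation in its own thread, notifying listeners
before and after."
  (setf *evaluator-thread*
        (bt:make-thread
         (lambda ()
           (notify-front-ends :starting-eval)
           (let ((result (eval form)))
             (notify-front-ends :finished-eval result)))
         :name "notebook-evaluator")))

(defun kill-eval ()
  "What the front-end's termination signal ends up calling."
  (when (and *evaluator-thread* (bt:thread-alive-p *evaluator-thread*))
    (bt:destroy-thread *evaluator-thread*)
    (notify-front-ends :killed-eval)))
```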

Being surgical pays. Sometimes. Not in the way you'd think.

The first version of this editor basically spat out the complete current notebook state at the front-end on each action, and the front-end re-drew the entire thing on every change. You'd think that the sheer inefficiency of this approach would get to me, but you'd be wrong. It was actually fast enough that had performance been the only factor, I'd have left it at that. The problem with that approach is that you end up clobbering giant chunks of client-side state each time. In particular, re-drawing every cell (and hence, re-initializing each of their CodeMirror instances) meant that I was blowing away undo information for each editor window every time anything happened. And that's annoying as fuck. I'm not sure anyone's formalized the point into a commandment yet, but you really should act as though they have: Thou Shalt Not Mangle Thy Clients' Data. That applies no matter how temporary the data is. Even if your job is to take said data and transform it in some way, say for instance by evaluating it, you should do so with a copy instead of a destructive change. And even when you absolutely must be destructive, be as selectively destructive as you possibly can.

Thinking About Space Doesn't Pay. No, not even that much.

I almost left this one out entirely because it seemed so self-evident, but on reflection, it's something I've had to learn too. Space doesn't even begin to matter for systems like this. I'm talking both about memory and about disk space. Yes, there are some applications for which this is not the case, but it's true as a rule.

The fact-base project in particular has been sticking in my craw in this sense. I kept thinking things like 'Holy shit, I'm keeping history for every cell, on every edit forever. This is going to be huge! I'll have to figure out a way to condense these files, or start them off from a non-zero state so that I can throw out history at some point!'

Completely pointless.

At the beginning of the cl-notebook effort, I started a scratch file which I've been steadily editing, putting through various save/auto-save tests and just all round mauling with updates. This thing has been around for months at this point, and it's taken much harder beatings than the average notebook ever will. Wanna know what its current total size is?

That's smaller than many Word and PDF documents I've worked with, and those don't bother keeping my entire editing history around. So I figure I can get away with treating disk space as if it were infinite for my purposes. Absolute worst case scenario, I'll compress it. And since I'm dealing with plaintext files, that should be rather effective.

Return Values are Complicated

I mean, I knew that already, but it turns out there are even more intricacies here. I initially assumed I'd be able to just keep a single return value per cell (by which I mean the return from a single Lisp function, which can be zero, one or more values). Then it hit me that a cell might have more than one expression in it. Then it hit me that return values aren't enough; you need to be able to handle *standard-output* emissions and warnings on a per-expression basis rather than on a per-cell basis, and that we'd want type annotations in some places, since we'll be serializing various things to strings and it would otherwise get confusing. Then it hit me again, and I sat down to write down something workable. Each cell now stores a result, which is zero or more values, each of which is actually a value and a type.
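A rough sketch of what capturing that richer result might look like; capture-result is a hypothetical helper for illustration, not the actual cl-notebook code:

```lisp
;; Capture every value an expression returns, paired with its type,
;; so the front-end can decide how to render each one.
(defun capture-result (form)
  (mapcar (lambda (v) (list :value v :type (type-of v)))
          (multiple-value-list (eval form))))

;; (capture-result '(values "hello" :foo)) yields a list of two
;; (:value ... :type ...) plists, one per returned value
```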

That lets the front end figure out what it needs to do on a per-cell basis, which means that the server-side implementation of a cell's noise becomes very mechanically simple. It's basically just an extra fact we keep around as a label, which the front-end queries to decide how to transform the high-detail result.

cell-type is not the same thing as cell-language

Early on, I had this idea that I'd be semi-implicit about what's in a cell. At that point there were two cell-types: common-lisp and cl-who. The idea would be that this single cell-type would determine both display and evaluation properties of the contained code. Nope, as it turns out. And the thing that finally made this clear to me was thinking about how I'd treat test code. It's still Common Lisp, you see, so I'd still be evaluating it the same way as any other code cell, but I didn't want it showing up in certain exports.

The solution I ended up settling on is to be explicit about everything. Each cell now has a cell-type as well as a cell-language. The first is one of markup (for prose blocks), code (for code blocks), and tests (for code blocks that I'd want to separate from actual runtime code).

Naming things is difficult

I think there's a joke about this somewhere. Something along the lines of

The only two difficult problems in programming are naming things, cache invalidation and off-by-one errors.

and man did that ever bite me this time. It's obviously bad to tie the file-system name of something to its display name, if for no reason other than it opens up various injection vectors that you'd rather not open up. It turns out it gets even more complicated when you're dealing with history trees of various documents, and you're trying to reduce headaches for your users. Here, think about this problem for a bit. Say you had a document type that you'd let your users name. We're obviously not naming the on-disk file after the display name of the document, so this is a matter of storing a record in the document that'll keep a user-entered name around for reference purposes. That gives you the additional benefit of being able to roll back renames, and the ability to see what a given document was called at some point in the past. Now, say you want to be able to branch said document. That is, instead of being a single history line, you want to be able to designate certain timelines as belonging to different branches than others. What you need now is four different levels of naming. Five, depending on how ham-handedly you've decided to store and relate those branches. At minimum you need

The filename of the document, which is different from

The display name of the same document (which might be different in different branches, and at different points in time), which is different from

The display name of a particular branch of the document (which might need to be human readable, or user entered) which is different from

The collective, still human-readable name for a set of branches belonging to one document.

If you've stored a branch collective as a file/folder on disk, you'll have to name that too. So, what would you do?

Confronted with this problem, I punted. Branching is basically going to be a copying operation. What you'll eventually get, once I put the branching system together and you try to use it, is a complete, self-contained fact base that happens to (possibly temporarily) have the same display-name as its origin (plus the word 'branch'), and a fact or two that point to a specific time point in that origin. From there, they'll diverge and be entirely separate entities. No, I'm not entirely sure this is the best approach, or even an acceptable approach, but it seems to be the only way to avoid asking the user to manage four different names in the space of one document. So I'll take it.

There's probably more where those came from, but they're all I could pull out of my head at short-ish notice. I'll try to let you know how the rest of the project goes, as it happens.

Sunday, April 20, 2014

So it's about time I talked about this thing, and what the hell exactly I'm thinking. Because I've been working on it for a while, and while it's still kind of buggy, I've already found myself wanting some of the features it has when working with Emacs, or other text editors I've had to work with.

Notebooks

Actually, before I get to that, a little exposition. As far as I know, notebook-style editors already exist for Python, R, and Clojure. And a second one for Clojure. The general idea is to have a web-based interface, with code being divided into small, re-arrangeable chunks called cells, each of which is associated with its evaluation results. Some cells are code in whichever language the notebook supports; others are just prose, usually in markdown. The idea is that you get a dynamic environment that lets you selectively evaluate small chunklets of code, and intersperse relevant documentation in the form of prose and tests.

cl-notebook

You can find it at the other end of that github link I start out with. Last time I mentioned this project in passing, I noted that the ultimate goal was replacing Emacs as my Common Lisp IDE of choice, and that's no small task. Despite the existence of subpar, I don't have proper s-expression navigation yet, and I haven't wired up proper auto-completion or argument hinting yet, and there's a bunch of other stuff I still want to build, ranging from the necessary to the frivolous. On the whole, I think I'm on the right track, because certain things are somewhat easier here, and because there are some features that I find myself missing when I hop back into emacs.

Let's just get those out of the way right now, actually. Firstly, I get to program in my browser, which is surprisingly elegant once I hop into full-screen mode. It lets me tab over to search for relevant links to talk about, and since my browser can be set to start up with previously open tabs, I get to resume editing exactly where I was in a later session. Secondly, because of the back-end storage system I'm using, I get to have a running history of all the edits I've ever made, which is updated every time I evaluate a cell (I'm working on having it implicitly updated every so often between evaluations, but don't have that part checked in). Thirdly, I've got exporters wired up that let me put together a book, then export it as an HTML page, or as a .lisp file. And I'm planning to add two more, one to just extract tests and a second to just hand me an executable from the given book.

The first one is minor, and makes it all the easier to randomly check my email or github notifications, so pros and cons. The third could conceivably be wired together in Emacs. The second one is huge. I don't know about you, but I've been programmed to hit save every few seconds in whatever editor I've got open just because crashes happen, and I don't want them to be too painful. I guess I could have wired up Emacs to do that every so often, but it sounds fiddly as hell. You don't particularly want a standard editor saving every three seconds or so; you might be in the middle of an edit the currently keyed-in part of which doesn't make sense by itself, and most editors 'save' by overwriting your existing file. Which is exactly what you don't want when you've got an unfinished code edit. Hopefully, adding total-history retention to the equation softens the blow.

Core Concepts

Code is organized into books. Each book is the complete history of a bunch of cells. A cell can contain code, tests, or markup in a particular language (currently just Common Lisp, but given how many languages I blog about, it'll probably need at least highlighting support for a few more). A cell's language and type affect the evaluation approach we take on the back end, as well as which exports it appears in, and in what form. Specifically, common-lisp/markup cells are evaluated as :cl-who forms, don't appear in .lisp exports and only contribute their results to an .html export. By contrast, common-lisp/code cells are straight up evaluated (capturing warnings, errors and standard-output), contribute their contents to .lisp exports, and both their contents and results to .html exports.

In addition to a type, language and id, a cell has a contents, result, and a noise. The contents is what the user has typed in, the result is what that contents evaluates to and the noise dictates how the results are displayed. This is a normal cell:

There's also a silent setting which lets you ignore the evaluation result entirely.

You can edit a cell (changing its contents), evaluate it (changing its result), delete it, change any of its mentioned properties, or change the order of cells in a notebook. Each of these is an event that gets initiated by a POST request and gets completed with an event-stream message to any listening front-ends (which means I'll relatively easily be able to make this a multi-user editor when I get to that point). Enough low-level stuff; here's an example.

Example

This is a piece of code I actually wrote using cl-notebook.

A parameter is a thing that starts with #-. It might be nullary or unary. A parameter followed by a parameter or an empty list is interpreted as nullary. A parameter followed by a non-parameter is unary. Command line args are more complicated in the general case, but not in cl-notebook

It's a small utility function for parsing command line arguments in :cl-notebook. You can see all the relevant features on display there; it starts with some documentation prose in a markup cell, has definitions in a code cell, and finally a bunch of example invocations of each thing in a tests cell. They're not really tests, because they don't encode my assumptions about the return values of those calls, but you could imagine them doing so. The point is, they won't be part of a .lisp export, but will show up in an html export like this one.
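The nullary/unary rule described above can be sketched like so, assuming a parameter is anything that starts with a dash. This is my reconstruction for illustration, not the actual cl-notebook utility:

```lisp
;; Reconstruction of the rule: a parameter followed by another
;; parameter (or by nothing) is nullary; a parameter followed by a
;; non-parameter is unary, taking that non-parameter as its value.
(defun parameter-p (str)
  (and (stringp str) (> (length str) 0) (char= (char str 0) #\-)))

(defun parse-args (args)
  "Return a list of (name . value) pairs; nullary parameters get T."
  (loop while args
        for param = (pop args)
        when (parameter-p param)
          collect (if (or (null args) (parameter-p (first args)))
                      (cons param t)             ; nullary
                      (cons param (pop args))))) ; unary
```

So (parse-args '("-v" "-o" "out.txt")) would pair "-v" with T and "-o" with "out.txt".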

That's it for the introductory thoughts. I'll try to gather some insights into such editors into the next piece I put together. And I'll continue dogfooding until it gets good enough to call "delicious".

You may have noticed that this isn't an animated gif. It hangs there for something on the order of thirty seconds, more if profiling is on, and then returns the expected result. So that won't really do. There are some interesting points I'll talk about later that have to do with clause order and the underlying operations. But even though this is probably the worst way to write this particular query, it should return in under a second.

Thirdly, that I had exactly zero use cases for or-goals. This might change, but until then, it looks like I don't even need unification[2].

So as a result, I sat down and took the precise opposite approach to traversal from the one I tried last time. Instead of trying to keep it elegant and lazy, let's make it hacky and eager. Here's our problem, once again:

Except, you know, it should be smarter about using indices where it can. But that's a pretty straight-forward specification.

lookup and decide-index changes - take 1

The first thing I had to do was change lookup and decide-index a bit, because I wanted them to be mildly less naive. And yeah, I broke down and added some macrology to pull out all the repetition in the index-related functions. Turns out that was a good thing.

Short version is, the function now takes a fact-base in addition to an a, b and c, and checks whether a particular type of index is kept for a fact base before otherwise seeing whether it would be appropriate for the current query.

lookup now has to be mindful of this, and has to check that the indexed facts match the incoming query, because we're now potentially using a more general index than the query calls for. My gut tells me this is still a net increase in performance since last time, even though our best case is now O(n) with the size of the result rather than O(1). If it comes to it, I'll go back and make that more efficient.

That more complicated version of lookup expects two values instead of one: which index we're using, and which index we'd ideally use. If the two are the same, we just return the results of our lookup; otherwise we have to do the narrowing traversal. That's about as efficient as it's going to get without making it lazy. Which I guess I could, but not right now. However, we also need a modified decide-index to pull this little trick off. And that's going to be fugly.

Say what you will about imperative programming; it's efficient. That's a single pass over the relevant indices that returns both the least general applicable index and the ideal index for a given query, which means we can now profitably compare the two in lookup. Our best case is back to O(1), since we don't need to traverse queries for things we've indexed.

With those modifications, I can pull some fancier crap in translating for-all calls into loops. Specifically, I can do this:

rather than the lazy-ish generator tree from last time. Thanks to our re-structuring of lookup, this is about as efficient as it's going to get without re-jigging goal order. The only edge case we have is what happens if the entire goal is perfectly indexable, except it seems that the programmer would use lookup directly in those situations[3].

In order to do that, we have to replace everything other than variables with gensym calls, but keep the same tree structure. loop does deep destructuring, so we can get away with using this as a pattern-matching strategy. We also need to replace already bound variables from previous destructuring-forms with the same gensym calls so they don't get re-assigned unnecessarily.

Easy, right? grab the results of goal->lookup and goal->destructuring-form and stitch them into a loop along with the collecting clause. Nothing fancy here, except for that cryptic note about a different method definition.

And this is the full story[6]. Because of the specific way we want lookup and destruct to interact with their containing bindings, their order matters quite a bit. Play around with the macroexpander if you don't quite see it from just the definition.

Anyhow, the way we deal with and goals is by building up a chain of loop forms, each one dealing with a single goal while taking the previous goals into account. All but the last one need to append their results, while the last needs to collect them. The only part we've got left is the now trivial step of putting together the for-all macro interface to the rest of this compilation pipeline[7].

This concludes the part of this post wherein I talk about implementation details. The rest is just one or two interesting notes about traversals. If you're getting bored, or tired, this is a pretty good break-point for you.

Traversal Notes

Near the beginning of this piece, I said

...this is probably the worst way to write this particular query... - Inaimathi

and the reason should be fairly obvious now that we know exactly how we go about finding these answers. Remember, the expansion for this form, after compensating for the different keyword argument in our new for-all, is

Now granted, we're aggressively using indices where we can, so we can slice a lot of the constant time out of this equation depending on how often such an operation happens, but no matter how efficiently we slice it, we're going to take a number of steps equal to goal-3 * (goal-2 * goal-1). That is, we're going O(n) over the candidates for the last goal, for each candidate of the previous goal, for each candidate of the goal before that, and so on.

This is why the indices help us a lot. If we couldn't effectively discount swathes of our initial corpus, the performance characteristic would be O(n^m), where n is the size of our fact base and m is the number of goals. Meaning that it behooves us to cut as many candidates as early as possible, since early reductions in our problem space will give us much better returns.

are logically equivalent, the latter is going to perform noticeably better, because (?id :number 62) has a much smaller set of candidate facts than (?id :user ?name) in our particular corpus. One interesting exercise, which I'll leave for next time, would be to have for-all try to optimally sort its and goals by putting the smallest candidate lists at the beginning, so as to reduce the search space with no thought required from the user. The above is a trivial example; there's one goal that has more indexable terms in it than the others, so in general[8] it will probably yield a smaller candidate list. The real way to go about this feels like it would be to aggressively index goals at the start of a query and sample their corpus size, then sort on that. Not sure if that would cost more than it buys me though, since it feels like it would get complex fast.

Anyway, like I said, I'll leave it for next time.

1 - [back] - If I end up seeing performance issues in the things I'm building out of fact-base.

2 - [back] - Which makes things much simpler for this approach. Hopefully, you'll see why as we go.

3 - [back] - and they can, since it's still an :exported symbol itself.

4 - [back] - if it has been bound by a previous destructuring-form, it'll be assigned by this point, which means we'll be able to index by it. Otherwise, gethash will return nil, which is exactly what we want.

5 - [back] - This is where we could be a bit more efficient, in case you're interested. If we wanted to be very precise about it, we'd say that we could use a compound form with variables as an index, provided that all of its variables have been bound prior to this point in the traversal. I'm leaving it out for now because

it would further complicate an already tricky chunk of code

I'm not sure how often this edge case would happen in practice and

if it does happen, the current result will be a slightly less efficient traversal, which doesn't sound too bad. If the consequence were incorrect results instead, I'd have reconsidered

6 - [back] - As an aside, this is the first place I've seen in something like 8 years where a comment is appropriate. It doesn't mirror the code to which it pertains, and it explains a non-obvious but necessary facet of the implementation. Usually, I'd either work out some naming scheme that would make the point obvious, or just factor out the chunk of code that needs explanation. There doesn't seem to be a simple way of doing either here[9].

7 - [back] - And just to highlight this, it is a compilation pipeline. I mentioned this at a semi-Lisp-related meet-up lately, and it's true enough to repeat to the internets: a good way of conceptualizing a Common Lisp macro is as a compiler that takes some Lisp code and emits different Lisp code. Because of the way Lisp is structured, we get the first chunk of an actual compilation pipeline for free, and essentially start with a tokenized input. It's a pretty powerful technique once you get your head around it.

Wednesday, April 9, 2014

So it seems that a lot of people are into this logic programming thing I've been reading about lately. There's the already mentioned Reasoned Schemer trio of Friedman/Byrd/Kiselyov behind the beautiful but arcane miniKanren language, a Prolog-like language contained in Peter Norvig's Paradigms of Artificial Intelligence Programming chapters 11, 12 and 14, another one in Graham's On Lisp chapters 19, 22, 23, 24, and yet another in chapter 4.4 of Abelson and Sussman's SICP. So there's a lot of literature around dealing with how you go about building a unifier or pattern-matcher[1].

Anyway, I've been consuming this literature for a while, and the part I want to zoom in on is searching the database. The other stuff is easy; a unifier can be straight-forwardly built in about ten lines[2] and handling variables is the same tree-traversal stuff you've seen a hundred times before, but the actual search never seems to be the focus of these things. And I'm looking for a particular type of search for fact-base. I showed it off recently at a Toronto Lisp Group meeting along with the app it's supposed to enable and mentioned that querying is mildly annoying when you get to compound queries. Specifically, I took as an example the very simple database
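To make that concrete, here's a stand-in for the sort of database and manual query I mean. The bare-list representation and the data are illustrative only; the real fact-base has its own API and index system, but the shape of the problem is the same.

```lisp
;; Illustrative stand-in: each fact is a (subject predicate object)
;; triple, and the fact base is just a list of them.
(defparameter *facts*
  '((0 :message t) (0 :author "Inaimathi") (0 :body "Hello all")
    (1 :message t) (1 :author "Somebody")  (1 :body "Hi there") (1 :reply-to 0)
    (2 :message t) (2 :author "Inaimathi") (2 :body "Welcome")  (2 :reply-to 1)))

;; The manual, multi-stage version of "all bodies authored by
;; Inaimathi": find the matching :author facts, then iterate again
;; to pick out each matching id's :body.
(loop for (id key val) in *facts*
      when (and (eq key :author) (string= val "Inaimathi"))
        append (loop for (id2 key2 val2) in *facts*
                     when (and (eql id2 id) (eq key2 :body))
                       collect val2))
;; => ("Hello all" "Welcome")
```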

That's even passably fast, thanks to our index system, but it's annoying to write, and it forces me to do a fact-base->objects conversion in some places rather than write out these multi-stage iterations myself. What I'd like to be able to do in the above is something like
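Something in this vein; I'm hand-waving the exact surface syntax here, and the and-nesting in particular is a guess at what the eventual form will look like:

```lisp
(for-all (and (?id :author "Inaimathi")
              (?id :body ?body))
         :in my-fact-base)
```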

and have the system figure it out for me. Granted, in this situation you don't gain very much, but the gains compound for more complex queries. For instance, if I suddenly decided I wanted to select All the message bodies authored by Inaimathi pertaining to other messages, the query language version handles it very simply:
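Something shaped like this, assuming a :reply-to predicate that links a message to the one it's responding to (that predicate name is my invention for illustration):

```lisp
(for-all (and (?id :author "Inaimathi")
              (?id :reply-to ?other)
              (?id :body ?body))
         :in my-fact-base)
```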

whereas the manual version would add another level of iteration I'd need to work through. Oh, and have fun with the situation where you only want the first 5 or so hits. The easiest solution with the manual approach is searching the entire space and throwing away all but the first n results. You could do better, but you're suddenly in the supremely annoying situation where your queries all look mildly different, but perform the same basic task.

What I figure I want is a lazy or lazy-ish way of getting the results. The lazy solution can easily be converted to the eager solution later, but it's really painful to take the eager approach and then find out that you only needed about 4% of the work you did. I'll be using generators rather than outright lazy sequences, just because they're mildly easier to put together. For a single goal, that's trivial.
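For instance, once you have a generator in hand, taking just the first n results is a short utility rather than a whole separate query; fail? here is the PAIP-style failure check mentioned below:

```lisp
(defun take-results (n generator)
  "Pull at most N results from GENERATOR, stopping early on failure."
  (loop repeat n
        for res = (funcall generator)
        until (fail? res)
        collect res))
```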

(for-all (?id :author "Inaimathi") :in my-fact-base)

All you have to do here is have a generator that runs over the facts in my-fact-base and returns the next matching one it finds. Something like
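(What follows is my sketch rather than gospel; unify, fail, fail? and no-bindings are taken straight out of Norvig's PAIP code.)

```lisp
(defun match-single (goal facts &optional (bindings no-bindings))
  ;; Returns a generator: each call unifies GOAL against successive
  ;; facts, yielding the resulting bindings, or (fail) once the
  ;; facts run out.
  (let ((remaining facts))
    (lambda ()
      (loop
        (if (null remaining)
            (return (fail))
            (let ((res (unify goal (pop remaining) bindings)))
              (unless (fail? res) (return res))))))))
```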

would do fine. I'm pointedly refusing to commit to an implementation of (fail), unify and bindings at each point for the purposes of this post, but am using the stuff out of Norvig's PAIP source code. For the uninitiated: A goal is the thing you're trying to match; it's an expression that may contain some variables. A variable is a thing that you can substitute; it can either be unbound or assigned a value in a particular set of bindings. If a unification fails, it returns (fail), and if it's successful it returns the set of bindings that would make that unification expression true. For instance, if you unified ?a with 5, starting with empty bindings, unify would return the set of bindings in which ?a is bound to 5.

So the above match-single definition would return a generator which, when called, would either (fail) or return the environment resulting from unifying the next element of facts with goal. Hopefully that's straightforward, though you may need to do a bit of reading up if you've never seen these terms before.

The next easiest thing to do would be handling a set of ored goals. That is
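That is, something shaped like this (syntax guessed, as before):

```lisp
(for-all (or (?id :author "Inaimathi")
             (?id :author "Somebody"))
         :in my-fact-base)
```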

What you want here is fairly complicated to express in English. I'm still trying to return a generator from the whole thing, but its behavior is complex.

If you only get one goal, you want to fall through to a call to match-single; that's still fairly straight-forward. The magic happens at more than one goal. And I just deleted about four paragraphs of prose that would have thoroughly confused you. It's not a very easy set of concepts to express in English because it refers to pieces of itself fairly often.

The image you want, once you've put the initial generator tower together, is one of those combo bike-locks.

If you want to search its entire possibility space, you spin the last ring until it runs out of values. Then you spin the second-to-last ring once, and retry the last ring. When you run out of values on the second-to-last ring, you spin the third-to-last ring once and so on. It's an incredibly tedious exercise, which is why I'd prefer a machine to handle it.

By the time we're calling this function, I assume it'll be handed at least one goal. You always want the generator of your first goal, and if you only get the one goal, you just return said generator and you're done. Multiple goals are where you need to pull some fancy footwork. Again, one chunk at a time:

This is where we set the rest-generator from earlier. It's just the procedure that will return the next result from proving the rest of the goals, given the bindings produced by proving the first goal against the starting set of bindings handed to match-ands initially. If calling the first goal's generator fails, we likewise fail; otherwise we set rest-generator to the generator we create by passing the result back up to match-ands.

...
(backtrack! ()
  (if (fail? (next-gen))
      (fail)
      (next)))
...

Occasionally, we have to backtrack. Which in this context means we try to call next-gen. If that fails, we likewise fail, otherwise we invoke next. Which...
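Pulling the pieces together, my reconstruction of the whole conjunction matcher looks something like this. The next-gen/backtrack!/next names come from the snippets above; the remaining details are guesses, with fail, fail?, unify and no-bindings again per Norvig's PAIP code:

```lisp
(defun match-ands (goals facts &optional (bindings no-bindings))
  (if (null (rest goals))
      ;; A single goal falls through to the single-goal matcher.
      (match-single (first goals) facts bindings)
      (let ((gen (match-single (first goals) facts bindings))
            (rest-generator nil))
        (labels ((next-gen ()
                   ;; Pull the next binding set from the first goal's
                   ;; generator; on success, rebuild the generator for
                   ;; the remaining goals, seeded with those bindings.
                   (let ((res (funcall gen)))
                     (if (fail? res)
                         (fail)
                         (setf rest-generator
                               (match-ands (rest goals) facts res)))))
                 (backtrack! ()
                   (if (fail? (next-gen))
                       (fail)
                       (next)))
                 (next ()
                   (if (null rest-generator)
                       (backtrack!)
                       (let ((res (funcall rest-generator)))
                         (if (fail? res)
                             (backtrack!)
                             res)))))
          (lambda () (next))))))
```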

We're setting up some name sanitation for certain words we'd like to use in the definition that should still be usable by callers of for-all. Note the use of replace-anonymous; its definition can be found in Norvig's Prolog implementation. The entirety of that cond decides which of our matchers we're going to use to traverse our corpus.

If we get passed the apply argument, we'll be doing something special later. Otherwise, we'll want to slot our results into the template in gen, and failing that, just slot it back into the querying goal form.

And that's the meat of it. We're going to be grabbing results out of our generator. As you can see, the special thing we're doing with the apply argument is stitching up a function to apply to a substituted list of our results. If we didn't get an apply, we're just slotting said result back into the template we defined earlier. I find that seeing some macroexpansions really helps understanding at this stage. So, here are the basics:
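Here's a hand-written approximation of the single-goal case to give the flavor; this is not actual macroexpander output. subst-bindings is PAIP's substitution function, and current-facts is a made-up accessor standing in for however the fact base exposes its fact list:

```lisp
;; (for-all (?id :author "Inaimathi") :in my-fact-base)
;; would expand into something shaped like:
(let ((gen (match-single '(?id :author "Inaimathi")
                         (current-facts my-fact-base))))
  (loop for res = (funcall gen)
        until (fail? res)
        collect (subst-bindings res '(?id :author "Inaimathi"))))
```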

And that's that. Granted, the implementation is a bit more complicated than just writing manual loops, but I'm convinced there are a couple of wins here. Firstly, the invocation is simpler, which means the above definitions will eventually "pay for themselves" in terms of complexity. Secondly, it seems like I could fairly easily mod this into parenscript-friendly forms, which would save me from having to convert fact-bases to object lists on the client side. But that's something I'll tell you about next time.

Footnotes

1 - [back] - Almost always using the question-mark-prefix notation for logic variables for some reason. I'm not sure what the approach gains or loses you yet. I guess in the case of miniKanren, it gains you the ability to unify on vectors since there's no ambiguity, and it might make it easier to read the resulting programs, but I'm not banking on that.

2 - [back] - Though do go over Norvig's version to see a dissection of the common bugs.

3 - [back] - And remember, backtrack! itself fails if it runs out of search space.

Ruby and Erlang each come with their own modes, and recent Emacs versions ship with a built-in Python mode and shell. Smalltalk uses its own environment (though GNU Smalltalk does have its own mode), and I'd really rather not talk about PHP. If you're writing in it, chances are you're using Eclipse or an IDE anyway.