For those who haven’t seen it yet, Twitter is a sort of micro-blogging broadcast tool that is apparently the next big meme. I haven’t really bought into it yet, but I do have an account. If you’ve got a Mac, the Twitterrific application makes it much more usable.

So far it’s kind of cool. Less intrusive than IM, easy to shoot off links you want to share, or to say “I’m heading to lunch at Sucre et Salay, who’s coming?” We’ll see if it catches on. With messages limited to 160 chars, I don’t see it replacing blogging anytime soon.

Yesterday’s post was perhaps a bit off the cuff. A lot of people were really scared by my script, and I don’t blame them. A lot of good points were brought up. It’s not that I didn’t consider them; it’s just that they’re already taken into account in how I do revision control. At least Luis had faith in me.

Below is an example of how I tend to organize a project (my pre-1.0 releases are a bit different, so I’m just leaving them out for the moment). There are effectively two kinds of tags: fixed (in black) and mobile (in red).

Notice that all the fixed tags are three-number versions. This means that if you request my-library_1.1.1.tar.gz, you’re going to get that same code point every time. However, you probably don’t want to do that …

I wouldn’t say I disagree with Jack Unrue’s statement about releases being necessary, but I think there’s a way to make both groups happy (and maybe I should just get around to implementing it). If the no-release crowd thinks that we don’t need to indicate functionality changes in some way, there’s still an impasse. If that’s not the case, I may have a solution.

We all know that a URL doesn’t have to point to a file, so what if http://example.com/releases/my-library_0.3.4.tar.gz created the archive on the fly? It would be built from the “my-library” repository, using the tip of the “0.3.4” branch, using tar -cz (if the user had typed “.tar.bz2” on the end, it would have used tar -cj). You can, of course, cache the archive (just check to make sure there are no new commits since the last build).
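A rough sketch of the dispatch logic, in Common Lisp. Everything here is illustrative — the repository path, the idea that each branch has a checkout on disk — and actually spawning tar is left to whatever your server uses:

```lisp
;; Illustrative sketch only: parse the requested filename into a
;; library name, branch, and compression flag, then build the tar
;; command line that would produce the archive. The /srv/repos path
;; is a made-up example.
(defun parse-archive-name (filename)
  "Split \"my-library_0.3.4.tar.gz\" into name, version, and compression."
  (let* ((tar (search ".tar." filename))
         (under (position #\_ filename :from-end t :end tar)))
    (values (subseq filename 0 under)
            (subseq filename (1+ under) tar)
            (if (string= (subseq filename (+ tar 5)) "bz2") :bz2 :gz))))

(defun archive-command (filename)
  (multiple-value-bind (name version compression)
      (parse-archive-name filename)
    ;; tar -cz for .tar.gz, tar -cj for .tar.bz2, rooted at the branch tip
    (format nil "tar -c~af - -C /srv/repos/~a/~a ."
            (ecase compression (:gz "z") (:bz2 "j"))
            name version)))
```

The caching check then just compares the archive's timestamp against the branch's latest commit before re-running the command.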

What are the benefits of this? Well, the developers don’t need to explicitly build release packages. Also, any bug fixes that get merged into the branch are included whenever the next person requests that archive. It gives the user the same interface they’re used to. It works fine with ASDF, etc. (as long as you remember to set the latest tag on your current branch).

Actually, this may work better as a commit hook than being pull-based. This should be pretty trivial to build for any specific source control system. Maybe it\’s my project for this evening.

Over the past few years my opinions on concurrency have changed. I mean, threads were always a pretty scary thing. I figured that no matter what I did, there must be a race condition or deadlock in there somewhere (and really, if you ever think otherwise, you’re just fooling yourself). I didn’t know of any alternatives. Java’s approach seemed to have some merits — all you had to do was write synchronized, right? — and I took it a bit further with a language I was designing some years back. Here’s an image from the spec:

My hope is to go to Snoqualmie for night skiing 2-3 nights a week, and hit Stevens every weekend (well, every weekend I don’t trek to Whistler, that is). This also means that you’re not likely to see me on Baker or Crystal at all this season.

I’ve got a bit of blogger block going on. I mean, it’s not like I’m particularly good at it in the first place, but it’s been two months since my last post. I have over a dozen posts sitting in the half-done state and am having trouble finishing any of them. So I figure I’ll just throw out something random and hope it can get me over the hump a bit. So bear with me, and hopefully something good will come out of this.

I’ve recently started getting back on track with my organization. The amazing part is that although I’ve been lax about it the past few months, the initial setup I did kept things from devolving too much. I still managed to pay bills, etc., and some areas actually continued to get more organized despite my no longer paying attention to them. The success of using GTD has been surprising. Now, about 5 months in, I figure it’s time to get back to the maintenance. The nice thing is that the framework is already in place. It’s not like “starting over” or “trying again”, but “picking up where I left off”, and it’s quite invigorating.

I also just found out (yes, after I wrote those first two paragraphs) that it’s NaBloPoMo (National Blog Posting Month). While I’ve already failed to meet the requirements, I can at least make a gesture by trying to get a post or two out this month.

I know, nobody believes I have anything to do with music anymore. I mean, it’s been forever since I’ve talked about it, let alone done something about it. So, for the first time since I moved to Seattle, here’s a taste of what I’m doing. It’s not much yet, just fun with drums. But I’m back. Seriously. This time it’s fo’ realz.

There are some issues with this recording … Ableton has failed to send me the serial number for my copy of Live, so I’m unable to save from it. As a result, I’m hijacking the audio output and recording that. It means it’s less than perfect … some silence both before and after, generally lower quality, and it’s quiet, so please turn up the volume a bit. Hopefully I’ll get that fixed once they bother to let me know what my serial is.

So, I wrote a new HTTP client package for Common Lisp. I know, there are already a bunch, but they all seem to be pretty limited in their functionality (yes, even the one in Closure).

For one, they expect character content, which is fine when you’re trying to load a Web page (as long as it’s in the encoding the client expects) but not so much when you’re dealing with binary data. There also isn’t much support for “Transfer-Encoding: chunked”. Closure does that, I think, but no one else. I also have some other nice features:

So, I needed to handle both binary data and chunked encoding for what I’m doing at work. I figured I might as well take some time to make a full implementation of RFC 2616. It’s not quite perfect yet, so here’s a list of the known shortcomings:

sometimes the content is returned as a byte vector, other times as a stream;

no parsing of date headers (I’ll probably just steal this from Closure);

it might be nice to return a string in cases where we have a Content-Type (but this might just add confusion); and

handling other header fields.
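For anyone curious what the chunked handling involves, the core of a decoder is small. This is a rough sketch only, and it glosses over the very binary-versus-character stream issue listed above (it pretends READ-LINE and an octet READ-SEQUENCE both work on the same connection stream):

```lisp
;; Sketch of decoding "Transfer-Encoding: chunked": each chunk is a
;; hex size line (possibly with ";extension" junk), that many octets,
;; then a trailing CRLF; a zero size ends the body.
(defun read-chunked-body (stream)
  (let ((chunks '()))
    (loop
      (let ((size (parse-integer (read-line stream)
                                 :radix 16
                                 :junk-allowed t)))
        (when (zerop size) (return))
        (let ((chunk (make-array size :element-type '(unsigned-byte 8))))
          (read-sequence chunk stream)
          (push chunk chunks))
        (read-line stream)))            ; consume the CRLF after the chunk
    (apply #'concatenate '(vector (unsigned-byte 8)) (nreverse chunks))))
```

A real implementation would also read any trailer headers after the final zero-size chunk.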

None of these things prevents it from complying with RFC 2616, though (I think). So, it’s just a matter of making it more user-friendly at this point. I’d appreciate any feedback. There’s just the darcs repository now, but I’ll make this into a CLNet project if people are interested.

Lightman’s writing reminds me of Martin Amis’ short stories — which I consider a huge compliment. Compared to his novels, Amis’ stories are a bit stilted, but there’s always a compelling exploration of some idea. Lightman captures the same in each chapter of Einstein’s Dreams. There’s nothing as powerful as Amis’ evocative sentences (e.g., Money’s opening line, “As my cab pulled off FDR Drive, somewhere in the early Hundreds, a low-slung Tomahawk full of black guys came sharking out of lane and sloped in fast right across our bows.”), but considering Lightman doesn’t have enough time to give any of his dreamscape inhabitants a name, their depth is impressive.

The weakness comes in the repeated structure. For each dream of a world where time behaves differently, there is the same pairing of extremes: “Some people fear traveling far from a comfortable moment. Others gallop recklessly into the future, without preparation for the rapid sequence of passing events.” And with this is the explanation of how time works in each dream. But amidst this cornucopia of parallel structure, there is something interesting and challenging. You begin to think of how Lightman’s descriptions differ from your own visions of such worlds. And that imagining is the real substance of Einstein’s Dreams.

Overall, I think I was disappointed here. I probably shouldn’t have been. I know Norman’s style. He gives more anecdotes and examples than studies and guidelines. He too focuses on stimulating the imagination — talk of robots with emotions, and other ways in which our perception of technology (and its of us) can be controlled and improved. But there is little real substance. A more enamored reader may find some meat by digging through the bibliography, but while I was content to listen to Don’s ideas of the future, it didn’t drive me to push any further. It was a good read, but not much beyond that.

I’ve been meaning to get to this one for years (ever since Dawkins told me, “you shouldn’t have read this, you should have read The Extended Phenotype instead,” at the end of The Selfish Gene). I waited far too long. This book has invigorated me more than anything I’ve read in quite a while (although Tufte’s Beautiful Evidence is on my short list for next reads). There is a constant building to some great revelation, but each page contains revelations of its own. Amusing cases where observed behavior caused organism-centric biologists to create Ptolemaic descriptions are followed by elegant descriptions of the gene-centric explanation. By no means am I qualified to debate Dawkins’ methods or conclusions, and in fact I wouldn’t have been able to approach this without The Selfish Gene as a primer, but it’s the energy in his writing — the drive to push the boundaries of what we know — that motivates me to behave the same in my work.

If for no other reason, you should pick up The Extended Phenotype just to read about the insane behaviors that have evolved in different species over time. You’ll be entertained, and you’ll walk away with an entirely different perspective on what can be inferred from behavior. (At what point does intelligence become more than an arbitrarily complex set of automatic reactions to stimuli?) It’s a must-read for anyone who feels the need to break out of a stagnant thought process — and we all do at some point or other.

So, I’ve been reading Getting Things Done, which I’ve had recommended to me repeatedly in the past year by a lot of people I respect. I’m most of the way through my initial reading, and I have to say it’s really promising. I’ve never been organized, or really into the idea of having a system to help me get or stay there, so the excitement I feel about this book is a bit unexpected.

Why is this book different? I think it’s because I identify more with the motivations and techniques that Allen talks about. The goal isn’t organization itself, but organization as a way to relax more often and more fully. His techniques involve organizing things so that what’s important is in front of your face when it can be accomplished. Calls are checked when you’re near a phone, computer activities when you’re in front of a computer, etc. While he uses a PDA for most of his lists of tasks, most of the book assumes you just have everything written on paper (which seems like a good baseline). It’s hard to highlight a few points, because it mostly just seems like common sense. However, he layers enough rigor and experience on top of that to make sure that it’s easy to do and covers everything you need to deal with.

I haven’t implemented the system yet; that’s step three. Once I finish the book, I’ll go over how it worked for 43 Folders, then I’ll head back to the practical chapters and start the two-day bootstrapping process.

I think my girlfriend’s copy of the book arrived today, and she seems excited about applying it, too. I think doing it in tandem will help us keep each other motivated as we head through some of the rough spots.

A while ago, Bill Clementson wrote about how to make LispWorks Personal Edition work under SLIME. It’s a lot harder than you might think (well, harder than I thought, anyway). And it seems like they change things over time to make it even more difficult. So, Bill’s 18-month-old instructions no longer work. I did a lot of cursing while trying to fix it. Here’s what I came up with. (Again, this is mostly Bill’s work; I just changed what was necessary to have it work with LispWorks 4.4.6.)

Gary King compares test-driven development with literate programming, saying that writing the tests often ends up being “silly”, since he doesn’t yet know where he’s headed and the tests get thrown out. I agree that tests are sometimes unnecessary, but I think part of Gary’s problem must be that he’s developing tests the wrong way.

The appropriateness of tests depends on why you’re writing the code. Do you have some use for it in mind, or is it pure speculation and experimentation? If it’s the former, writing tests is paramount and should be easy (or at least it gets easy once you get into the habit); the closer you get to experimentation, however, the harder and less appropriate tests become. I think that if you have any inkling of how you’d use the library, you should write some tests for it. It just gives you a bit of direction, and a way to see what you still need to accomplish. This can mean just grabbing part of your REPL session and throwing it in a file.

How is it possible that testing can be easy? Well, if you’re writing a library that you have a planned use for, the trick is to use it before you write it. You don’t sit down and try to come up with an API or some spec; you just pretend that your library already exists — perfectly coded — and use it. Even before TDD (now better explained as Behavior-Driven Development) this was an approach I used frequently. Write examples and other usages of your new library so you have some target to reach.
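To make that concrete, here's the sort of thing I mean — usage of a hypothetical URL-shortening library, written before a line of it exists:

```lisp
;; SHORTEN and EXPAND don't exist yet; these forms are the target.
;; Both names are hypothetical, purely for illustration.
(assert (string= (expand (shorten "http://example.com/some/long/path"))
                 "http://example.com/some/long/path"))
(assert (< (length (shorten "http://example.com/some/long/path"))
           (length "http://example.com/some/long/path")))
```

Dropped into a file, these two forms already are a test suite, and they pin down the API before any implementation work starts.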

These are your tests. These are the things that need to work, and they are your effective spec. Of course, new use cases are going to come along and some of your assumptions are going to change, but they should almost entirely come from your (and others’) usage of the library. The hard part always seems to be getting a test framework in place so you can slice up your usage and make tests out of it.

I think BDD is a significant step up from TDD because of its shift in mindset — it encourages you to think of what should happen as opposed to thinking that you need to test all functionality. Only test what you use, and only implement what you test.

It’s hard when you start, but with a bit of practice it becomes natural.

Note: I know testing end-user applications and Web sites is difficult. All I can offer is to make that layer as thin as possible, and do more things as libraries. This is the natural way of Lisp … you stop thinking in terms of applications at all, because you can play with everything interactively. Any user interface is there to simplify the API for less sophisticated users.

In Common Lisp, or any language that uses generic functions instead of class encapsulation, the package actually defines a domain. By domain, I mean a vocabulary in which each word has only one meaning.

Since Common Lisp allows only a single function definition for each name within a package, that meaning must be well-defined.

Often, the same concepts appear in multiple domains. Sometimes they keep the same names, and it’s easy to :use a package in order to extend that domain with your own. However, sometimes the concepts carry over, but the vocabulary doesn’t. Currently that means that for each word in the vocabulary, you need to do this:

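Concretely, the per-word boilerplate looks something like this (GEOMETRY and PAINTING are stand-in package names, purely for illustration):

```lisp
;; PAINTING wants GEOMETRY's RENDER under the name DRAW: one wrapper
;; per renamed word.
(defun painting:draw (&rest args)
  (apply #'geometry:render args))

;; or, without the extra call frame:
(setf (fdefinition 'painting:draw) #'geometry:render)
```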

Of course, defalias is a popular utility to make this renaming easier, but it’s not part of the standard. I wonder if this could be made even simpler with an idea stolen from Eiffel’s feature renaming:

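Something like this hypothetical :USE-RENAMING clause — emphatically not part of any real DEFPACKAGE — could pull in a package while translating its vocabulary:

```lisp
;; Hypothetical syntax sketch; :USE-RENAMING does not exist in the
;; standard, and GEOMETRY/PAINTING are illustrative names.
(defpackage :painting
  (:use :cl)
  (:use-renaming :geometry
                 (:render :draw)
                 (:translate :move)))
```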

This makes the domain translation explicit.

Thinking of packages as domains may help limit the naming conflicts that arise in generic-function-based OO systems, since people often try to use the packages much like Java packages, rather than like Perl packages (a much closer, albeit single-dispatch, analogue).

Gary King wrote an article a while back about how to set up ASDF test-ops. I basically stole it, but have changed it a bit to work more intuitively (IMO). He arrives at the following test system definition:

However, it seems to me that (asdf:oos 'asdf:test-op :moptilities-test) should be testing the tests, not Moptilities. I would much prefer to see (asdf:oos 'asdf:test-op :moptilities), and this is possible with a little rearrangement.

Yeah, in addition to moving things to the primary system, I also changed a couple of strings to keywords … just a personal preference. Also, OPERATION-DONE-P should operate on the MOPTILITIES system, rather than MOPTILITIES-TEST. Oh, and I changed the package name from TEST-MOPTILITIES to MOPTILITIES-TEST. I like to keep packages as nouns and functions as verbs.
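The shape of the rearranged definitions is roughly this — a sketch from memory, assuming LIFT as the test framework (as in Gary's article), with made-up component names:

```lisp
;; Sketch only: test-op on the primary system loads the test system
;; and invokes LIFT. File names here are illustrative.
(in-package :asdf)

(defsystem :moptilities
  :components ((:file "moptilities"))
  :in-order-to ((test-op (load-op :moptilities-test))))

(defsystem :moptilities-test
  :depends-on (:moptilities :lift)
  :components ((:file "tests")))

(defmethod perform ((op test-op)
                    (system (eql (find-system :moptilities))))
  (funcall (intern (symbol-name :run-tests) :lift)
           :suite :moptilities-test))

;; never consider testing "done", so it reruns on every request
(defmethod operation-done-p ((op test-op)
                             (system (eql (find-system :moptilities))))
  nil)
```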

I’m always searching through Gary’s archives to find this, and then re-remembering my variations, so this puts it all in one place for me (and maybe others) in the future. Leave a comment if you have any suggestions.

The other day, I mentioned that I was having trouble mounting my hangboard. It hasn’t abated. Today I was lent a drill, vice grips, and a few other tools. So I used the vice grips to remove the stripped screw, pre-drilled everything with a 3/32″ bit (that’s what the screws say to use as a pilot), and tried again.

This time I got the screw in farther, and the head didn’t get stripped. What did happen was that the screw sheared at the top of the threads. Thankfully, it still protrudes from the wall, but it’s covered by the plywood. Once I get the plywood off, I should be able to remove the screw … again.

I’m thinking that the problem I have now is that the drill bits are too short. The screw drives as deep as the pilot hole, but then stops dead and bad things happen. If I get a 3+″ drill bit, I might finally be able to drive the screws in all the way.

I tried to set up a plugin today that would allow me to embed Latex[1] equations in my posts, but to no avail. My hosting company doesn’t have latex installed. I need to get this blog (and the rest of my data) moved to my other space, then get the DNS transferred, but I’m really just too lazy.

Also, today is the second time I’ve failed to receive any email in my account. That’s really not OK.

[1] I don’t write LaTeX because I don’t have the ability (lacking Latex) to reproduce the logotype correctly and because gratuitous capitalization in general is anathema to me.

So, Stevey used to be at Amazon, but now he’s at Google. At Amazon, his blog was one of the high points, and it was a sad day when I learned he wouldn’t be able to back me up in my battles anymore. Now his blog is public, and it seems to be catching on already. Not to be left out, I figured I’d offer my own comments on his suggestions for learning math.

I’ve been hanging a number of things from my walls lately. Got a number of art pieces framed (finally), but also have my snowshoes up there and a guitar. I’ve been trying to figure out how best to hang my snowboard. Also, I got a hangboard recently, and decided to hang it today.

Things haven\’t gone as planned.

I took the time to find studs, measure out where all the screws would go, etc. Then I proceeded to completely strip the head on the first screw, while more than an inch of it was still sticking out of the wall. I don’t really know what to do from here. I hear I can drill a hole and hopefully get enough of a grip to withdraw the screw. I just see things getting worse from here, though.

Update: So, Fin mentioned that he pre-drilled the holes before putting up the board. Yeah, I’m going with that when I get things sorted out.

This is an email I sent out to a few friends, talking about what we’re doing with our time, and I realized that it’s exactly the sort of thing I should be posting.

There are a number of models of computation, the most famous being the Turing machine and the λ-calculus (which are equivalent, but the λ-calculus is prettier). They’re both pretty old, dating from the 1930s. Of course, they’ve pretty well stood the test of time. Two of the more modern ones that have held my interest are the ρ-calculus and the π-calculus. Both of these encode the λ-calculus very easily (meaning anything you can do with the λ-calculus you can do with them).
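As a taste of what "encoding" means here, Church numerals — the λ-calculus representation of the natural numbers — transcribe directly into any language with closures. In Common Lisp:

```lisp
;; Church numerals: a number N is the function that applies F to X
;; exactly N times. ZERO applies it no times; SUCC adds one application.
(defparameter zero
  (lambda (f) (declare (ignore f)) (lambda (x) x)))

(defun succ (n)
  (lambda (f) (lambda (x) (funcall f (funcall (funcall n f) x)))))

;; Convert back to an ordinary integer by applying 1+ to 0.
(defun unchurch (n)
  (funcall (funcall n #'1+) 0))

;; (unchurch (succ (succ zero))) => 2
```

The π-calculus and ρ-calculus can express the same construction in their own terms, which is what makes them at least as powerful as the λ-calculus.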