Stevey's Blog Rants

Thursday, January 24, 2008

Emergency Elisp

Are you an Emacs user but don't know Lisp? Welcome to my first Emacs Lisp primer! This should hopefully help get you over the hurdle so you can have more control over your Emacs sessions.

There are lots of ways to do things in Lisp, and some are "Lispier" than others. I'm going to focus on how to do things you probably already know how to do from C++ or Java.

I'm mostly focusing on the language itself, since that's arguably the hardest part. There are tons of Emacs-specific APIs that you can learn how to use from the documentation.

Lisp is good at some things (like code that generates code) and not so good at others (like arithmetic expressions). I will generally avoid talking about good vs. bad, and just talk about how to do things. Emacs Lisp is like any other language – you get used to it eventually.

Most Lisp introductions try to give you the "Tao of Lisp", complete with incense-burning, chanting, yoga and all that stuff. What I really wanted in the beginning was a simple cookbook for doing my "normal" stuff in Lisp. So that's what this is. It's an introduction to how to write C, Java or JavaScript code in Emacs Lisp, more or less.

Here goes. Let's see how short I can make it. I'll start with the boring (but hopefully familiar) lexical tokens and operators, then move on to how to implement various favorite statements, declarations and other programming constructs.

Quick Start

Lisp is written as nested parenthesized expressions like (+ 2 3). These expressions are sometimes called forms (in the sense of "shapes").

There are also "atoms" (leaf nodes, basically) that are not parenthesized: strings, numbers, symbols (which must be quoted with apostrophe for use as symbols, like 'foo), vectors, and other miscellany.

There are only single-line comments: semicolon to end of line.

To set a variable named foo to the value "bar":

(setq foo "bar") ; setq means "set quoted"

To call a function named foo-bar with arguments "flim" and "flam":

(foo-bar "flim" "flam")

To compute the arithmetic expression (0x15 * (8.2 + (7 << 3))) % 2:

(mod (* #x15 (+ 8.2 (lsh 7 3))) 2) ; mod, not %, because elisp's % only takes integers

In other words, arithmetic uses prefix notation, just like lisp function calls.

There's no static type system; you use runtime predicates to figure out the type of a data item. In elisp, predicate functions often end with "p". I'll let you figure out what it stands for.
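A few predicates, for flavor:

```elisp
(stringp "foo")    ; => t
(numberp "foo")    ; => nil
(listp '(1 2 3))   ; => t
(null nil)         ; => t
```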

Important: You can (and should) experiment with Lisp in the *scratch* buffer. You can evaluate an expression and see its result in any of several ways, including:

- put the cursor just after the closing paren and hit C-j
- put the cursor just after the closing paren and hit C-x C-e (eval-last-sexp)
- hit M-: (eval-expression) and type the expression in the minibuffer

The first approach spits the result into the *scratch* buffer, and the next two echo it into the minibuffer. They all also work for atoms – expressions not in parens such as numbers, strings, characters and symbols.

Lexical Stuff

Lisp has only a handful of lexical tokens (i.e. atomic program elements).
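The main atoms look like this:

```elisp
1                  ; an integer
2.5                ; a float
"hello"            ; a string
?q                 ; the character q
'foo               ; a (quoted) symbol
[1 2 3]            ; a vector
```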

The variables most-positive-fixnum and most-negative-fixnum are the largest and smallest integers representable in Emacs Lisp without bignum support. Emacs 22+ comes with a fancy bignum/math library called calc, if you need it. Arithmetic operations overflow and underflow the way you'd expect (in, say, C or Java.)

Booleans

The symbol t (just a letter 't' by itself) is true.

The symbol nil is false (and also means null).

In Emacs Lisp, nil is the only false value; everything else evaluates to true in a boolean context, including empty strings, zero, the symbol 'false, and empty vectors. An empty list, '(), is the same thing as nil.
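A few examples:

```elisp
(if nil "yes" "no")   ; => "no"
(if 0 "yes" "no")     ; => "yes" -- zero is true!
(if "" "yes" "no")    ; => "yes" -- so is the empty string
(eq '() nil)          ; => t
```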

Arrays

Elisp has fixed-sized arrays called "vectors". You can use square-brackets to create a pre-initialized literal vector, for instance:
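A vector of small primes, say:

```elisp
(setq primes [2 3 5 7 11 13])   ; a six-element vector
```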

Note that you do not (and cannot) use commas to separate the elements; use whitespace.

Vectors can have mixed-type elements, and can be nested. You usually use the function make-vector to create them, since literal vectors are singletons, which can be surprising.
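For example (the variable name is mine):

```elisp
(setq zeros (make-vector 5 0))   ; => [0 0 0 0 0]
(aset zeros 2 "two")             ; vectors are mutable and can mix types
(aref zeros 2)                   ; => "two"
```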

Lists

Lisp makes heavy use of linked lists, so there's lexical syntax for them. Anything in parentheses is a list, but unless you quote it, it will be evaluated as a function call. There are various ways to quote things in Lisp:
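The ones you'll use most:

```elisp
'(1 2 3)             ; quote: a literal list; nothing gets evaluated
(quote (1 2 3))      ; what '(1 2 3) expands to
(list 1 2 (+ 1 2))   ; builds the list (1 2 3) at runtime
`(1 2 ,(+ 1 2))      ; backquote: evaluates only the comma'd parts
```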

There's a lot more that could be said about lists, but other people have already said it.

Pairs

You can set the head and tail (also known as car and cdr) fields of a lisp link-list node struct (also known as a cons cell) directly, using it as a 2-element untyped struct. The syntax is (head-value . tail-value), and you have to quote it (see above).
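For instance:

```elisp
(setq cell '("name" . "Fred"))
(car cell)               ; => "name"
(cdr cell)               ; => "Fred"
(setcdr cell "Barney")   ; cell is now ("name" . "Barney")
```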

A common lookup-table data-structure for very small data sets is an associative list (known as an alist). It's just a list of dotted pairs, like so:

'( (apple . "red") (banana . "yellow") (orange . "orange") )
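You look entries up with assq (which compares keys with eq); it returns the whole pair:

```elisp
(setq colors '((apple . "red") (banana . "yellow")))
(assq 'banana colors)         ; => (banana . "yellow")
(cdr (assq 'banana colors))   ; => "yellow"
```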

Emacs Lisp has built-in hashtables, bit-vectors, and miscellaneous other data structures, but there's no syntax for them; you create them with function calls.
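For instance, a hashtable with string keys:

```elisp
(setq table (make-hash-table :test 'equal))   ; 'equal so string keys work
(puthash "apple" "red" table)
(gethash "apple" table)          ; => "red"
(gethash "durian" table 'nope)   ; => nope (a default you supply)
```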

Operators

Some operations that are typically operators in other languages are function calls in elisp.
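A sampling (assume the variables are bound):

```elisp
(+ 1 2)           ; 1 + 2
(* x x)           ; x * x
(< i 10)          ; i < 10
(and a b)         ; roughly a && b
(or a b)          ; roughly a || b
(concat "foo" s)  ; string concatenation
(equal a b)       ; deep (structural) equality
```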

Elisp has two versions of the classic switch statement: cond and case.

Elisp does not have a table-lookup optimization for switch, so cond and case are just syntax for nested if-then-else clauses. However, if you have more than one level of nesting, it looks a lot nicer than if expressions. The syntax is:
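In skeleton form (test and do-stuff are placeholders):

```elisp
(cond
 (test-1
  do-stuff-1)
 (test-2
  do-stuff-2)
 (t                   ; the default clause
  do-default-stuff))
```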

The do-stuff parts can be any number of statements, and don't need to be wrapped with a progn block.

Unlike classic switch, cond can handle any test expression (it just checks them in order), not just numbers. The downside is that it doesn't have any special-casing for numbers, so you have to compare them to something. Here's one that does string compares:
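Assuming a variable answer holding user input, and a do-the-thing function of your own:

```elisp
(cond
 ((string= answer "yes")
  (do-the-thing))
 ((string= answer "no")
  (message "OK, never mind"))
 (t
  (message "Please answer yes or no")))
```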

The symbol 'break is arbitrary, but is probably a nice choice for your readers. If you have nested loops, you might consider 'break-outer and 'break-inner in your catch expressions.

You can (throw 'break nil) if you don't care about the "return value" for the while-loop.
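For example, here's a while-loop that sums the numbers from 1 to 99, skipping multiples of 5, and uses throw to break out:

```elisp
(setq total 0)
(let ((i 0))
  (catch 'break
    (while t
      (setq i (1+ i))
      (when (> i 99)
        (throw 'break nil))        ; break
      (unless (zerop (% i 5))
        (setq total (+ total i))))))
total   ; => 4000
```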

To continue a loop, put a catch expression just inside the loop, at the top. For instance, to sum the numbers from 1 to 99 that are not evenly divisible by 5 (artificially lame example demonstrating use of continue):
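One way to write it:

```elisp
(setq total 0)
(let ((i 0))
  (catch 'break
    (while t
      (catch 'continue
        (setq i (1+ i))
        (when (> i 99)
          (throw 'break nil))      ; break
        (when (zerop (% i 5))
          (throw 'continue nil))   ; continue
        (setq total (+ total i))))))
total   ; => 4000
```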

All the loops above compute the value 4000 in the variable total. There are better ways to compute this result, but I needed something simple to illustrate break and continue.

The catch/throw mechanism can be used across function boundaries, just like exceptions. It's not intended for true exceptions or error conditions – Emacs has another mechanism for that, discussed in the try/catch section below. You should get comfortable using catch/throw for normal jumps and control transfer in your Elisp code.

do/while

Pretty much all iteration in Emacs Lisp is easiest using the loop macro from the Common Lisp package. Just do this to enable loop:

(require 'cl) ; get lots of Common Lisp goodies

The loop macro is a powerful minilanguage with lots of features, and it's worth reading up on. I'll use it in this primer to show you how to do basic looping constructs from other languages.

You can do a do/while like so:

(loop do (setq x (1+ x)) while (< x 10))

You can have any number of lisp expressions between the do and while keywords.

for

The C-style for-loop has four components: variable initialization, the test, the increment, and the loop body. You can do all that and more with the loop macro. For instance, take a JavaScript-style loop that counts i down from 10 and j up from 0 by 2, pushing i+j onto a result array until j reaches 10. In Elisp:

(loop with result = '()                  ; one-time initialization
      for i downfrom 10                  ; count i down from 10
      for j from 0 by 2                  ; count j up from 0 by 2
      while (< j 10)                     ; stop when j >= 10
      do (push (+ i j) result)           ; fast-accumulate i+j
      finally return (nreverse result))  ; reverse and return result

It's a bit more verbose, but loop has a lot of options, so you want it to be reasonably transparent.

Notice that this loop declares the result list and then "returns" it. It could also operate on a variable declared outside the loop, in which case we wouldn't need the finally return clause.

The loop macro is astoundingly flexible. Its full specification is way out of scope for this primer, but if you want to make Emacs Lisp your, uh, friend, then you should spend some time reading up on loop.

for..in

If you're iterating over a collection, Java provides the "smart" for-loop, and JavaScript has for..in and for each..in. There are various ways to do it in Lisp, but you really might as well just learn how to do it with the loop macro. It's a one-stop shop for iteration.

The basic approach is to use loop for var in sequence, and then do something with the individual results. You can, for instance, collect them (or a function on them) into a result list like so:
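For instance, squaring each element of a list:

```elisp
(loop for i in '(1 2 3 4 5)
      collect (* i i))   ; => (1 4 9 16 25)
```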

The loop macro lets you iterate over list elements, list cells, vectors, hash-keys, hash-values, buffers, windows, frames, symbols, and just about anything else you could want to traverse. See the Info pages or your Emacs manual for details.
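Functions themselves are defined with defun; here's a minimal one (the name is mine):

```elisp
(defun my-square (x)
  "Return X multiplied by itself."
  (* x x))

(my-square 5)   ; => 25
```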

The body can be any number of expressions. The return value of the function is the result of the last expression executed. You do not declare the return type, so it's useful to mention it in the documentation string. The doc string is available from M-x describe-function after you evaluate your function.

Emacs Lisp does not have function/method overloading, but it supports optional and "rest" parameters similar to what Python and Ruby offer. You can use the full Common Lisp specification for argument lists, including support for keyword arguments (see the defstruct section below), if you use the defun* macro instead of defun. The defun* version also lets you (return "foo") without having to set up your own catch/throw.
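For instance, a toy function of mine using &optional and &rest:

```elisp
(defun my-greet (name &optional greeting &rest others)
  "Greet NAME, and optionally OTHERS, with GREETING."
  (concat (or greeting "Hello") ", "
          (mapconcat 'identity (cons name others) " and ")))

(my-greet "Fred")                     ; => "Hello, Fred"
(my-greet "Fred" "Howdy" "Barney")    ; => "Howdy, Fred and Barney"
```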

If you want your function to be available as an M-x command, put (interactive) as the first expression in the body after the doc string.
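For example, a little command of my own:

```elisp
(defun my-insert-date ()
  "Insert today's date at point."
  (interactive)
  (insert (format-time-string "%Y-%m-%d")))
```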

local variables

You declare function-local variables with the let form. The basic syntax is (let (var-decl var-decl ...) body-expressions).

Each var-decl is either a single name, or (name initial-value). You can mix initialized and uninitialized values in any order. Uninitialized variables get the initial value nil.
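For example:

```elisp
(let ((x 10)
      (y 20)
      z)               ; z starts out nil
  (setq z (+ x y))
  z)                   ; => 30
```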

You can have multiple let clauses in a function. Code written for performance often collects all declarations into a single let at the top, since it's a bit faster that way. Typically you should write your code for clarity first.

reference parameters

C++ has reference parameters, which allow you to modify variables from the caller's stack. Java does not, so you have to work around it occasionally by passing in a 1-element array, or using an instance variable, or whatever.

Emacs Lisp does not have true reference parameters, but it has dynamic scope, which means you can modify values on your caller's stack anyway. Consider the following pair of functions:
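These are my own illustrative functions:

```elisp
(defun foo ()
  (let ((x 6))
    (bar)    ; bar can see (and modify) foo's x
    x))      ; => 7

(defun bar ()
  (setq x (1+ x)))   ; no local x here, so this finds foo's
```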

Dynamic scoping is generally considered a bad design bordering on evil, but it can occasionally come in handy. If nothing else, it's good to know it's what Emacs does.

return

A lisp function by default returns the value of the last expression executed in the function. Sometimes it's possible to structure your function so that every possible return value is in a "tail position" (meaning the last expression out before the door closes, so to speak.) For instance:
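Here's a little sign function (my own example) where every result sits in tail position:

```elisp
(defun my-signum (x)
  "Return -1, 0 or 1 according to the sign of X."
  (if (< x 0)
      -1
    (if (> x 0)
        1
      0)))
```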

The return value is just the result of the last expression, so whatever our nested if produces is automatically returned, and there's no need here for an explicit return form.

However, sometimes restructuring the function this way is inconvenient, and you'd prefer to do an "early return".

You can do early returns in Emacs Lisp the same way you do break and continue, using the catch/throw facility. Usually simple functions can be structured so you don't need this – it's most often useful for larger, deeply-nested functions. So for a contrived example, we'll just rewrite the function above in an early-return style:
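A sign function written with catch/throw early returns (my own example):

```elisp
(defun my-signum (x)
  "Return -1, 0 or 1, using early returns this time."
  (catch 'return
    (when (< x 0)
      (throw 'return -1))   ; early return
    (when (> x 0)
      (throw 'return 1))
    0))
```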

try/catch

Emacs has a different facility for real error conditions, called the "conditions" system. Going through the full system is out of scope for our primer, but I'll cover how to catch all exceptions and how to ignore (squelch) them.

Here's an example of a universal try/catch – in Java terms, a try with a catch (Throwable t) clause – using the condition-case construct:
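The do-something calls below are placeholders for your own code:

```elisp
(condition-case nil      ; nil means: don't bind the error to a variable
    (progn               ; the "try" block
      (do-something)
      (do-something-else))
  (error                 ; catches any error
   (message "oh noes!")))
```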

If you want an empty catch block (just squelch the error), you can use ignore-errors:

(ignore-errors (do-something) (do-something-else))

It's sometimes a good idea to slap an ignore-errors around bits of elisp code in your startup file that may not always work, so you can still at least start your Emacs up if the code is failing.

The condition-case nil means "Don't assign the error to a named variable." Elisp lets you catch different kinds of errors and examine the error data. You can read the Emacs manual or Info pages to learn more about how to do that.

The progn is necessary if you have multiple expressions (in C/Java, statements) to evaluate in the condition-case body.

condition-case will not catch values thrown by throw – the two systems are independent.

Classes

Emacs Lisp is not object-oriented in the standard sense: it doesn't have classes, inheritance, polymorphism and so on. The Common Lisp package includes a useful feature called defstruct that gives you some simple OOP-like support. I'll walk through a basic example.
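Here's the setup I'll use: a person struct (my own toy example). defstruct generates the constructor make-person and the field accessors for you:

```elisp
(require 'cl)    ; defstruct lives in the Common Lisp package

(defstruct person
  name
  (age 0)        ; fields can have default values
  (height 0.0))

(setq steve (make-person :name "Steve" :age 41))
```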

Java may suck at declaring constructors, but Emacs Lisp makes up for it by sucking at setting fields. To set a field in a struct, you have to use the setf function, and construct the field name by prepending the structure name. So:
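For a person struct with name and age fields:

```elisp
(setf (person-name steve) "Steve")   ; steve is a person instance
(setf (person-age steve) 42)
```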

The Lisp one doesn't look too bad here, but in practice (because Elisp has no namespace support and no with-slots macro), you wind up with long structure and field names. So your defstruct-enabled elisp code tends to look more like this:
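Something like this (the names are invented, but representative):

```elisp
(setf (tetris-shape-rotation-count my-tetris-shape)
      (1+ (tetris-shape-rotation-count my-tetris-shape)))
```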

To fetch the value of a field in a struct variable, you concatenate the struct name with the field name and use it as a function call:

(person-name steve) ; yields "Steve"

There's more that defstruct can do – it's a pretty decent facility, all things considered, though it falls well short of a full object system.

Buffers as classes

In Elisp programming it can often be useful to think of buffers as instances of your own classes. This is because Emacs supports the notion of buffer-local variables: variables that automatically become buffer-local whenever they are set in any fashion. They become part of the scope chain for any code executing in the buffer, so they act a lot like encapsulated instance variables.

You can use the function make-variable-buffer-local to declare a variable as buffer-local. Usually it comes right after the defvar or defconst declaration (see below.)
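For instance (the variable name is mine):

```elisp
(defvar my-mode-widget-count 0
  "Number of widgets in this buffer.")
(make-variable-buffer-local 'my-mode-widget-count)
```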

Variables

You can declare a variable, optionally giving it some runtime documentation, with defvar or defconst:

(defconst pi 3.14159 "A gross approximation of pi.")

The syntax is (defvar name value [ doc-string ]).

Ironically, defconst is variable and defvar is constant, at least if you re-evaluate them. To change the value of a defvar variable by re-evaluating its declaration you need to use makunbound to unbind it first. You can always change the value of any defvar or defconst variable using setq. The only difference between the two is that defconst makes it clearer to the programmer that the value is not intended to change.
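To see the difference (names mine):

```elisp
(defvar my-knob 10)      ; my-knob is 10
(defvar my-knob 20)      ; still 10: defvar won't clobber an existing value
(makunbound 'my-knob)    ; now it's unbound
(defvar my-knob 20)      ; now it's 20

(defconst my-dial 1)     ; my-dial is 1
(defconst my-dial 2)     ; now it's 2: defconst always resets
```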

You can use setq to create brand-new variables, but if you use defvar, the byte-compiler will be able to catch more typos.

Further reading

Emacs Lisp is a real programming language. It has a compiler, a debugger, a profiler, pretty-printers, runtime documentation, libraries, I/O, networking, process control and much more. There's a lot to learn, but I'm hoping this little primer has got you over the hump, as it were.

In spite of its various quirks and annoyances, Elisp is reasonably fun to program in once you get the hang of it. As a language it's not that great, and everyone wishes it were Common Lisp or Scheme or some other reasonable Lisp dialect. Some people even wish it weren't Lisp at all, if you can believe that! (hee)

But it's really, really useful to be able to customize your editor, and also to be able to fix problems with elisp code you borrowed or inherited. So a little Elisp goes a long way.

For those of you learning Emacs Lisp, please let me know if you found this useful. If you try writing some Emacs extensions, let me know what you would like to see documented next; I can always do another installment of the Emergency Elisp series if there's enough interest.

Monday, January 07, 2008

Blogging Theory 201: Size Does Matter

I'm always getting criticized for writing long blogs. "Way too verbose! Couldn't he have said all that in two paragraphs?" Not everyone feels that way, of course; lots of people tell me to keep doing what I'm doing. But the size critics are doggedly persistent. And I don't think it's just people who are slow readers. Even friends of mine will sometimes advise me to trim my entries down, which is a surprise, since I thought most of them would have picked up on the cause and effect relationship between blog length and popularity. Evidently not!

So, like, let's get this out in the open: I'm doing it on purpose. Yes, sure, I could do with an editor (the people kind), but only if said editor were on board with long blogs, because that's the kind I want to write.

In short, I think long blogs have better survival characteristics: greater reach and greater impact. And I've decided to celebrate the august occasion of the 1000th kneebiter publicly maligning my style by explaining why I do it. And yes, it'll be long. Set aside at least 20 minutes to read this thing. You've been warned!

The Expectations Problem

Let's start with the obvious. People expect blogs to be short – at least, shorter than mine. They expect that because it's pretty much how everyone does it. Short entries, and frequent. Here's my cat today. Doesn't he look sooo different from yesterday? No wonder so many people hate bloggers.

When I write my long blogs, I'm bucking established social convention, so it's natural that some people will whine that they're too long.

Well, how far off cultural expectations am I? Doing a quick print preview in my browser shows that my last entry, formatted at about 14 words per line (typical for a printed book), weighs in at about ten pages. So it's roughly essay-sized. I'm not talking about those toy five-paragraph essays they made you write in high school. I'm talking about real-life essays by real-life essayists. Real essays can range from three pages to 30 or more, but ten pages is not an unusual length.

If I were attempting to publish these entries as books, publishers would laugh at me. They're way too short to be books. Sure, I could bundle them, but that's beside the point. The fact is, two different real-world audiences have entirely incompatible views on what the proper length for my writing should be.

Trying for Essays

I like to shift between writing articles and essays. The two overlap to some extent, in that an "opinion article" can be essay-like. But an essay attempts to be richer and deeper than an article. Essays can take all sorts of forms – prose, poems, stage plays, screenplays, short stories, even songs. And yes, they can take the form of blog entries.

Essays have different goals: some introduce new ideas, some aim to change minds that have been made up, some try to rally people to a cause, some just poke fun. Regardless of the goal, I think what unites them as essays is that they strive to imprint the reader with an idea, some hopefully unforgettable perspective, even if the reader doesn't necessarily agree with it.

Essays might use humor to endear you, or satire to shock you, or storytelling to entertain and lull you, or logic to convince you, or rhetoric to persuade you, but in the end they're all trying to imprint you with a little piece of the essayist's personal perspective on life.

So we've established that my longer entries are for the most part essays in blog form, with nary a cat picture to be found. I think the appearance of my entries in feed-readers alongside cat pictures and other non-essays is a big contributor to why so many people feel they're too long. If I instead herded them off into a page titled "Essays", as essayist Paul Graham does, then my guests might arrive with more appropriate expectations.

But let's face it – that's more work. Blogs are the closest thing we've got today to a ready-made, turn-key, high-availability essay publishing system, one that permits comments, subscriptions, biographical links and the other trappings you'd expect. It's not ideal – I even talked about this in my first-ever Blogger entry, but none of the issues I raised then have been resolved, doubtless because most people don't write essays, so there isn't a pressing need.

Blogging's the best medium I've got today, so that's where I publish my essays.

Amusing true side-story: I met Paul Graham at Foo Camp last summer. After his crowd of admirers had dispersed on the first day (he's pretty famous), I came up and introduced myself. He was very nice and polite, and he was even kind enough to venture: "I've read some of your ...essays." He said the word essays with this funny pained look on his face, as if he'd just swallowed a gob of wasabi and was trying to play it off like nothing was wrong. I think he meant well, but that expression was just priceless.

I already knew my work wasn't for everyone. :)

Anyway, there's more to the long-blog problem than mere expectations. You can still make a valid argument that my entries are too long even for essays, or at any rate for the material I'm covering or the points I'm making. And I'll still disagree with you. Let's see why.

Blowing the Cache

So, I have this pet theory I'm going to foist on you. It's probably total hokum, and someday I may be proven as wrong as Lamarck, but for now it's a hypothesis that fits the data pretty well.

First, let me tell you what the pet theory is about. I talked about it a little in an essay I wrote in 2004, You Should Write Blogs. In the essay I outlined some unexpected behavior of an essay my friend had written and circulated at Amazon: nobody read the thing, but somehow a year later everyone knew about it, and its core message had been imprinted on everyone in the company, up to and including the executive staff.

In the intervening three years since I wrote that essay, my own blog has taken off to a level that can only be described as absurd. I've been lampooned in web comics, discussed endlessly on Reddit, Slashdotted, invited to Foo Camp and various big conferences, approached about writing books, recruited constantly, and heckled mercilessly by my coworkers, all of whom are smarter than I am, both technically and also in the sense that they don't make public asses of themselves once a month.

It's undeniable that I'm doing something right, at least in terms of reach. My blogs may or may not be any good, but they're widely read. So are they really too long?

Well, people always tell me: "Steve, you'd be doing yourself – and us – a huge favor if you just made your entries shorter." So I try it now and again, and I've observed a correlation between blog size and splash size. It's as if you're all in a pond, and I'm throwing a rock into it. Bigger rock, bigger splash.

I think you can actually stretch that metaphor one more level. I think if I throw in a sufficiently large rock, it'll crush you, which for most of you is an undesirable outcome. The rock needs to be big enough to splash you and get you all wet, but it shouldn't kill you.

To translate that bizarre thought into non-metaphorical terms, a blog that's too big will cause "too many" readers to drop off, for some value of "too many". A longer entry means that fewer people will read it immediately, although I'd argue based on experience that longer entries that are worth reading will ultimately achieve a wider audience. It just takes longer for them to make the rounds - sometimes months or years. But there's a tradeoff there. It can be useful to make a big splash all at once, in the style of Gladwell's Tipping Point.

So we've got a tricky number to solve for. Very short entries get ignored; I've tried that. Longer entries can make a splash but may not have broad long-term staying power. Very long entries tire people out, so they can take years to make the rounds. What's the right length for making a big splash the day it's published?

That's where my pet theory comes in. Oh, you may laugh! Ha, ha, you might say! But I, who know absolutely nothing whatsoever about Cognitive Science, have a pop cog-sci theory about the right length for an essay.

The right length for an essay, I believe, is exactly "one sitting": no more, no less. You should be able to read and absorb it fully in one go, with no breaks. Moreover, after you finish, you should be at the point where you need to take a break. You should want to stand up, stretch your legs, grab a coffee, play some foosball, get your mind off everything for a while. If you don't need a break after the essay, then it wasn't long enough.

I suspect the maximum length for a "sitting" is 50 minutes, given various government studies I learned about while I was in Navy Nuclear Power School. They determined that people absorb information best (and concentrate best) in school in 50-minute intervals with 10-minute breaks. They'd figured out all sorts of other stuff too: use outline form, make the students copy the outline into a notebook, repeat everything exactly 3 times, and so on, all ways they'd found that lead to better retention of the material. But the 50-minute thing seemed intuitively reasonable. Those ten-minute breaks were indispensable.

I actually think 50 minutes is the absolute upper bound for a sitting; the optimal duration is probably lower. But the takeaway here is that one consequence of my pet theory, which we'll get to shortly, is that the ideal length for a blog is measured as a duration, not a word count.

Of course, that presents a problem, because duration is a function of word count and reading speed. I need to account for different personal speeds, and some folks like to read slowly. Heck, some don't even read at all. It's one of the amazing miracles of the internet: write-only people. They can't read but they somehow find a way to write. You see them commenting all the time in my blogs: "I didn't actually read your entry, but allow me to comment on it all the same..." Lovely.

So I need to aim for something lower than 50 minutes, to make it possible for average readers, and then hope that for fast readers I'll still have blown their entire page cache. Being a fast reader is actually a disadvantage here.

Figured my pet theory out yet? I'll bet some of you have!

Stevey's Brain-Cache Theory of Essays

My pet theory is predicated on the hopefully obvious axiom that our brain is a computer. As a computer, even though it's structurally different from a von Neumann machine, it's still constrained by the same laws of physics. Hence, it probably has a multi-level cache.

I have some even more farfetched pet theories about the architecture of this cache, but whatever the architecture, caches all share the property of being limited short-term storage.

I really want to talk more about how I think this cache works, and I keep deleting paragraphs about it. I'm in a bind: if I talk more about it, my pond-boulder will get too big and crush people. But if I don't, then I'll be accused of grossly oversimplifying.

So it goes. Let's oversimplify.

Your brain clearly has at least two obvious caches: your short-term memory and your long-term memory. Your long-term memory is more complex than a cache, but behaves like a cache in the way it forgets things that aren't refreshed periodically.

The Wikipedia entry on short-term memory, linked above, says short-term memory lasts about 20 seconds. And long-term memory, of course, is persistent and can last up to a lifetime.

My pet theory posits the existence of at least one second-level cache in your brain that holds data for a while before deciding whether to commit it to long-term memory. That "while" varies but is at least 10 to 15 minutes.

Of course, writing the theory down like this makes all the holes in it pretty obvious, and I'm way too lazy to try to patch them all up here. Following the best academic tradition, I leave the hole-patching as an exercise for the reader.

In my pet theory of the brain, such as it is, your second-level cache keeps track of all your sensory input for the past few minutes. It also serves as a scratchpad area for doing computation: if you're trying to follow a complex argument, you need to construct a graph: idea A leads to B and C, C implies D, etc. Even following a scene in a story requires a graph and some computation: think of a bank robbery movie scene with five people involved. Following its progress requires a little short-term memorization and some deduction, and your mind does this for you automatically for situations up to a certain low level of complexity.

What about bigger arguments and more complex scenarios? Well, if the graph is too big to fit in your second-level cache, then your brain needs to swap some ideas to "disk" (your long-term memory). This is also known as "learning stuff." Painful, I know. I've been there.

So my pet theory is that if you want to make a lasting impression, then you need to fill up the reader's second-level cache and start blowing pages (cache elements) out into their long-term memory. If you want to imprint them with something memorable, you've gotta flush it to disk. To fill the cache you have to create a story big enough to fill their short- and medium-term memory and start spilling over into long-term memory, at which point you're guaranteed that some of it will stick. It won't be just another funny blurb that your reader sees, laughs at, and immediately forgets.

This obviously entails some effort on the part of the reader, even if they're having fun. You watch a 2-hour movie and you'll be exhausted (or at least ready for a break) because your brain is busy swapping stuff out. It uses more energy because of those pesky laws of physics that led to the cache structure in the first place.

I think this whole idea scales up to N-level caching; if you write a whole book about something, and the reader manages to get through it all, then you've probably left them with a lot more long-term memories and patterns.

But a good essay is usually just trying to get one idea across. One idea, one big rock in the pond: one sitting, one story. That's my theory. And it's the thesis of this essay, with the conclusion being that the relationship between blog length and popularity is actually causative.

"There's one thing in particular that struck me..."

In the spirit of filling your second-level cache, I'd like to offer you just one detail of my pet architecture: I think our caches are only partly LRU; I think there's some randomness involved in which pages your short-term memory chooses to discard when you're interrupted with new data. In fact, if anything, they may be MRU (Most Recently Used), given that when you're having a conversation with your friend and you both get interrupted, you often can't remember the thing you were just now talking about, but you can both remember things you talked about a few minutes before.

If that's true, then the stuff that gets swapped to disk is probably different for every reader, and may be somewhat random. In other words, everyone comes away with some different memory of the essay. It's likely also in no small part a function of how well any given turn of phrase is a match for the reader's experience. So a good essay needs to try to say the same thing in a bunch of different ways, hoping that whenever the reader's brain decides to latch onto something that "strikes" them more or less permanently, it's hopefully related to the core message of the essay.

So everyone gets something different. But I think that's a good thing. If the readers come away thinking about it at all, the essay has succeeded.

Wrap-Up

There's more I could say about my style. Expectations and page-caching theories aside, I think there's entertainment value in a good story-essay; you can't really weave in good jokes without some supplemental material, for instance. And I like to tackle inherently complex topics because they're more interesting, so it's never as easy as summarizing with something as pithy as "Java sucks". It's not that simple, no matter how people want it to be so.

So yes, there's more I could say, but my gut tells me I've reached the one-sitting limit. So I'll wrap up here.

I know I've oversimplified. I know I have no business talking about cog sci when I've never even read a book on it, unless you count Gödel, Escher, Bach. And I know that even if I'm right, I may still sometimes overshoot the ideal length significantly.

But I'm convinced, and I hope you are now as well, that my blog entries are successful because of their length, and not in spite of it. It's OK if you don't agree with my pet theory as to why the longer ones are more successful; I've certainly got nothing but intuition backing me up here. But by getting you to disagree with it, I've left my mark. At least you'll remember the idea now. Consider yourself imprinted. This one's on the house!

At this point I recommend stretching your legs. Take a walk, get some fresh air, let those disk drives cool down. You can ponder this stuff later. It'll still be there in your brain, like it or not. I guarantee it.