Artisanal Software

This page holds a variety of notes about the craft of software design by
Tinderbox designer Mark Bernstein, originally written for a plenary lecture at OOPSLA. We are accustomed to
viewing software as a means to an end, a mass-produced industrial tool that ought to be beneath our notice. We are
told that software ought to be intuitive, invisible, and free. This is wrong.

Software is the characteristic art of our age. Beautiful software is not an absurdity or
an oxymoron, and interesting software is not a defect. Software should express personality,
attitude, and emotion.

I’m working on a book about Software Aesthetics. Some of these ideas are pursued in
The Tinderbox Way, and I hope some are reflected in
Tinderbox.

We software creators woke up one day to find ourselves living in the software factory. The floor is hard, from time to time it gets very cold at night, and they say the factory is going to close and move somewhere else. We are unhappy with our modern computing and alienated from our work; we experience constant, inexorable guilt.

I'm going to talk about why we in computing seem unhappy, and how we might fix it.

Photos: Jake Hildebrandt, Lady Raven Eve

Why do I say we are unhappy?

Everyone has pretty much the same computer. Your computer is my computer. Nobody is really very happy about their computer; the very best minds in the field walk around with old Dells or MacBooks, just like your grandmother. Almost everyone has pretty much the same software.

Our tools and our rhetoric are often concerned with control of cheap, interchangeable labor: reuse, programming teams, patterns, workflow, specifications.

Perhaps in consequence, people aren't eager to make software or to study computer science. Enrollments are down, even at the best schools.

Our scientific conferences are filled with papers that focus on incremental improvements observed when asking unskilled laborers (whom we call "novices") to perform office chores. We call this "usability".

Scholars interested in arts and humanities computing are strangely obsessed with box office and weirdly uninterested in making software, or making meaning.

Our tone is often defensive. When we talk about the Web, about weblog journalism, about wikis, about computer games, we seem always to be apologizing or promising.

We sound unhappy. Our best Web discourse (Tim Bray and John Gruber and Joel Spolsky and Scott Rosenberg, for example) focuses relentlessly on what a few vendors are doing, and often pleads with those vendors for small favors: new DRM policies for our iPods, or better perspective in the application dock. Our worst discourse (usenet, slashdot, valleywag, the comment section of any popular tech blog after comment #12) is consistently puerile; it's often hard to imagine that these are written by scholars, scientists and engineers, and not petulant children.

I think we all woke up one day to find ourselves living in the software factory. The floor is hard, from time to time it gets very cold at night, and they say the factory is going to close and move somewhere else.

Photo: Lewis Wickes Hines, NYPL 91PH056.029

What might cheer up the software world? The usual answer is: piles of money. Money is nice. You can exchange it for goods and services! But I think we want something else:

To follow knowledge like a sinking star,
Beyond the utmost bound of human thought.

That's what these machines are intended to do, and that's what we wanted to do. But this isn't a sentiment that fits well with Postmodern Programming. It's not really congruent with the early Modern purism of Dijkstra, or with the late modernist abstractions of UML. It speaks to the thought of an earlier age, refracted through the lens of our greater knowledge and our changed circumstances.

I see hints of similar yearnings all over the place, from steampunk to cosplay, from art to architecture to fiction. It's not just nostalgia or dressing up. It's not something we're borrowing from the arts. We are the arts.

I think we can learn a tremendous amount by pulling back from The Enterprise and putting our skills in the service of individual knowledge workers, real people doing important work.

Photo: Aldo Murillo

In this series of posts, I'm going to speculate on what's really wrong and suggest how we might begin to change things. The short version: The Arts & Crafts movement failed in consumer goods, but it could succeed in software.

I'm going to be using some slides from my upcoming talks on NeoVictorian Computing in this series, but my argument here will be quite different from the talks. Don't worry too much about spoilers.

The laborer feels himself first to be other than his labor and his labor to be other than himself. He is at home when he is not laboring, and when he is laboring he is not at home.

Who would call a car company's payroll system, "home"? That was the task of the original Extreme Programming team. In software, much of our work and many of our dreams focus on Enterprise. Most of the rest concentrate on tiny tools that are expected to appeal to a mass audience, or grand tools — Office, open or otherwise — that can be imposed as standards on the world of work.

This isn't working. We've been stuck for years, the backlog never goes away, and we fight the same old fights with a new generation of management. The Enterprise is too complex, too turbulent, too confused, to be a fruitful place to study the craft of software. We don't know when it's right. Yes, we sometimes know when it's wrong, when we can't even deliver the software. But what is success? Praise from a self-interested manager? An incremental improvement in corporate throughput? A pile of surveys filled in by our students? A nice writeup in The Journal?

I propose that enterprise software is a hard problem that we can understand only after we solve an easier case, one that lies close to hand. Before we can tackle the enterprise, we need to write software for people. Not software for everyone, but software for you and for me.

When I say that software is "built for people", I don't mean some fuzzy notion that the software is intuitive or "friendly" or that it can be sold to millions of consumers. I mean, simply, that it offers some specific people three specific virtues: commodity, firmness, and delight.

It helps to get stuff done: not filling out forms or filing pictures or retrieving records, but the endlessly difficult, challenging, everyday stuff of understanding what is going on around us.

It doesn't break down in use. That doesn't mean it never fails: failure is part of software, just as brushstrokes are part of painting. Firmness means we can trust it, the way we trust the handle of a good hammer — including the knowledge that even good tools can crack.

And, sometimes, it makes us smile. Or it makes us think.

Photo: Natalie Tuke, Brooks Institute, Fiji

We can know when personal software is right, in exactly the same way we know when a theory is right, when a painting or a sentence is right. We don't need clinical studies or usability labs. We don't need box office numbers. We don't need to see the reviews from the newspapers or the VC's. We know.

And, yes, we might be mistaken. We sometimes deceive ourselves. It happens. At times, an audience helps us know that we're really right, and on occasion the approbation of our students or the cheering accumulation of sales might reassure us. But these are secondary; we have to start by knowing what is right and true, because we confront too many crucial choices to work from focus groups and popularity polls.

In the beginning, so our myths and stories tell us, the programmer created the program from the eternal nothingness of the void. Whether it is Stallman typing teco macros and wearing out the shift keys; Chuck Moore typing backwards at Kitt Peak; Goldberg, Deutsch, Robson et al. in the parclands of California; billg hunched over Allen’s Altair emulator; Bill Joy’s VAX crashing and deleting vi multibuffer support for the next ten years; Gabriel doing The Right Thing at Lucid; or Larry Wall doing whatever....

This is the language of romance and Romanticism, honoring the individual mind and body and its struggle against the elements and the Gods and the crashing VAX. This is the language in which we programmers secretly believe.

Photo of the photographer's wife writing in her diary in Kerala, India. By Erik Palkhiwala

We're told we ought not to believe it. We're told that programs come from teams, from corporate departments. Even those wild-eyed romantics, the agilists, imagine that the Customer is the ultimate arbiter.

Customers are as likely as anyone else to get things wrong. Yes, the customer may know their business, but customers are no less subject to self-deception than we. Businesses convince themselves all the time that they are making progress when they are not. Managers convince their bosses all the time that they are making profits when they are not.

Steampunk flat panel display by Jake von Slatt

Why do I call this NeoVictorian Computing? "NeoVictorian" is a handy and imprecise placeholder. I'm thinking of a very long 19th century that runs, roughly, from Sir Joseph Banks to Heisenberg, Pauli and Dirac. From Isambard Kingdom Brunel to Louis Sullivan. From Austen to Ibsen. From Realism and Romanticism through Impressionism. What connects all these threads?

In part, a belief in right answers, in the power of the artist or the designer to find true solutions to questions. Before this era, these questions were intractable. Later, it seemed there were no right answers at all. Now, chastened by our modern knowledge of the limitations of knowledge, of reason, of ourselves, we know that those bright certainties are not as certain or as real as we once thought. But we've spent our time mourning, in depression and cynicism and irony, and we've all been taught that, if truth is a slippery and uncertain thing, lies are very real.

We long for things that are ours, even if it's just our peculiar preference for an arcane latte at our favorite coffee house, the one where we have our own mug.

NeoVictorian Computing isn't about nostalgia for brass fittings and kid gloves, but rather about an underlying belief in true answers and true designs — even though we understand, now, that sometimes truth is situated, or contingent, or just a cigar.

Our shop is configured (admittedly somewhat unconsciously) around these principles. We aren’t a shop designed to build and maintain the sort of large enterprise software Mark discusses – we could never compete with the money and resources a McKesson or SAP or Oracle bring to the table, and aren’t particularly interested in doing so. Rather, our typical project is designed to meet the needs of one to maybe a dozen people. These are people who have a unique problem that no off-the-shelf software handles well.

The NeoVictorian Computing impulse is Romantic, but it is also grounded in Realism. Not the realism of endlessly replicating variations on the same Human Interface Guidelines to keep Look and Feel
from escaping from their cages, but the realism that asks us to look at things as they are and as they should be, and to do something about the difference.

One of the impulses behind 19th century realism was to tell the stories of people whose lives had not, previously, been deemed story-worthy. Another was to describe the world as it is.

Part of Realism is addressing the world as it is. Data are complicated; when we reduce everything to a hierarchy, we're distorting the world. Everything is intertwingled; when we pretend that clear signage will keep people from being lost or being confused, we're distorting the world. The audience is smarter than we are; when we pretend that real people can't understand inheritance, we're deceiving ourselves. Software that distorts the world can be a lie.

photo: Lady Raven Eve, Singapore

Honest materials are important to Realism; we show the painter in the studio because that's the painter's world, filled with clutter and turpentine. Realism prefers honest wood to cheap gilding. If something needs to be clay or plastic, let it be what it is: don't pretend it's sterling. Don't fold your napkin into a pheasant, and don't hide the structure behind layers of sham.

All the fuss about the icon dock's perspective and reflections, all the brushed metal windows and all the skinnable apps, they're all dross. It's simulated kitsch, so it does no particular harm, but it's a game. Delicious Library has terrific wood shelves, but it's just a list of your stuff.

Limitations matter to Realism, too. We may struggle against them, creating paintings more real than photographs. We may accept them, knowing that brushstrokes and chisel marks are part of art. Either way, we are aware of them, and we want the viewer to know what's going on. Realism doesn't pretend to be a friendly paper clip: if you have got something to say, say it.

National Museum, Baghdad, 2003. photo: D. Miles Cullen, US Army

Finally, Realism accepts that real people have real work to do. It's not merely filling out forms or looking up facts: these are terrific things to study in the usability lab, but they're not what people need to do. People need to rebuild wrecked museums and wrecked families. They need to make sense of lymphoma, or partial differential equations, or RFC 822. People find themselves in astonishing, unexpected situations: one day you're a travel writer or an unemployed Republican protégé, and tomorrow you're going to be a minister in the Iraq reconstruction. How can you learn what you need to know, in time?

In 2003, most of those people failed. Their masters may have been scoundrels, but they were not. They had a job to do, and it appears to have been a job they could not have been expected to do. This is a task for engineers: to build machines that let people do what they can't do with their bare hands.

Today, my computer is your computer. We all have, pretty much, the same computer. Yours might be a year or two newer. Mine might be green.

photo: Heidi Kristensen

Today, your software is my software. Some details might change: maybe you use Mellel and I use Word, or you use Excel and I use Numbers. Small differences matter. But it's all pretty much the same. People expect that they won't need to read a manual, that everything is just like it's always been and that nothing ever changes much.

I want this to change. I want a software world where we might again enjoy new software that does things we couldn't do before. I want software that fits specific needs. I'm a software professional; why should I be using the same tools as a sixth grader, or a professional photographer, or an interior decorator?

Why do we have so little variety in our software? One reason is that we ask it to do what it cannot, and we expect to do too little.

We should expect to learn. Sophisticated tools require study and effort, and they repay that effort by letting us do things we could not do otherwise. Calculus is a lot of work, but you can't understand physics or the stock market until you understand derivatives. Learning to draw the figure is a lot of work; once you do the work, you can draw.

Users and software designers should embrace personality and style. Software made by committee must adhere to the committee's standards, but software made by people and made for people may be infused with the creator's personal style just as it is adapted for the user's personal needs.

We should accept failure. Software fails in many ways. We have tried to change this, we have made great progress, but it is the nature of software to fail.

We once thought that there should be no errors, that errors were a sin. Errors are natural. I suspect we routinely spend $100 in development to catch errors that would cost our users $10. I know we routinely spend thousands of dollars to forestall cosmetic errors that will cost our users nothing save transient aesthetic annoyance. Operating systems are not the appropriate standard; since everyone uses the operating system all the time, and since operating system failure probably collapses the user's entire house of cards, it makes sense to over-engineer operating systems.

In the 19th century, British railroads went bankrupt building the rail system they wanted — with level grades and safe crossings and solid infrastructure. American railroads built cheap and fast, cutting corners with abandon. They accepted that they'd need to rebuild the worst parts in ten or twenty years, while the British rails would still be in fine condition a century later. The American answer was the right answer: yes, some American routes were built and rebuilt, and American rails suffered accidents and breakage, but some of those super-engineered routes are now little commuter spurs. Some are abandoned.

photo: Nathaniel Luckhurst

It's hard to know what is a defect, and what is merely a surprise. The cult of usability has enshrined the belief that anything a novice doesn't expect is a defect. If we're just interested in how many copies we can sell to novices, usability matters. If we're interested in utility

To follow knowledge like a sinking star,
Beyond the utmost bound of human thought.

then novice usability is a smaller component. I don't care whether the perspective of the application icons is consistent: I care whether they give me the information I need and offer the affordances to help me learn more and do more.

In the 19th century, Arts And Crafts sought an alternative to the seamless simulacra of mass production. Instead of using uniform dishes from The State Dish Factory, couldn't your dishes be made for you, and mine for me? There is no free lunch: if we want dishes made just for us, we accept that they'll cost more. And since the point is that the dishes are made for you by people, not by the State Dish Factory, we accept (and enjoy) things that come along with human involvement: they won't all be identical, we might sometimes see fingerprints, and sometimes we might detect the trace of the particular maker and their particular situation on a particular day.

It didn't quite work with Arts and Crafts, though plenty of artisan work continues today. The cost difference was too large. Worse, distribution was impossible with manual office procedures: it was barely possible to fill individual orders for identical commodities, and handling unique cases would have required armies of order clerks.

But software is the stuff of thought. We customize it all the time, through preferences and scripts and macros and plug-ins. We can, and should, embrace the workshop, and learn to treat our software as an artifact and also as material we shape to fit our needs.

It's made by people; it will have brushstrokes and thumbprints. It is what it is: it expresses its nature instead of hiding behind brushed metal shams. It is made for us, and if the maker did not always anticipate everything, we respect the effort and the intention, and we take some responsibility for picking up the pieces.

Mark Bernstein, the Tinderbox guy, is working on a series called NeoVictorian Computing which, while perhaps not quite as elegantly-turned-out as [Stephen] Fry’s œuvre, digs a little deeper. He is making a direct appeal for software to adopt the principles of the Arts and Crafts Movement. Since I am writing this sitting in a Stickley chair in a wooden house in the A&C style, I’m inclined to be sympathetic. Mark’s message resists summarization, so I won’t try.

The classical ideal was the unbroken column and the functional subroutine: call it once with the proper arguments and receive your answer. I remember programs that were vast stacks of cards, 3000 per box; I wrote a game that ran to fifteen boxes and had thousand-line modules.

Over the span of a generation, the size of our pieces has shrunk and the parts have proliferated. Modular programming suggested that subroutines fit on a page; in John Hsoi Daub's excellent PowerPlant framework (used in Tinderbox), it's quite common to find methods like LPOP3Connection::GetOneMessage that run to 60 or 80 lines of code. The smallest method body in LPOP3Connection is four lines, and that's unusual. All of our object code used to look like this: big objects with big methods.

We don't write this way very much nowadays. Agile Methods have something to do with it. So does refactoring.

A core cause, though, is a change in materials; just as the production of cheap cast iron transformed Victorian architecture, the discovery that method calls need not slow the program transformed the way we think about objects. And then there's test driven development: small objects and small methods are much easier to test, and the advantages of pervasive testing for avoiding reversion bugs are compelling. (I've never been a fan of methodologies and I hate anything that slows down coding, even type declarations. But Test Driven Development really has transformed my code work.)

If you're going to have lots of small parts and lots of small methods, closeup views are going to seem complicated. If you're going to build an office building or a railroad station out of small pieces of iron and steel, you're going to have lots of members and lots of rivets.

You might try to hide the complexity. But NeoVictorians ask, "why?" The parts are there. The rivets are there. Covering them with a facade makes them seem simpler, but if you need to poke holes in the facade for testing and to get the objects to do what you want, the facade becomes a sham -- a frilly cover that's supposed to hide the legs but just attracts dust.

The visuals from my OOPSLA talk on NeoVictorian Computing are now available from my Lecture Notes page. (I may try to provide audio as well in the future; because this talk includes extended sections in which the visuals act in counterpoint to the lecture, it might be hard to follow from visuals alone.)

We fly across continents to vast technical conferences. We meet in glittering ballrooms, filled with our colleagues. We are, it seems, miserably unhappy.

Peter Merholz is president of Adaptive Path. He does User Experience. He went to DUX last week. It's a user experience conference. He should be happy as a clam. Instead, he's twittering in the middle of the second session:

The moment an academic takes the stage, the conference screeches to a halt.

Dave Winer has written a lot about the failures of conferences over the years. He just wrote about why most conferences suck. Winer thinks it's because "we don't have enough to do." He once thought the answer was the audience-centered unconference, but is no longer so confident:

At first the joy of finding out that everyone has something to say is overwhelming, that was the first two BloggerCons for me. But after that, it wasn't that big a thrill, then it mattered more what they had to say.

What's the problem here? Part of it is inexperience; academics, especially, aren't trained in presenting. And where a corporate researcher might be on stage once or twice a year, academics meet an audience every day; it's nothing special, and the goal isn't to take risks and change minds. The goal is to get through the class.

Part of it is timid programming of safe, easy topics. At OOPSLA, when John McCarthy stopped to talk about the afternoon he wrote cons, nobody was reading their email.

There were a thousand elite software people in the room, and they all knew that McCarthy was talking about an afternoon in which he happened to invent garbage collection, dynamic languages, and pretty much all of modern programming. It was just an afternoon with nothing better to do.

But this only works because McCarthy (and OOPSLA) trusts us to know what cons is and why it matters so much. That trust is in short supply.

Still more fundamentally, the mass of guilt that weighs upon the field deadens our conferences. That guilt arises from the divergence of what we like from what we think we should like. We enjoy exciting new systems that do what nothing else could do; we think we should like systematic demonstrations that this widget lets students do a task 5% faster than that one. We enjoy daring prototypes and agile development; we think we should be planning our work and proving correctness. We enjoy astonishing code; we think we should write code so clear that our most mediocre students (and the management team) will grasp it without effort.

Guilt inhibits our joy in doing our work, and because we can't admit to that joy — we cannot be seen to exult in enjoying what we know to be wrong — we find our speakers addressing us in slow, measured tones about slow, measured studies.

We're setting the table for Tinderbox Weekend Boston this weekend. It's going to be great: everything from Tinderbox for plotting to using Tinderbox on location when the film you're shooting is not exactly what you had in mind.

We always make a few extra copies of the handouts as a Remote Membership package. It's not designed (unlike most everything else we do at Eastgate) to be a great deal. It's a way to support Tinderbox Weekend if you can't come. You get a copy of The Tinderbox Way, a CD filled with sample files and presentations, and your very own nametag lanyard.

For this batch, there's a little bonus: I spent an afternoon behind the microphone and worked up a screencast of the slides and audio from my recent OOPSLA Lecture on NeoVictorian Computing. So, if you'd much rather watch and listen to the talk on your iPod than read about NeoVictorian Computing here, now you can.

So to that guy who was sitting next to me, typing madly and muttering to himself during a really interesting session: I wish your batteries had died, your network connection had dropped, and your pen had run out of ink.

I suspect, though, that sitting around the conference table on the beach at Troy, the wily Odysseus spent a fair amount of time composing witty dactyls. I bet that plenty of salesmen at the proverbial annual shindig spend the sessions planning just how they're going to close the deal, make the quota, or spend the night on the town.

There's a reason we aren't in the moment; we aren't paying attention, because we can't admit to enjoying the parts of our work that delight us, and we don't want to admit that the rest is slave's work, unredeem'd.

At OOPSLA, Fred Brooks, Jr. (The Mythical Man Month and a legendary software manager) showed a slide that compared two kinds of hardware and software designs: those with fan clubs (some of the examples are his, some mine):

Seymour Cray's computers

MacOS

LISP

EMACS (or vi)

ruby

and those without fan clubs

IBM computers

Windows

COBOL

Microsoft Word

javascript

He observed that, pretty much without exception, the designs with fan clubs have conceptual integrity because they are chiefly the work of one or two people.

Now, it's also interesting to note that some of the unclubbable designs were profitable, and some of the clubbable designs were not. (Sometimes, you get intense fan clubs for products that are otherwise hapless and hopeless: the Amiga computer, say, or languages like APL or FORTH. I shared a cab after OOPSLA with a SNOBOL fan; that takes you back...)

What I want to point out is, whatever their success in the marketplace, the designs that inspire devotion are interesting and important because of the passion they create. We're much too quick to assume that the market is wise. It's not: bad timing or bad marketing or bad execution can matter, too.

One of the defining properties of NeoVictorian Computing is inspiration: the conviction that a design can be, and is, right. This need not mean the arrogant conviction that the design is perfect, nor does it expect or require a proof of optimality. And we don't require a supernatural insistence that some divine spirit instills the design into us. All we need is the sense that this is the best we can do; there might be other answers just as good, but this is your best.

You can't get that in a committee, by definition. And I think this property of conviction, integrity, and passion -- this quality without a name -- is of fundamental interest in a way that friendliness and usability and standards-compliance and fashion are not.

Conventional designs and specs tend to be too general and too elaborate. You start out with a data structure to hold a couple of People. Someone suggests that it would be better to have a vector, so you could have more than two if you need to. Then someone else wants to put it in a MySQL database. And then we're gonna run it on a remote system, on rails. It's easy to add generality to a spec. It makes you look really smart, especially when someone else is going to do the coding. But too much generality too soon makes the code age prematurely; you can get old, brittle, confusing code that looks like it's yellowed with age, even though it's not even finished yet.

Much of the time, this generality won't turn out to be needed. Even if it is needed, building it later means that you defer the development and testing cost, and that saves money. In the past decade, as we came to understand refactoring better, we have become much more comfortable with deferring these costs.

In the NeoVictorian workshop, YAGNI is balanced by the opposing sentiment: Go Ahead, You're Entitled.

Also known as 'Go ahead, live a little!' It's what your aunt might tell your mom when she can't decide if she should have some apple tarte with crème anglaise and a cute little glass of Spätlese, when mom thinks she really should go fold the laundry.

Sometimes, it's worth building the good generality early, because it will open new vistas and new opportunities. You might, for example, see a place where you could get by with a pair of People but the code would be really elegant if you went right ahead and used a Composite. GAYE says, "if it's really cool, go ahead and invest now." It makes the code better, opens new vistas, new opportunities. It lets you try new things, because you've got a better, more general starting point for quick experiments.

If it doesn't work out, you can always refactor back to the simpler YAGNI plan.
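As a concrete sketch of that trade: where a plain pair of People would satisfy YAGNI, the Composite lets a group stand anywhere a person can. The names here — Party, Person, Group — are hypothetical, invented for illustration rather than drawn from any particular project:

```python
from dataclasses import dataclass, field
from typing import List

class Party:
    """Component: a person or any grouping of persons."""
    def names(self) -> List[str]:
        raise NotImplementedError

@dataclass
class Person(Party):
    name: str
    def names(self) -> List[str]:
        return [self.name]

@dataclass
class Group(Party):
    members: List[Party] = field(default_factory=list)
    def names(self) -> List[str]:
        # Recurse: a Group may hold Persons or other Groups alike.
        out: List[str] = []
        for member in self.members:
            out.extend(member.names())
        return out

# Code that once handled exactly two People now handles any nesting:
pair = Group([Person("Ada"), Person("Alan")])
nested = Group([pair, Person("Grace")])
```

The elegance GAYE buys is that client code never again asks "one, or many?" — and if the generality proves useless, the Composite refactors back down to a pair easily enough.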

What's the difference? The guy in the spec meeting is trying to look smart, or maybe just trying to look like he's paying attention and contributing to the team: the excess generality will be water for someone else to carry. The NeoVictorian is probably going to carry the water himself. It might be a little more work to carry that pail, but if it makes our part of the system better, that work is little to ask. Perhaps it will open new vistas, and perhaps it will let us build something else tomorrow, something that will matter.

Worst case, a little GAYEity might be an economic wash. On the one hand, if YAGNI applied then you've spent some development time building something you didn't need. But that time might have gone down the drain anyway, because the alienated coder with no opportunity to build a proud and soaring thing might take the afternoon browsing and brooding, or polishing her resume. Thou shalt not muzzle the ox when he treadeth out the corn, or fence the thoughts of your programmers when they're turning bits and rust to gold.

by Kent Beck

Though we have not found the silver bullet that would cure the software crisis by making programming teams vastly more productive, the last decade has seen a tremendous revolution in programming. The difference: while teams are only incrementally better, individual programmers can do vastly more. The largest programs that we can write have only grown modestly, but the largest programs that one or two people can write -- which is to say, the largest programs that an individual can understand -- have grown dramatically.

The kind of program that was a stunning achievement in 1967 -- a solid and efficient interpreter for an interesting new language, say -- was still a solid MA thesis in 1987. Today, we give that project to the summer intern.

The way this was expected to work was that programming languages would improve, gaining bigger and better abstractions so that programmers could avoid all that detail. The way this worked in practice, though, was not quite what we expected: languages did improve, to be sure, but the key change is that people mastered a new style of programming. It's not just object-oriented programming; it's small methods on small objects.
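"Small methods on small objects" is easier to show than to describe. A hypothetical before-and-after sketch (not drawn from Kernighan and Plauger or from Beck): the same report, first as one procedure, then as a small object whose every method says one thing.

```python
# Hypothetical example of the style shift. Not from any real codebase.

# Monolithic style: one procedure does the filtering, the
# arithmetic, and the formatting.
def report_v1(orders):
    total = 0
    for o in orders:
        if o["shipped"]:
            total += o["qty"] * o["price"]
    return "Total: $%.2f" % total

# Small-methods style: each method is one sentence long and
# names one idea, so the object reads like a table of contents.
class Report:
    def __init__(self, orders):
        self.orders = orders

    def shipped(self):
        return [o for o in self.orders if o["shipped"]]

    def total(self):
        return sum(o["qty"] * o["price"] for o in self.shipped())

    def render(self):
        return "Total: $%.2f" % self.total()

orders = [{"shipped": True, "qty": 2, "price": 3.5},
          {"shipped": False, "qty": 1, "price": 9.0}]
assert report_v1(orders) == Report(orders).render() == "Total: $7.00"
```

The second version is longer on the page, but each piece can be read, tested, and changed in isolation; that's the trade the new style makes.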

Right now, about half my readers are scratching their heads because they're expert programmers themselves and they have no idea what I'm talking about. Small methods?

If you're in this crowd, go to your bookshelf and grab your copy of that wonderful collection, Software Tools by Kernighan and Plauger. You know the one I mean — the critical step in explaining UNIX to the world, one of the greatest examples of tech writing and programming style in history. Look at that battered cover, and remember what wonderful code you learned from it.

Now, open it to a random page — say, something in chapter 5. Read the code examples. Try to resist the impulse to refactor — to simplify the conditionals, isolate the loop bodies, clarify the iteration. If you came across this in code you were working on, you'd say, "there are Code Smells here." But this is monumentally fine code — the very best of 1981.

Want to cry? Grab volume 1 of Knuth. Open it to p. 264: topological sort. If you're like me, you learned about topological sort right here. (Topological sorting is what we do when we've got a path of links in a hypertext and we want to unroll them into some sort of a sequence; it's what happens in Path View in Storyspace and Tinderbox, and I've written it a dozen times over the years.) You won't be able to read the code. This used to be a model of clarity; now, it's like reading the Latin you learned in school and haven't touched since.
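The algorithm itself, in today's idiom, is short. A sketch assuming links arrive as (from, to) pairs — this is Kahn's algorithm, not Knuth's MIX program:

```python
from collections import defaultdict, deque

def topo_sort(links):
    """Unroll directed links (a, b) into one linear sequence."""
    succ = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for a, b in links:
        succ[a].append(b)
        indegree[b] += 1
        nodes.update((a, b))
    # Start with every node no link points into.
    ready = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in succ[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(nodes):
        raise ValueError("links contain a cycle")
    return order

print(topo_sort([("intro", "middle"), ("middle", "end"), ("intro", "end")]))
# ['intro', 'middle', 'end']
```

A dozen lines, readable at a glance; the gulf between this and the 1968 presentation is the point of the exercise.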

Kent Beck is one of the leading voices on the subject of small methods programming. Much of his work is situated in the context of methodologies for programming teams, Extreme Programming and Test Driven Development. This volume is all about coding: writing clear and sensible code to do what you want. It's the clearest explanation of small methods style I've seen.

Small methods programming is especially well suited to NeoVictorian Computing, because it assumes that the coder is free to design and refactor objects on the fly. Small methods, in the end, mean that you have to give developers a lot of design autonomy, so your design has to emerge from the hand of the artisan, rather than being imposed by the grand architect or the design committee.

Several people have emailed me to ask, "what programs do you mean, specifically?" This is a tricky question, of course, because software (like other works of art) carries with it a complex of circumstances and associations. Worse, there's no software canon, no body of work with which we can assume most people are familiar.

One hint, though: famous software that is intensely associated with its specific creator often (not always) has a strong NeoVictorian flavor. Reviewing the bidding, by "NeoVictorian" I mean systems that are:

Built for people

Built by people

Crafted in workshops

Irregular

Inspired

If we ask, "what inspired systems were clearly built by individual people in small workshops?", some things that leap to mind are aspects of Stallman's EMACS, Iverson's APL, Atkinson's MacPaint, Marshall's VIKI, Engelbart's Augment, Berners-Lee's World Wide Web, Kapor's Agenda, Bricklin's VisiCalc, Cunningham's Wiki, Herzfeld and Horn's Finder, Crowther and Woods' Adventure. (I'm sticking to software that's ten or twenty years old, in part because I don't want to get into fights. The list is just a start. Your mileage may vary. Ideas matter: emacs vs. vi doesn't.)

I'm off to Cork for BlogTalk, where I'm planning to talk about NeoVictorian, Nobitic, and Narrative weblogs.

In my earlier Big Setpieces on weblogs, I looked at ways we could nurture the blogosphere and ensure the prosperity of the long tail. The way things played out, the blogosphere took decent care of itself, and the long tail was sold out, and then sold off as scrap to a couple of “social networking” sites.

This time, I'm sticking to description: what do weblogs want? In particular, what are the ideas that underpin blogging? Our era is understandably allergic to manifestoes, but weblogs do have Big Ideas, even while people pretend that they're silly little personal pages, generally incapable of serious thought and good, mostly, as a parking space for millions of cheap ads.

A little bonus tension: I seem to have lost my voice, and my alarm clock. Stay tuned.

Tim Bray observes that there's a lot of change going on right now in the software world, an unusual ferment in which established ideas about programming languages, databases, networks, processors, business models, and kitchen sinks are very much up for grabs.

Dirk Riehle, the guiding spirit of WikiSym, spends much of his time studying open source software for SAP. His essay on "The Economic Motivation of Open Source Software: Stakeholder Perspectives" (IEEE Computer, vol. 40, no. 4, April 2007, pp. 25-32) is very much worth reading; in place of the usual quasi-religious handwaving, this excellent essay approaches the question with sound analytical tools.

I think it might be mistaken. I can’t see a way to be sure. In some ways, that uncertainty is worse than simply being mistaken.

Perhaps the most interesting part of the paper is Dirk’s analysis of the impact of lower software pricing on the strategic choices of system integrators. Broadly speaking, he argues that lower software costs from open source let integrators keep what they would formerly have spent on software; alternatively, they can cut prices and buy market share. The details of the argument are sometimes confusing because Dirk (or his editor) assumes that the reader can't face algebra or calculus, and so the argument is couched in terms of geometric analogies ("the total profit is represented as the area of the gray triangle"). The reader is left to struggle, or expected to derive the equations herself.

In discussing the impact of open source on software vendors, Dirk examines an interesting strategy in which he envisions two existing, competing products. He shows how a failing competitor might cripple its more-successful rival by open sourcing its own, doomed product; if the community can keep it viable at no cost to its original owners, the abandoned product will provide a permanent drag on the victorious competitor's profits.

I'm not convinced this outcome is actually desirable. Admittedly, prices are lower, the loser's product remains available, ultimately both products are available as cheap community projects. What Dirk fails to consider here is capital: who will ever build new software, or new software companies, if they can be so easily destroyed? The scenario demonstrates how open source can act as an agent of capital destruction; it's not clear what role it plays in renewing this resource.

Finally, Dirk suggests that dominant community open source projects help developers by making it easier for them to find new jobs. If everyone uses Apache, then an Apache developer can easily pick up and work for anyone. This is not necessarily good news for employees:

Hiring and firing becomes easier because there’s a larger labor pool to draw from, and switching costs between employees are lower compared with the closed source situation. Given the natural imbalance between employers and employees, this aspect of open source is likely to increase competition for jobs and to drive down salaries.

Where the employee is a commodity, easily interchanged with another employee, then prudent management will drive wages and benefits down to subsistence levels. If developers don’t like subsistence wages, they can find another job. Unskilled labor, after all, is readily portable. As I wrote a few months ago:

We all woke up one day to find ourselves living in the software factory. The floor is hard, from time to time it gets very cold at night, and they say the factory is going to close and move somewhere else.

The essay suggests that rational developers should rush to become committers to successful open source projects; once securely ensconced in that role, they can command a premium salary (even though they may no longer have time to do actual work for their employer!). But will this work? There can only be a few committers; this seems a recipe for building a software world with a few well-paid foremen (who don't actually do any work) and a lot of poorly-paid mechanics who spend their days writing production code and their nights struggling to please the foreman and aspiring to reach the airy heights of open source. It's a recipe for recreating the worst aspects of the labor union, stripped of the notion (sometimes, arguably, a pretense) that the union represents its members. In this brave future, we're all going to be Teamsters.

My main concern, though, is that the framework is not developed in sufficient depth to be testable. Is the omission of capital a mere detail that might require amendment, or a consequential oversight that changes the result entirely? We don’t know; I don’t see any sure way to find out.

It requires capital to start a software project. The failing competitor can treat this as a sunk cost and write it off, and if Dirk is right all our existing software is destined to be Open Source or abandoned, and all the capital invested in these projects will be written off. Where the capital is then to be found for new software becomes an interesting question. I think Dirk assumes that willing volunteers will contribute sweat equity, and so the requisite capital will appear, but I don't think this is explicit in his model. Nor is it clear how the market could efficiently allocate this capital since there is no market! My guess at the software designer's exit strategy: lots of tiny firms that make artisanal software. Give people or businesses a unique and useful tool to do jobs that need doing, specialize, and keep well away from commodity software.

Update: Peter Merholz (whom I thought to be blissfully honeymooning in Dublin) deplores the design industry’s exploitation of its workers. “It's not uncommon for services firms to have their staff work 50+ hour weeks,” he observes. (A fifty hour week is light for me, but then again I have a great job and only myself to blame.)

Make sure you know the rate you’re billing out at, figure out how much money you’re bringing into the company, and how much of that you are seeing... Find out what your company’s profit margin is, and what the company is doing with those profits. You don’t need to put up with bullshit in order to work on sexy projects — I know design firms that land great work AND treat their employees well.

At the end of the talk, Greg asked a few people if they’ve ever written software that they’re proud of, and I could tell from the blank stares in the room that no one was able to come up with anything of significance.

Despite a hot room and a busy day — two other colloquia were scheduled at FEUP on the same day! — Monday's colloquium on The New Knowledge Forge had terrific energy. It was great to see such a good crowd, and to hear so many good questions — especially the enthusiasm that Stewart Mader's practical ideas on wiki adoption generated.

George Landow’s talk on Moving Beyond The Hammer is a great introduction to the impact of Web 2.0 ideas on scholarship — and on the invisible machinery of the printed book. And J. Nathan Matias’s discussion of Ethical Explanations made an extremely interesting connection between the way we describe laws (in his case, documenting the procedures of rules of order) and the ethics of software documentation.

Lots of good discussion in the breaks. What do the cosplay images in my NeoVictorian slides mean, exactly? And why are there so many Asian women?

I don’t pretend to fully understand cosplay. The simple answer is that ground zero of cosplay is in Tokyo, and that I was able to find more images of women (often images they took themselves) than men, and that they make a good illustration of NeoVictorian programming as something new, not just nostalgia for old technologies. The ways in which cosplay is not authentic — is better than authentic — are fascinating: cosplay is all about the rules of decorum, design, ethnicity, and class.

Even better were the discussions of the workplace. Who has fun in the mill? The mechanic — the fellow who fixes stuff! But software maintenance is ghastly, and operations is worse: can this be fixed? Would our workplace be better if we insisted on using our own tools: if it were a worker’s right to own and use her laptop, and to replace it when she sees fit with whatever brand of computer (and whatever software) lets her produce the best work? Should designers demand the right to sign their software, and writers insist on their right to be credited for — and to show prospective employers — the documents on which they work? Above all, do workers have a right to publish a professional blog?

We are facing a bad time. We are already in a recession — many of us have sensed for some time that times are not good, that governments are shading the statistics for their own purposes. But soon, clearly, a lot of programmers and designers are going to be on the street, and the street will not welcome them.

We all woke up one day to find ourselves living in the software factory. The floor is hard, from time to time it gets very cold at night, and they say the factory is going to close and move somewhere else.

Photo:Lewis Wickes Hines, NYPL 91PH056.029

In the coming recession (or the second depression), even more programmers and designers will be told to work longer hours on worse projects and worse products, to cut more corners, to grind out more junk because that’s what the client wants — and if we lose this client, you (or all your coworkers) might be out there looking in. Even more scientists and scholars will be asked to share in the general privation along with the CEOs and investors.

The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges.

The nonchalance of boys who are sure of a dinner, and would disdain as much as a lord to do or say aught to conciliate one, is the healthy attitude of human nature. A boy is in the parlour what the pit is in the playhouse; independent, irresponsible, looking out from his corner on such people and facts as pass by, he tries and sentences them on their merits, in the swift, summary way of boys, as good, bad, interesting, silly, eloquent, troublesome. He cumbers himself never about consequences, about interests: he gives an independent, genuine verdict. You must court him: he does not court you. But the man is, as it were, clapped into jail by his consciousness. (On Self Reliance)

We remember the last little market crash for shaking out the dotcom excess and ending the first weird years of Web exuberance. What we have too often forgotten is how some people picked themselves up from the wreckage of the Web boom and created surprising new things and surprising new companies. These were the years when the netroots got started, when Rails happened. Everyone said "blogging is so 1999", but the first pro bloggers didn't care about that.

We’re going to need self reliance. The government will be good — for the first time in many years — but the government will be busy. The VCs and the banks aren’t going to be paying attention to ideas — their prattle about cloud computing and Web 3.0 won’t be missed — and they won’t be paying attention to us.

But we’ll be here, and we’ll be paying attention, and we’re sure going to be reading a lot of Web pages and using a lot of new software. That’s the audience you want, and it’s all the audience you need.

It may be that we’re in for some rain. It’s not our fault, and it’s not our doing. Until the sun comes out, we can work indoors: write books, write software. Or we can go outside and get wet; that’s fine, too. We’re strong and young, the water will do us no harm, we can always find ourselves a dry towel and a hearth and we can be sure of dinner.

In The Emerging Democratic Majority, Teixeira and Judis argue that a key Democratic constituency moving forward will be professionals. Doctors, lawyers, and professors used to vote Republican. They're now overwhelmingly Democratic.

This is important to American politics, and it explains one of the vast (if quiet) changes we’ve seen recently. Suburbs used to be Republican strongholds, red rings that surrounded blue cities. It’s not true anymore, because the suburbs are filled with new professionals, often following new professions. Computer programmers, tech writers, industrial designers, analysts, artists, actors, architects, Web developers. They've all got something in common.

But what, exactly? The old answer was, they all had jobs where they could keep their hands clean all day. But that’s not quite right, and it doesn’t explain why they’re progressives. Another old answer was that they were independent, either self-employed or able to switch jobs whenever they felt like it. But that’s not right either — and it’s not really true anymore.

Teixeira and Judis offer an intriguing new definition: to them, “professionals” are people who are chiefly motivated to create great things or great ideas. This contrasts to the assembly-line worker, motivated to get the best deal in exchanging time for money. It contrasts, too, with managers and entrepreneurs who are motivated chiefly by performance.

It’s a handy division. Michael Ruhlman, for example, has often explored the question: “Is cooking an art? Is a chef a craftsman, a business executive, a performer, or a salesman?” Obviously, all are true some of the time, for some people. The Teixeira/Judis definition is really handy here: the line between chef-professional and chef-manager is the line between cook and shoemaker, the line between the obsessive (It’s not right; do it over. The patron will wait.) and the pragmatist (time’s up, good enough, get it out of my face).

What does the smashup mean for software? Once we start to pick up pieces, we’re all going to need a lot of intelligence, and we’re going to want to gather lots of data and plenty of ideas. There’s lots to do, to plan, to build. And we’re going to have to fix the climate before it gets worse.

What we won’t have is capital; no huge teams, no huge implementation armies rolling out vast integrated enterprise systems that cost tens or hundreds of millions before they get switched on — or that crash into huge implementation failures. And I think we’ll see less buzzword-compliant vaporware, and more cool little tools from hungry little teams.

I originally argued that artisan software was desirable because it made more inspiring software, and because it made programmers less miserable. But the world changed recently, and for a while I think we’re going to need artisanal software simply to get software built at all.

While we’re on the subject of the economy, though, I wonder whether pair programming is a great idea. It’s better than huge distributed teams, and it avoids the mythical man-month problem by not generating endless, enervating meetings. But, compared to solo programming, it doubles labor costs. When is the benefit justified? (My guess is that, in the optimal case, pair programming is best for training, for maintaining dying code, and when fixing bugs will be exceptionally difficult or costly. But I used to detest having a lab partner, so perhaps those memories unduly influence me. Hard data and sabermetrics should yield real results on the question, eventually.)

Update: This post generated more reaction than the Tarte Tatin. Which is saying a lot. But (unlike that tasty apple confection) this one was half-baked and off-center. The change, I think, is that a few months ago our development baseline was “best practices”; now, it’s “everything is on hold or cancelled”.

by Robert D. Richardson, Jr.

I'm reading Richardson’s Emerson: the mind on fire. He’s an interesting fellow, and he formed an even more interesting circle. In fact, you could argue that Emerson’s fireside was the place where the real definition of “the American” was fixed.

Besides, my wife is working on a Masters in this period, so I’m bound to find it interesting.

Problem is, it’s not my period. Sometimes I need a scorecard. Sarah Ripley: is she connected to George Ripley? Is Mary Emerson the sister, or the aunt? Just a hint of who and what people were would help a lot; sometimes, all you need is a poke to get on the right track.

Lydia Child? Oh, she's the woman who always brought lunch to her husband who was in debtor’s prison.

An encyclopedia, or wikipedia, is nice to have, but it's really too much and too slow. I’m used to having the Oxford Classical Dictionary ready to hand for Antiquity; I don’t think there’s anything like this for 1830 Boston.

But, you know, it should be easy enough to make in Tinderbox. Tinderbox could take care of lots of clerical minutiae — searching, sorting, organizing. And Tinderbox can remind you what you’re missing: who needs to have birthdates checked, who needs more narrative filled in. You don’t need a database: I fancy a few hundred people would cover the crying needs, and that’s a small Tinderbox document.

You could share the work among a bunch of hands — either a class (in which case it could be the seed of something like Landow’s Victorian Web) or just as a study-group enterprise. I bet a grad student with a flair for witty commentary could publish this. (Another example: What Jane Austen Ate and Charles Dickens Knew.) If you’re managing an enterprise product, I imagine the same sort of information on your customers would be great to have — and building the system would be a great assignment for an intern or a newcomer.

It's not just academic. Think about all the family you know — or can remember, or about whom you remember stories. Will your niece’s children meet these people, or hear these stories? Write stuff down. And I don’t know of a better way to write this stuff down so it can be used, so it doesn’t turn into a big bunch of cards in the back of a drawer.

Want to work on ways and means? Want to hear how to do it? Want to lend a hand? Email me.

In the New Yorker, Kelefa Sanneh offers a terrific and thoughtful overview of recent enthusiasms for care, craftsmanship, and artisanal work: Out Of The Office. A sensible response to Matthew B. Crawford’s much-twittered New York Times Magazine defense of The Case for Working With Your Hands.

Sanneh observes that parallels have been drawn to open source development, but I think that’s probably the wrong end of NeoVictorian Computing from which to launch this argument. I predict that we’ll learn more by looking at artisans and applications than at Linux, and in the end what we learn will inform the vast platform projects as well. In this connection, it’s worth looking at John Gruber’s WWDC wrapup which observes that a lot of people are moving into the development space – many of them iPhone developers migrating to Macintosh, rather than vice versa. (Note how the iPhone Apps store is probably the first real success for micropayments – something we’ve been trying forever.)

If you need to print a big-city newspaper every day, using 20th century technology, you’re going to have big, expensive printing presses — and lots of people to operate and maintain those presses. You’re going to have lots of people delivering the paper to stores, to street corners, to news stands, to houses — and you’re going to have fleets of trucks (or, before 1945, horse-carts) to carry them. That’s all sunk cost: you have to buy the machinery and pay the delivery people, whatever else you do. Good paper or bad, big paper or small, the horses have to be fed.

That means you need investors and bank loans. That means your competition will be limited, because not everyone can have investors and bank loans. It means you can spend lots of money making your product a little bit better; you’ve already paid so big an ante to get in the game that a little extra investment doesn’t change the economics much.

And, of course, it means you have lots of eyes on the product. Publishers, editors, investors, bankers – everyone has an opinion, and everyone thinks they are a stakeholder.

Media that require large production and distribution costs always skew toward terrific production values and conservative, cautious, consensus work. Newspapers, big magazines, network news, Hollywood: they’re all shaped by the economics of investment.

Conversely, media that don’t incur big production and distribution costs can be quirky and individual, the unedited voice of a person. You don’t need much capital to publish a book; in practice, it has generally required only two people: the author, and one acquiring editor at a major publishing house. It’s easy to show a painting: you need an artist and a gallery. That means you can risk more in books and in painting. You can leave the brushmarks on the canvas, as long as your publisher or your gallery doesn’t mind. You can try all sorts of things that the conventional wisdom considers sloppy, or strange, or crazy.

Blogging, of course, only requires one person. You don’t need a banker, you don’t need an acquiring editor, you don’t need a gallery. To be crazy is not necessarily a disadvantage for a blogger.

In between, you’ve got a spectrum of options. Drama requires more than a couple of people, but you only need one theater, a script, and some actors. Cinema requires a theater in every town. So, off-Broadway you have less polish than Broadway, and New York remains more free to experiment than Hollywood. Music’s all over the place, from the one-man band by the quick-lunch stand who plays real good for free to the big city symphony orchestras and stadium tours. On the street, you get quirks; at symphony hall, you get polish.

Take two news organizations, one designed for newspapers and one designed for the Web. They will be different, because Team Newspaper is built around the capital requirements of the bank loans that pay for the presses and the trucks. Team Web isn’t. Team Newspaper checks everything, then checks again: not because they are “professionals” or because it is holy writ, but because the editors and publishers and investors and the bank are always watching. It’s their paper. Not theirs alone — we’re all grownups, we know there are multiple stakeholders — but if you’re going to annoy one of the people at the top table, you’d better be right. Drudge can annoy who he likes, because he doesn’t need investors.

Gruber is right; a lot of the newspaper staff is not editorial. But that all starts with production: getting the paper out. Simply lopping off the production staff is unlikely to be the answer, because the whole enterprise was built around production. Besides, you’ve got equipment and real estate and debt and contracts, all structured around production and distribution; you can’t just walk away because it’s collateral. In addition, there’s a history problem: newspapers historically exploited pressmen and deliverymen, the employees eventually got a decent deal, and now nobody is going to trust the company if it tries to fire all the pressmen and delivery men. The newspapers have been trying to do that for a hundred years: it’s called breaking the union. So, sure, there are big non-editorial staffs, but that’s what newspapers are.

I’ve been working long hours lately, moving Tinderbox to a new development platform with new compiler and new libraries. This pays down some accumulated technical debt, and the need to examine every object and every file helps build a cleaner system. But the main benefit you’ll see immediately is speed.

Part of the speed boost will come from having a universal binary. Lots more comes from careful revisions, such as finding ways to do less work when loading and saving files. Lots of this is expensive; it's easy to spend several thousand dollars to save a few seconds.

Is it worthwhile?

The stock answer is, “Of course! Nothing is too good for customers, user experience is crucial, and speed makes the experience better.” But that’s wrong. User experience is not what matters: getting your work done is what counts. Getting the book written, getting the plan approved, getting the right answer – all of these matter a lot more than a few seconds here or there.

But seconds do count. If we can make Tinderbox load faster, that saves only a few seconds — but it saves those seconds many times a day, for many people. Perhaps none of those people really care about saving those few seconds. Perhaps none of them would pay to save them. But, in aggregate, it can make sense for them to pool together and invest a few thousand dollars because, in aggregate, it’s worth it.
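The aggregate arithmetic is easy to run for yourself. A back-of-envelope sketch in which every input is an illustrative assumption, not a real Eastgate figure:

```python
# Back-of-envelope: is a few thousand dollars of optimization worth it?
# All inputs below are made-up assumptions for illustration.
users = 5000            # people who launch the app
launches_per_day = 4    # loads per user per day
seconds_saved = 3       # seconds shaved off each load
hourly_value = 50.0     # dollars one user-hour is worth

hours_per_year = users * launches_per_day * seconds_saved * 365 / 3600
value_per_year = hours_per_year * hourly_value
print(round(hours_per_year), "hours, about $%d" % round(value_per_year))
```

Under these assumptions the pooled savings run to thousands of user-hours a year — which is why an investment no single customer would pay for can still pay for itself in aggregate.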

But it’s not an easy call. Lots of businesses get this wrong, investing in over-engineered products when the customer would prefer to save the money. Business presentations are a conspicuous example: people spend a fortune on fancy PowerPoint decks that nobody needs to see. Books, too, have long been over-engineered: the history of the late 20th century demonstrates how readers, given the chance, prefer to keep the money at the cost of less perfectly-produced books and newspapers. (Magazines, on the other hand, seem to thrive on production value; as the paperback and DocuTech have flourished in the book world, the pulp magazine has pretty much vanished.)

Update: a sensible response from Loryn Jenkins, who is more concerned with “flow” state and less worried about the economics of temps perdu. Another developer wrote a convincing email to argue that most software managers actually undervalue both user experience and freedom from technical debt.

Most software developers and usability experts (shocker) focus on the easy part, rather than on the software being fun to use after they learn it. I have been trying to value the opposite priorities. Of course, I am not trying to make something hard to use. Rather, I am focusing on making sure the thing is fun to use. I still apply tons of usability techniques and make sure to include all the big UI five for the user.

Ed Blachman pointed this one out to me, drawing the connection to my NeoVictorian revival. And he’s got an even better point. He writes that we’re in a vicious cycle; “[enterprise software] developers decide we’re developing for drudges, so fun doesn’t matter; users are stuck with software that’s never any fun, so they regard using it as drudgework.” This is the textbook definition of alienation.

Fun is important. Results are even more important; getting the right answer, doing work that you could not do otherwise, beats fun and ease hands down. But, in the end, results are fun; they make you happy, they make customers happy, and along the way you’ll be rewarded with tokens that you can exchange for goods and services!

We talk far too much about first impressions and out of the box experiences, and far too little about letting people do things they couldn’t do before.

Today, I mostly paste libraries together. So do you, most likely, if you work in software. Doesn’t that seem anticlimactic? We did all those courses on LR grammars and concurrent software and referentially transparent functional languages. We messed about with Prolog, Lisp and APL. We studied invariants and formal preconditions and operating system theory. Now how much of that do we use? A huge part of my job these days seems to be impedance-matching between big opaque chunks of library software that sort of do most of what my program is meant to achieve, but don’t quite work right together so I have to, I don’t know, translate USMARC records into Dublin Core or something. Is that programming? Really? Yes, it takes taste and discernment and experience to do well; but it doesn’t require brilliance and it doesn’t excite. It’s not what we dreamed of as fourteen-year-olds and trained for as eighteen-year-olds. It doesn’t get the juices flowing. It’s not making.

I do a lot of work to paste libraries together too, but I do get to do a fair amount of exciting code as well. Artisanal software matters: it's our way to stop having to sleep on the cold floor of the software factory.

The torrent of emotion that has drenched discussion of the iPad in the tech community stems from an expectation that is wildly improbable. The iPad is not your next computer. It’s not your mother’s next computer. It’s not going to replace your laptop.

The iPad is a new thing.

Dave Winer is right: it's important to make stuff, and it’s nice to make stuff on the same device you’re going to run it on. This is like using your own software: if you start developing here and running there, you can tell yourself that problems and imperfections don’t matter because your users won’t notice. It doesn’t affect you: you’re working somewhere else. And sometimes that’s the right decision. But this can also lead your company to start thinking of the customers as children to be placated, or marks to be fleeced.

The iPad isn't a place for developers to develop. Nor does it slice, dice, or make salad. That’s not what it wants to be.

Al3x is right; it’s important to have places where people learn to program, and places where people who don’t know better can once again take a shot at programs we all know are impossible.

The iPad isn’t a place for programming. It’s also not a place for dancing.

Adrian Miles is vexed that his colleagues, who still haven’t understood how important new media are going to be, won’t stop students from leasing obsolete laptops and get them to grab iPads. I think he’s wrong: students need a general-purpose computer. And they also need something like the iPad. (The iPad is a luxury: you could get by with a good laptop, but you’ll do better with both. $500 isn’t a lot compared to the cost of being a student.)

All the fuss about iPad and the enterprise (no printing!) is froth. Do you know how often you’ll feel like printing something from your iPad? No you do not. You won’t know until you’ve done a lot of work with it. You can guess, you can theorize, but you don’t know anything. (Funny: people will advocate user-centered design until the cows come home, but when they have a newspaper pulpit and a review assignment, they somehow forget all about it.)

You say: '$500? That’s an expensive book.' Sure. Who knows what the price point will be? In the meantime, have you worked out what your company pays as rent for bookshelves? Across my office, I see lots of books that used to be next to my computer. Thinking FORTH. Petzold. Knuth. CSCW ’90. I pay rent for them every month.

Relax. Take a deep breath. Enjoy iPad for what it is. Figure out what it’s trying to do — not what its maker thinks it’s trying to do, not what they say they want it to do, not what the script kiddies or the Wall Street Journal or your grandmother imagine it wants to do. Look at what the thing itself is attempting.

Listen to the work.

Then, when you know what the system wants, you can criticize it intelligently.

Until then, it’s not criticism, it’s merely a projection of your anxieties and your politics.

Stein and I have met only a handful of times. That’s strange, when you think about it: we’ve both been working and thinking about making things that are better than books for decades. He’s more interested in design, in how things look. I’m more interested in structure and sequence, in links. He’s a radical whose vision of the electronic page is deeply conservative in the best (and now almost-forgotten) sense of the word. I’m a liberal who would gladly send the page to the wall if we could get something better, because people need something better if we’re to have any hope of saving the world. And they need it now.

Asked about why he has devoted so much energy to creating tools for making ebooks, Stein answers:

I was telling a programmer today that if I have one regret it’s that I never learned to program. And I think that’s part of the reason why I’ve wanted these tools.

This strikes me as an odd thing to regret. Learning to program isn’t like learning to be a ballet dancer or a major league second baseman. At worst, it requires a bit of study and practice. (Learning to be a terrific programmer might — perhaps — be another kettle of fish. But even then I’m not entirely sure. And you don’t need to be a terrific programmer to build stuff, just as you don’t need to be a terrific driver to run down to the grocery.) If you’re forty years old, you’re never going to learn to hit a fastball, but there’s no reason you can’t pick up Ruby or Objective C or whatever you like. And Stein is by no means shy of hard work or daunting challenges.

Calvetica is an iOS calendar, fully compatible with the built-in calendar, that promises a simpler user interface, trading fewer taps for slightly more arcana. It’s a good tradeoff, since people who use calendars use them all the time; learn once, use a thousand times.

I’m not a great fan of publishing software development plans. Either you know what the next version will have, in which case you should ship it, or you don’t, in which case the reader can’t really rely on your plans. But, if you’re writing a calendar, it’s clever to format your roadmap as a calendar.

Calvetica is also interesting for its full-throated embrace of Swiss Poster Design in the UI: Helvetica everywhere, and everything is red, white, black, grey, and gridded within an inch of its life.

I think that computers and the advancedness of computers hasn't changed art very much. It's enabled more to happen. Again, that counts a bit more. Better resolution, longer lengths, more color variety, but all in all it's the same thing. It's what experience can I deliver to you that is provocative, that can change how you think.

This is true, of course: the art speaks, not the canvas. But it’s also misleading – especially for someone whose career has been so closely identified with new media – because changes in medium and style and technique all let us say new things, and let the audience see old things in new ways. That is precisely how art has always changed how people think, ever since Thespis had the idea of putting a second character on stage and turned liturgy into drama.

Everyone saw the focus, the insistence, and the scorn for bozos – for people who were happy enough to get by. What people always missed about Jobs at Apple was the agile mind, able and eager to shift from the inside to the outside and back again.

Apple was not saved by design or innovation. What saved Apple is the constantly iterated shift from design to execution, from surface to depth, from style to science and back again.

Jobs learned from bad times but did not let bad times shape him. When he was kicked out as a dreamy incompetent, he went elsewhere, made a couple of new fortunes, came back, and kept dreaming. When the press assumed that Microsoft would simply discontinue Office for Mac and let someone buy the wreckage at the fire sale, Apple stood on a hill before the setting sun and shook its fist at the heavens and vowed that it would never be hungry (and powerless) again. But Apple did not become a defensive shell or an outlaw.

The original iMac, Steve’s machine, was Bondi Blue. Everything else was business beige. A couple of months later, you could tell which galleries on Santa Fe’s Canyon Road were doing well because the prosperous galleries all had those Bondi Blue iMacs. Some were sitting on 17th century Spanish oak, some on polished steel, and some on two planks of raw pine thrown across a couple of old trestles – depends on the gallery – but if they were selling art, they were buying that iMac.

Apple built the iMac into a nice little business, and then wrecked its own business with laptops. Same thing with MP3 players: people ridiculed the new iPod as underpowered and overpriced, then watched in amazement as it consumed the entire sector. And watched again as it fought off every challenge until, once again, Apple demolished that market with a new kind of phone.

Mac OS was clearly a better UI package than its competitors. Rather than refine it, Jobs replaced it with Mac OS X. This required tons of work and a vast leap of faith, since a company that had always rolled its own foundations now learned to depend on Unix and PostScript.

The dramatic shifts – abandoning floppies, abandoning Pascal and OpenDoc and Java, embracing virtual memory, abandoning the PowerPC, abandoning CDs – masked a steady reengineering of everything. Compare today’s MacBook Air to the original. They look pretty much the same, but the new one isn’t just faster. It feels better: more solid, more durable. Remember hinge kits? Hinges don’t wobble anymore. (A visible side effect of the process is the maxim that every new Apple laptop has a new video connector.)

After Apple had brought color to computers and everyone else was trying to slap juicy colors on their cases, Apple seized white. Dell and HP kept black for themselves. This was pure style, but Apple has exploited that blunder for a decade.

Look at the Apple stores. The analysts thought they were the desperate indulgence of a washed-up hippie who didn’t understand business, and they turned out to make heaps of money. But they’re not just distribution and maintenance centers. Everything is geared to say, you can do stuff with Apple stuff. You. MicroCenter and Fry’s were exciting with their racks of components and motherboards, but Apple put Grandma right in the center of the store and – look at that! – you were standing there learning stuff about video production, along with Grandma.

Everyone knew that companies should build on their core competency. What does a boutique computer company know about retail? Apple went about it like building a new system, with a fresh package and style and innovative systems in the background. (Everyone thinks Lion is about the scroll bars, but blocks and Grand Central Dispatch are going to change everything in ways that matter a lot more than scrolling.)

Have you ever seen an Apple Store employee standing around, looking bored, waiting? That’s execution. It can’t have been easy to see why this is important, to convince people it matters, even to make it possible. It’s not something you can instill by walking around and asking programmer A whether programmer B is really a bozo, which was Steve’s original management skill. This sort of polish goes all the way down. You never see a pile of Apple products in the stores: the piles are in the back, out of sight. You never see money. Do they even take cash? There’s no cashier and no cash register.

It’s still going on, and the analysts still can’t follow the shift from outside to inside. The iPhone 4 was outside: new design, new display, new camera. Now, we do the inside, with a new fast processor. “Who needs a fast processor?” they wonder. You need it to do stuff, because phone software has only a few seconds to do what you have in mind. (Desktop programs like Photoshop can take minutes to load, but if an iOS app doesn’t get itself loaded in five seconds, the system assumes it’s run amok and throws it in the drunk tank.) And what do you want your phone to do? Well, Knowledge Navigator is a nice start, isn’t it? If you want to do speech recognition, you need a separate thread and a separate core – and look what we have here: GCD to manage the cores, and mobile processors with multiple cores and decent battery life. The action this year is inside.
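The pattern above – kick the expensive job onto another thread at launch so the app is up within its five-second budget – is what GCD makes cheap. A minimal sketch of the same idea in portable form, using a thread pool instead of GCD; the `recognize_speech` function is a stand-in, not any real API:

```python
# Sketch: keep the "main thread" free at launch by handing the
# heavy work (here, a pretend speech-recognition job) to a worker.
from concurrent.futures import ThreadPoolExecutor
import time

def recognize_speech(clip):
    """Stand-in for an expensive job like speech recognition."""
    time.sleep(0.1)  # pretend this takes a while
    return f"transcript of {clip}"

executor = ThreadPoolExecutor()

# Launch: submit the expensive job to a worker thread immediately...
future = executor.submit(recognize_speech, "memo.wav")

# ...so the main thread is free to put up the UI right away.
print("UI is up")

# Later, collect the result once it's ready.
print(future.result())
```

On iOS the dispatch queue plays the role of the executor, and the multiple cores mean the worker genuinely runs alongside the UI rather than merely interleaving with it.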

And down the road, when everyone is finally looking at the inside again, you’ve got to think that Tim will remind us again that there’s one more thing...

I’m finding the first chapter of Spuybroek’s The Sympathy Of Things
absolutely fascinating. It explores “The Digital Nature of the Gothic,” connecting the artistic impulse behind fan vaults and foliated stonework to the craft of new media. Ruskin’s characteristics of the Gothic – savageness, changefulness, naturalism, grotesqueness, rigidity, and redundance — apply with great force to the nature of the digital.

It’s tons of fun, believe it or not. It’s not easy to follow Spuybroek’s argument, which twists through spandrels and ogees and technical issues of vaulting long before we get to technical issues of the digital. I think I see how the gothic nature illuminates Storyspace-style hypertext in new and powerful ways. It’s worth a little architectural research to find out.

It will be interesting to see how NeoVictorian artisanal programming looks in the light of Gothic Revival architecture. There’s certainly an interesting point to be made about the use of new tools and finishes to achieve things to which (we think) ancient artists aspired. To accompany this, we might want to set up an iPod playlist with Respighi’s Ancient Airs, Vaughan Williams’ fantasias on Greensleeves and Thomas Tallis, a Mendelssohn oratorio, and Ormandy’s orchestral transcriptions of Bach.

If you want to keep the software and services around that you enjoy, do what you can to make their businesses successful enough that it’s more attractive to keep running them than to be hired by a big tech company.

From Ruskin, Pre-Raphaelitism (1851)

We have certain work to do for our bread, and that is to be done strenuously; other work to do for our delight, and that is to be done heartily: neither is to be done by halves nor shifts, but with a will; and what is not worth this effort is not to be done at all.

From Marx, Estranged Labor (1844)

The worker therefore only feels himself outside his work, and in his work feels outside himself. He feels at home when he is not working, and when he is working he does not feel at home. His labor is therefore not voluntary, but coerced.

From Thoreau, Walden (1854)

I dug my cellar in the side of a hill sloping to the south, where a woodchuck had formerly dug his burrow, down through sumach and blackberry roots, and the lowest stain of vegetation, six feet square by seven deep, to a fine sand where potatoes would not freeze in any winter. The sides were left shelving, and not stoned; but the sun having never shone on them, the sand still keeps its place. It was but two hours' work. I took particular pleasure in this breaking of ground, for in almost all latitudes men dig into the earth for an equable temperature. Under the most splendid house in the city is still to be found the cellar where they store their roots as of old, and long after the superstructure has disappeared posterity remark its dent in the earth. The house is still but a sort of porch at the entrance of a burrow.

I’m speaking this week at Presidency University in Kolkata, about NeoVictorian computing and the digital humanities. We’ve made lots of progress in the digital humanities in the last thirty years; already, we can see the time when they’ll just be the humanities. The important discoveries may come from a direction most people don’t expect.

The ancestor of Presidency was founded around 1818. I studied at Harvard. Here’s what Harvard looked like in 1828. (I did my undergraduate work at Swarthmore, which wouldn’t get going for another 28 years.)

In 1837, a very junior Ralph Waldo Emerson gave a speech at the start of the school year. At that time, the US had been independent for about sixty years, roughly the interval that separates us from 1948. The annual celebration of studies, he wrote, was “a friendly sign of the survival of the love of letters amongst a people too busy to give to letters any more.”

Meek young men grow up in libraries, believing it their duty to accept the views, which Cicero, which Locke, which Bacon, have given, forgetful that Cicero, Locke, and Bacon were only young men in libraries, when they wrote these books.

This seems quiet, but it’s quietly hot stuff, a call to revolution. It was hot stuff then: that’s why the US today is filled with Unitarian Universalist churches. And it’s still pretty hot:

“We will walk on our own feet; we will work with our own hands; we will speak our own minds. The study of letters shall be no longer a name for pity, for doubt, and for sensual indulgence.”

Pity and doubt have certainly returned to the humanities, and it seems they’re seldom farther away from the heart of Digital Humanities than the shadow cast by a rejected application.

This call for self-reliance echoes the best of the humanities and, at the same time, indicts the common frailties of #elit. Too often, elit has reached for tools off the shelf and chosen to say what was easily said with those tools. Too often, elit has been so worried that those young folks in libraries might find it inconvenient to read the work that elit has lost confidence in making the work at all.