Archives

Archives for December 2004

I’ve been off-blog for a time here as I tend to family during the holidays — normal posting will resume next week.

In the meantime, best wishes for the new year to all of you who keep up with my small effort in the blogosphere. We celebrated, as is often the case here at our home, with good beer and spicy Szechuan meat sauce noodles — noodles being, as I understand it, symbolic in Chinese culture of long life. Also tasty.

And of course it’s not possible this year to celebrate without thinking of the thousands lost and the thousands still coping with the sorrows and privations of the Indian Ocean tsunami. Living here as we do on the Hayward and San Andreas faults, we can only acknowledge our collective vulnerability, and offer whatever help we’re able.

But not everyone sees the humanitarian value in such decisions: Our friends at the Ayn Rand Institute argue that the U.S. government shouldn’t offer any humanitarian aid at all. After all, the government has no money of its own except what it raises by taxation — and taxation is, you know, like, theft. “Every dollar the government hands out as foreign aid has to be extorted from an American taxpayer first” — so let’s stand on that principle while thousands die!

This ludicrous argument has no virtue other than consistency with the rest of the rad-lib[ertarian] “starve the beast” mentality that, alas, has achieved more influence in the Beltway than anyone would have imagined possible a decade ago. It’s a perspective not far removed from that of the proponents of Social Security pseudo-reform, who are really eager to scuttle the program so that the government is no longer involved in securing a safety net for the elderly.

In the end, these people see no role for the government in taking care of anyone, ever. We’ve gone way beyond the days of complaining about welfare queens and the nanny state. We now face determined ideologues who honestly believe that government should let people die of starvation before taxing citizens a cent. Surely the best retort to their extremist idiocy is a simple demonstration of the effectiveness of both public and private aid in the face of nature’s implacable havoc. May such help be there, for them as for all of us, should it ever be needed.

I’m catching up on e-mail as my flight is delayed at O’Hare and came across the following tidbit about Slate Magazine in the latest Edupage mailing:

“Although the magazine only recently achieved break-even status on revenue of about $6 million per year, Slate won a National Magazine Award for its editorial content, and mainstream news organizations frequently cite it. The publication is also given credit for shaping Web publishing and introducing the use of hyperlinks and Web logs.”

(Emphasis mine.)

Am I reading that right? Edupage wants me to believe that Slate is responsible for introducing hyperlinks to the world?

I’m having a very, very hard time believing that.

Am I alone?

No, Jeremy, you’re not alone. The source of this odd statement is almost certainly David Carr’s New York Times piece, which included the following passage: “Although Slate has never achieved steady profitability, it is credited with helping to shape Web publishing as well as pioneering the use of hyperlinks and Web logs.”

Carr’s “pioneering” was marginally closer to reality than Edupage’s feeble substitution of “introducing.” But neither is particularly correct.

I sincerely doubt anyone at Slate would have claimed to have introduced either hyperlinks or blogs to the world. Slate was in fact rather shy of linking for the longest time — in the early days, the links in each article were typically segregated in a little afterword section. As for blogs, Slate gave Mickey Kaus’s blog a home at a time when, quite possibly, only three people in the Washington Post newsroom knew what a blog was; but at the same time, blogs were already a widespread format, and widely known to the web-aware world.

Slate deserves tons of credit for many things; after a lot of false starts in the first few years, it became quite adept at devising creative Web-native formats for writers (like the e-mail exchanges). But “pioneering the use of hyperlinks and Web logs” is just not an accurate statement.

I imagine Carr meant to write something more like “The publication is also given credit for raising the profile of hyperlinks and blogs in the media and government circles that constitute some of its core readership.” Or if he didn’t, he should have.

Congratulations and best of luck to everyone at Slate, which is being purchased from Microsoft by the Washington Post Co. Let’s hope the Posties pick up some of Slate’s online savvy, and the Slatesters get the benefit of smart media owners. Any way you cut it, keeping a high-quality Web site going is not easy (I say that from intimate experience), and persuading another business to buy in represents a real achievement. We can assume everyone involved did this out of faith in Slate as a publication; if there is one certainty here, it’s that Microsoft didn’t sell Slate out of a need for extra cash.

Randall Stross’s piece on Firefox in the Sunday Times business section, with its comical quotes from a Microsoft spokesman who suggests that unhappy users buy themselves new computers, brought a little wisp of browser-war nostalgia to mind.

It’s undeniable that, today, if you want to protect your computing life and you run Windows, you’re insane to continue running basic Microsoft applications like Internet Explorer and Outlook. (Firefox and Thunderbird are great alternatives in the open source world. I’m still wedded to Opera and Eudora out of years-long habits. Opera does a great job of saving multiple open windows with multiple open tabs from session to session, even when you suffer a system freeze.) These programs function together in a variety of ways that Microsoft presented as good ideas at the time they were written. Hey, integration means everything works seamlessly, and everyone knows how highly the business world prizes the word “seamless.”

Today it is precisely the same integration — the way, for instance, that ActiveX controls and other code pass freely across the borders of these applications, allowing them to work together in potentially useful but hugely insecure ways — that makes IE and Outlook such free-fire zones for viruses and other mischief. (It’s certainly true that the Microsoft universe is targeted by virus authors because it’s where the most users are; but it’s also true that Microsoft’s products are sitting ducks in a way that its competitors in the Apple and open source worlds simply are not.) If you’re willing to turn on Microsoft’s auto-update to keep up with the operating system patches, and to abandon Outlook and IE for your day-to-day work, you can rest relatively easy. But you never know when some other application is calling on that “embedded browser functionality,” when you’re using that Outlook code without even realizing it.

Stross is strangely mum on the antitrust background of these matters. It’s the ultimate, though not entirely unforeseen, irony of the Microsoft saga that the very integration-with-the-operating-system that enabled Microsoft to “cut off the air supply” of its Netscape competition is now looking more and more like the franchise’s Achilles heel. Microsoft fought a tedious, embarrassing and costly legal war with the government to defend its right to embed Web browser functionality in the heart of the operating system. “Our operating system is whatever we say it is! How dare government bureaucrats meddle with our technology!” was the company’s war cry.

Now it turns out that if Gates and company had paid a little more heed to the government they might have done their users, and their business, a favor. Microsoft’s tight browser/operating system integration helped spell Netscape’s corporate doom; today it is one of the biggest gaping holes in Windows security, and a legion of hostile viruses swarms through it.

Stross writes, “Stuck with code from a bygone era when the need for protection against bad guys was little considered, Microsoft cannot do much. It does not offer a new stand-alone version of Internet Explorer. Instead, the loyal customer must download and install the newest version of Service Pack 2. That, in turn, requires Windows XP. Those who have an earlier version of Windows are out of luck if they wish to stick with Internet Explorer.”

But it’s not quite that simple. Microsoft’s reluctance to invest in browser development has stemmed only partly from the kind of inertia that comes from having won a war in a previous generation (“The browser? We own that space, we don’t have to keep improving it”). Even more deeply, Microsoft has been reluctant to make the browser better — more reliable, more secure, more flexible as an interface for more kinds of applications — because its leaders understood very well what that would mean: The better the browser is, the less dependent people are on the operating system’s features — as today’s users of well-designed Web applications like Gmail, Flickr and Basecamp demonstrate every day. This is not where Microsoft wants to see the computing world go, so why, once it gained a stranglehold on the browser market, would it help the process along?

In other words, what happened once Microsoft left the courtroom was precisely what the government’s antitrust lawyers said would happen: Microsoft’s goal in integrating the browser was not to serve the public and the users, but to shut down further innovation and development. Netscape argued that Microsoft wanted to control browsers because it wanted to make sure they did not emerge as a platform for applications that would undermine Windows’ importance. Netscape, the record now shows, was right.

We lost three or four years of Internet time (from the collapse of the bubble to this year’s Renaissance of Web applications) thanks to Microsoft’s stonewalling and the Bush administration’s unwillingness to represent the public interest in this matter. The next time a worm comes crawling through your Windows, curse the Justice Department’s settlement — and go download Firefox.

In the world of research and development, as in the world of entrepreneurial capitalism, there’s this notion of a “proof of concept.” A proof of concept is a small-scale test or prototype demonstration that takes some new idea and subjects it to some stress-testing by reality — not a full dose but enough to show that the idea might be worth pursuing. Prove the concept, and maybe you’ll risk fully funding the idea. Can’t prove the concept? Give up. Move on to something else.

For two decades now, ever since Ronald Reagan unveiled his “Star Wars” vision, a faction of the defense-industrial complex has been trying to produce a proof of concept for missile defense — to show that we can, with some level of reliability, defend the U.S. by shooting down hostile incoming missiles.

As proofs of concept go, this was not a cheap one — the single test cost $85 million. We’ve spent $80 billion to date on this program, and President Bush wants to spend another $50 billion in the next few years.

But the real issue is not cost but methodology: The whole point of the proof of concept approach is that, if you can’t prove the concept, you pull the plug while you’re still in the R&D phase. The Bush administration is instead ignoring the simple reality of the results of its experiments and barreling forward.

I guess it’s just being consistent: If you don’t accept simple budgetary arithmetic and you don’t accept the results of weapons inspections in Iraq and you don’t accept the overwhelming scientific consensus on global warming, why should you break the pattern and accept the data from your missile-defense experiments? After all, that might be inching uncomfortably close to the “reality-based community.” (See Fred Kaplan in Slate for a more detailed argument: “We can’t even count on the rocket getting out of its launch silo, much less the millions of minute operations that must follow. President Bush fielded a half-dozen antimissile missiles and called them ‘operational.’ But they’re a ruse.”)

What we have here, aside from a massive and repeated technical failure, is a proof of concept for our government’s new, proof-of-concept-free approach to spending our money. If we can get away with this reality-denial, the Bush administration’s logic goes, let’s keep doing it on a bigger and bigger scale! And indeed that’s what’s unfolding as the comic opera known as the Bush economic plan plays its overture to Act Two.

Let’s see, we had enough money to support Social Security until we cut taxes repeatedly and manufactured a crisis, which is now being used to justify a ridiculous privatization scheme. But we still have enough money to pour into the black hole of missile defense.

I hate to be cynical, and certainly a lot of this is being driven by stupid blind ideology, but there is a common thread here: There’s profit to be made by parking billions of Social Security money on Wall Street. And there’s money to be made in missile defense.

Hey, maybe some of that money will be kicked back in 2008, when it’s time to find and fund another Republican to keep this con game going!

I’ve been enjoying reading music critic Alex Ross’s blog over at “The Rest is Noise” for some time now. This thoughtful comment on the role of the critic caught my eye — it pretty well sums up what I aspired to in the many years I devoted to writing about theater and movies:

“As a critic, I’m obliged to describe musical reality precisely as I hear it; I can’t sway in the breeze of intermission chatter. All the same, I want to write a review that will be of use even to a listener who had an entirely different experience. This entails writing with a certain humble awareness that my experience is not universal, that my account will never be carved in granite. Criticism is at its best where confidence meets generosity. It’s a tricky business: the slide into fake omniscience is deliciously quick. But I’m working on it.”

Ecco Pro — the outliner/PIM that I have written about periodically and am still using today, despite the fact that it has been orphaned by its owners and not modified since 1997 or so — looks like it may be released as open source. (Thanks to Andrew Brown for the link.) Whether this means that the heart of Ecco will be transplanted by enterprising programmers into some newer, modern body — or just that Ecco devotees will have an opportunity to tweak and debug the trusty application — it’s wonderful news, if it actually happens.

For those of us who are still consumers of those bundles of printed content known as books, the importance of today’s news of Google’s library deal is almost impossible to overstate. It’s just huge.

While the Web has represented an enormous leap in the availability of human knowledge and the ease of human communication, its status as a sort of modern-day Library of Alexandria has been suspect as long as nearly the entire pre-Web corpus of human knowledge remained locked away off-line between bound covers. “All human knowledge except what’s in books” is sort of like saying “All human music except what’s in scores.” There’s lots of good stuff there, but not the heart of things. Your Library of Alexandria is sort of a joke without, you know, the books.

Now Google, in partnership with some of the world’s leading university libraries (including Stanford and Harvard), is undertaking the vast — but not, as Brewster Kahle reminded us at Web 2.0, limitless — project of scanning, digitizing and rendering searchable the world of books.

Google’s leaders are demonstrating that their corporate mission statement — “to organize the world’s information and make it universally accessible and useful” — is not just empty words. If you’re serious about organizing the world’s information, you’d better have a plan for dealing with the legacy matter of the human species’ nearly three millennia of written material. So, simply, bravo for the ambition and know-how of a company that’s willing to say, “Sure, we can do it.”

Amazon’s “look inside the book” feature provides a limited subset of this sort of data. But where Amazon has seemed mostly interested in providing limited “browsability” as a marketing tool, Google has its eye on the more universal picture. And so the first books that will be fully searchable and readable through this new project are books that are old enough to be out of copyright. The public domain just got a lot more public. (And presumably, as John Battelle suggests, we’ll see a new business ecosystem spring up around providing print-on-demand physical copies of these newly digitized, previously unavailable public-domain texts.)

This is all such a Good Thing for the public itself that we may be inclined to overlook some of the more troubling aspects of the Google project. Google is making clear that, as it digitizes the holdings of university libraries, it’s handing the universities their own copies of the data, to do with as they please. But apparently the Google copies of this information will be made widely available in an advertising-supported model.

For the moment, that seems fine: Google’s approach to advertising is the least intrusive and most user-respectful you can find online today; if anyone can make advertising attractive and desirable, Google can.

But Google is a public company. The people leading it today will not be leading it forever. It’s not inconceivable that in some future downturn Google will find itself under pressure to “monetize” its trove of books more ruthlessly.

Today’s Google represents an extremely benign face of capitalism, and it may be that the only way to get a project of this magnitude done efficiently is in the private sector. But capitalism has its own dynamic, and ad-supported businesses tend to move in one direction — towards more and more aggressive advertising.

Since we are, after all, talking about digitizing the entire body of published human knowledge, I can’t help thinking that a public-sector effort — whether government-backed or non-profit or both — is more likely to serve the long-term public good. I know that’s an unfashionable position in this market-driven era. It’s also an unrealistic one given the current U.S. government’s priorities.

But public investment has a pretty enviable track record: Think of the public goods that Americans enjoy today because the government chose to seed them and ensure their universality — from the still-essential Social Security program to the interstate highway system to the Internet itself. In an ideal world, it seems to me, Google would be a technology contractor for an institution like the Library of Congress. I’d rather see the company that builds the tools of access to information be an enabler of universal access than a gatekeeper or toll-taker.

The public has a big interest in making sure that no one business has a chokehold on the flow of human knowledge. As long as Google’s amazing project puts more knowledge in more hands and heads, who could object? But in this area, taking the long view is not just smart — it’s ethically essential. So as details of Google’s project emerge, it will be important not just to rely on Google’s assurances but to keep an eye out for public guarantees of access, freedom of expression and limits to censorship.

Today I confronted the sheer vastness of the topic I have chosen to write my book about. I indulged my vertigo for about 15 minutes. Then I borrowed a page from some of the people I’m writing about.

A while back, the team at OSAF, in an effort to wrestle the schedule of their project to the ground, temporarily moved their planning process off the wiki and onto a whiteboard. They broke their project down into roughly equal chunks of work and wrote the name of each chunk on a simple yellow sticky note. Instantly, the outlines of a schedule became easier to discern.

Stickies (a.k.a. “Post-It” notes)! I’d seen play- and screen-writing friends do the same for their projects. I’m a devotee of outlining software, and I’m using a venerable outliner to organize my research. But I needed a different approach to get beyond the sense of “Oh crap, how do I find a way out of this swamp and onto that mountain range?” Somehow, laying all the pieces out in an open-ended, non-hierarchical way on a two-dimensional plane just helped: something about being able to take in all the pieces in a map-like overview rather than peering in through the keyhole of screen real estate.

My stickies are now marshalled out on a 3′ x 4′ foamcore board and looming over my desk. Over the next few months I will add to them, rearrange and reorganize them, then remove them from the board one by one as they pass from concept into actual pieces of writing.

Over the years I have accumulated a large collection of cassette tapes. Typically, I’d own LPs (later, CDs) but I’d transfer them to cassette to listen to them in the car. You could fit two LPs on one C-90, so it was efficient, and everyone knows that music and driving go together like, say, cinnamon and sugar. (Convenience of this sort is, of course, on the wane as the world of “digital rights management” tries to lock down everything it can.)

This was my mode for many years; I still remember debating whether it was worth dubbing my multi-LP set of Laurie Anderson’s “United States” to listen to during the cross-country drive in 1986 as I moved my life from Boston to San Francisco. I knew I’d made the right choice somewhere on I-80 on the long, slow climb up from the plains on the Nebraska/Wyoming border. Anderson’s voice intoned its futuristic alienations and fragile hopes as I hung suspended between two coasts and two lives, and the wind began roaring down from the mountains, buffeting my old car back toward the past. (I also listened to a lot of Buddy Holly — alienation only gets you so far.)

I’ll keep those tapes, and a handful of others. But I’ve got hundreds more that just duplicate music I have in other, better formats. So what does one do with several hundred old cassette tapes? They were once reasonably high quality blanks; it seems criminal to toss them into a landfill. I’d welcome any ideas.