A Project of the Institute for the Future of the Book

Monthly Archives: October 2005

Ted Nelson (introduced last week by Ben) is a lonely revolutionary marching a lonely march, and whenever he’s in the news mockery is heard. Some of this is with good reason: nobody’s willing to dismantle the Internet we have for his improved version of the Internet (which doesn’t quite work yet). You don’t have to poke around too long on his website to find things that reek of crackpottery. But the problems that Nelson has identified in the electronic world are real, even if the solutions he’s proposing prove to be untenable. I’d like to expand on one particular aspect of Nelson’s thought prominent in his latest missive: his ideas about the inherent ideologies of document formats. While this sounds very blue sky, I think his ideas do have some repercussions for what we’re doing at the Institute, and it’s worth investigating them, if not necessarily buying into Xanadu.

Nelson starts from the position that attempting to simulate paper with computers is a mistaken idea. (He’s not talking about e-ink & the idea of electronic paper, though a related criticism could be made of that: e-ink by itself won’t solve the problem of reading on screens.) This is correct: we could do many more things with virtual space than we can with a static page. Look at this Flash demonstration of Jef Raskin’s proposed zooming interface (previously discussed here), for example. But we don’t usually go that far because we tend to think of electronic space in terms of the technology that preceded it – paper space. This has carried over into the way in which we structure documents for online reading.

There are two major types of electronic documents online. In one, the debt to paper space is explicit: PDFs, one of the major formats currently used for electronic books, are a compressed version of PostScript, a specification designed to tell a printer exactly what should be on a printed page. While a PDF has more functionality than a printed page – you can search it, for example, and if you’re tricky you can embed hyperlinks and tables of contents in it – it’s built on the same paradigm. A PDF is also like a printed page in that it’s a finalized product: while content in a PDF can be written over with annotations, it’s difficult to make substantial changes to it. A PDF is designed to be an electronic reproduction of the printed page. More functionality has been welded on to it by Adobe, who created the format, but it is, at its heart, an attempt to maintain fidelity to the printed page.

The other dominant paradigm is that of the markup language. A quick, not too technical introduction: a markup language is a way of embedding instructions for how a text is to be structured and formatted within the text itself. HTML is a markup language; so is XML. This web page is created in a markup language; if you look at it with the “View Source” option on your browser, you’ll see that it’s a text file divided up by a lot of HTML tags, which are specially designed to format web pages: putting <i> and </i> around a word, for example, makes it italic. XML is a broader concept than HTML: it’s a specification that allows people to create their own tags to do other things: some people are using their own versions of XML to represent ebooks.

There’s a lot of excitement about XML – it’s a technology that can be (and is) bent to many different uses. A huge percentage of the system files on your computer, for example, probably use some flavor of XML, even if you’ve never thought of composing an XML document. Nelson’s point, however, is that there’s a central premise to all XML: that all information can be divided up into a logical hierarchy – an outline, if you will. A lot of documents do work this way: a book is divided into chapters; a chapter is divided into paragraphs; paragraphs are divided into words. A newspaper is divided into stories; each story has a headline and body copy; the body copy is divided into paragraphs; a paragraph is divided into sentences; a sentence is divided into words; and words are divided into letters, the atoms of the markup universe.
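The outline premise is easy to see in a few lines of code. Here’s a minimal sketch – the tag names are invented for illustration, not any real ebook schema – using Python’s standard XML library to show how such a document decomposes into a strict hierarchy:

```python
import xml.etree.ElementTree as ET

# A toy document in the strict outline form XML assumes: every
# element nests cleanly inside exactly one parent.
doc = """
<book>
  <chapter n="1">
    <paragraph>It was a dark and stormy night.</paragraph>
    <paragraph>Suddenly, a shot rang out.</paragraph>
  </chapter>
</book>
"""

root = ET.fromstring(doc)

# Walking the tree recovers the outline: book -> chapter -> paragraph.
for chapter in root.findall("chapter"):
    for paragraph in chapter.findall("paragraph"):
        print(chapter.get("n"), paragraph.text)
```

Every element here belongs to exactly one parent, which is precisely the premise Nelson is questioning.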

II. A Victorian example

But while this is the dominant way we arrange information, Nelson points out, it isn’t necessarily a natural way to arrange things, or the only way – it’s one of many possible ways. Consider this spread of pages:

This is a title page from a book printed by William Morris, another self-identified humanist. We mostly think of William Morris (when we’re not confusing him with the talent agency) as a source of wallpaper, but his work as a book designer can’t be overstated. The book was printed in 1893; it’s entitled The Tale of King Florus and the Fair Jehane. Like all of Morris’s books, it’s sumptuous to the point of being unreadable: Morris was dead set on bringing beauty back into design’s balance of aesthetics & utility, and maybe over-corrected to offset the Victorian fixation on the latter.

I offer this spread of pages as an example because the elements that make up the page don’t break down easily into hierarchical units. Let’s imagine that we wanted to come up with an outline for what’s on these pages – let’s consider how we would structure them if we wanted to represent them in XML. I’m not interested in how we could represent this on the Web or somewhere else – it’s easy enough to do that as an image. I’m more interested in how we would make something like this if we were starting from scratch & wanted to emulate Morris’s type and woodcuts – a more theoretical proposition.

First, we can look at the elements that comprise the page. We can tell each page is individually important. Each page has a text box, with decorative grapevines around the text box; inside the text box, the title gets its own page; on the second page, there’s the title repeated, followed by two body paragraphs, separated by a fleuron. The first paragraph gets an illustrated dropcap. Each word, if you want to go down that far, is composed of letters.

But if you look closer, you’ll find that the elements on the page don’t decompose into categories quite so neatly. If you look at the left-hand page, you can see that the title’s not all there – this is the second title page in the book. The title isn’t part of the page – as would almost certainly be assumed under XML – rather, the title and the page are overlapping units. And the page backgrounds aren’t mirror images of each other: each has been created uniquely. Look at the title at the top of the right-hand page: it’s followed by seven fleurons because it takes seven of them to nicely fill the space. Everything here’s been minutely adjusted by hand. Notice the characters in the title on the right and how they interact with the flourishes around them: the two A’s are different, as are the two F’s, the two N’s, the two R’s, the two E’s. You couldn’t replicate this lettering with a font. You can’t really build a schema to represent what’s on these two pages. A further argument: to make this spread of pages rigorous, as you’d have to in order to represent it in XML, would be to ruin them aesthetically. The vines are the way they are because the letters are the way they are: they’ve been created together.
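The overlap isn’t just an aesthetic observation – it’s a formal constraint. XML’s well-formedness rules require tags to nest; they can never cross. A quick sketch (tag names invented for illustration) shows a parser rejecting a title that begins inside one page and ends inside the next, the very structure Morris’s spread embodies:

```python
import xml.etree.ElementTree as ET

# Overlapping units – a title that starts inside one page and ends
# inside the next – violate XML well-formedness: tags must nest,
# never cross.
overlapping = (
    "<spread><page><title>KING FLORUS</page>"
    "<page></title></page></spread>"
)

try:
    ET.fromstring(overlapping)
    print("well-formed")
except ET.ParseError as err:
    print("not well-formed:", err)
```

To represent the spread in XML you’d have to pick one hierarchy – pages containing title fragments, or a title containing page fragments – and either choice falsifies what’s actually on the paper.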

The inability of XML to adequately handle what’s shown on these pages isn’t a function of the screen environment. It’s a function of the way we build electronic documents right now. Morris could build pages this way because he didn’t have to answer to the particular constraints we do now.

III. The ideologies of documents

Let’s go back to Ted:

Nearly every form of electronic document- Word, Acrobat, HTML, XML- represents some business or ideological agenda. Many believe Word and Acrobat are out to entrap users; HTML and XML enact a very limited kind of hypertext with great internal complexity. All imitate paper and (internally) hierarchy.

For years, hierarchy simulation and paper simulation have been imposed throughout the computer world and the world of electronic documents. Falsely portrayed as necessitated by “technology,” these are really just the world-view of those who build software. I believe that for representing human documents and thought, which are parallel and interpenetrating– some like to say “intertwingled”– hierarchy and paper simulation are all wrong.

It’s possible to imagine software that would let us follow our fancy and create on the screen pages that look like William Morris’s – a tool that would let a designer make an electronic woodcut with ease. Certainly there are approximations. But the sort of tool I imagine doesn’t exist right now. This is the sort of tool we should have – there’s no reason not to have it already. Ted again:

I propose a different document agenda: I believe we need new electronic documents which are transparent, public, principled, and freed from the traditions of hierarchy and paper. In that case they can be far more powerful, with deep and rich new interconnections and properties- able to quote dynamically from other documents and buckle sideways to other documents, such as comments or successive versions; able to present third-party links; and much more.

Most urgently: if we have different document structures we can build a new copyright realm, where everything can be freely and legally quoted and remixed in any amount without negotiation.

Ben does a fine job of going into the ramifications of Nelson’s ideas about transclusion, which he proposes as a solution. I think it’s an interesting idea which will probably never be implemented on a grand scale because there’s not enough of an impetus to do so. But again: just because Nelson’s work is impractical doesn’t mean that his critique is baseless.

I feel there’s something similar in the grandiosity of Nelson’s ideas and Morris’s beautiful but unreadable pages. William Morris wasn’t just a designer: he saw his program of arts and crafts (of which his books were a part) as a way to emphasize the beauty of individual creation as a course correction to the increasingly mechanized & dehumanized Victorian world. Walter Benjamin declares (in “The Author as Producer”) that there is “a difference between merely supplying a production apparatus and trying to change the production apparatus”. You don’t have to make books exactly like William Morris’s or implement Ted Nelson’s particular production apparatus to have your thinking changed by them. Morris, like Nelson, was trying to change the production apparatus because he saw that another world was possible.

And a postscript: as mentioned around here occasionally, the Institute’s in the process of creating new tools for electronic book-making. I’m writing up an introduction to Sophie (which will be posted soon) that does its best to justify the need for something new in an overcrowded world. Nelson’s statement neatly dovetailed with my own thinking about why we need something new: so that we have the opportunity to make things in other ways. Sophie won’t be quite as radical as Nelson’s vision, but we will have something out next year. It would be nice if Nelson could do the same.

There’s an interesting discussion going on right now under Kim’s Wikibooks post about how an open source model might be made to work for the creation of authoritative knowledge — textbooks, encyclopedias etc. A couple of weeks ago there was some discussion here about an article that, among other things, took some rather cheap shots at Wikipedia, quoting (very selectively) a couple of shoddy passages. Clearly, the wide-open model of Wikipedia presents some problems, but considering the advantages it presents (at least in potential) — never out of date, interconnected, universally accessible, bringing in voices from the margins — critics are wrong to dismiss it out of hand. Holding up specific passages for critique is like shooting fish in a barrel. Even Wikipedia’s directors admit that most of the content right now is of middling quality, some of it downright awful. It doesn’t then follow to say that the whole project is bunk. That’s a bit like expelling an entire kindergarten for poor spelling. Wikipedia is at an early stage of development. Things take time.
Instead we should be talking about possible directions in which it might go, and how it might be improved. Dan, for one, is concerned about the market (excerpted from comments):

What I worry about…is that we’re tearing down the old hierarchies and leaving a vacuum in their wake…. The problem with this sort of vacuum, I think, is that capitalism tends to swoop in, simply because there are more resources on that side….
…I’m not entirely sure if the world of knowledge functions analogously, but Wikipedia does presume the same sort of tabula rasa. The world’s not flat: it tilts precariously if you’ve got the cash. There’s something in the back of my mind that suspects that Wikipedia’s not protected against this – it’s kind of in the state right now that the Web as a whole was in 1995 before the corporate world had discovered it. If Wikipedia follows the model of the web, capitalism will be sweeping in shortly.

Unless… the experts swoop in first. Wikipedia is part of a foundation, so it’s not exactly just bobbing in the open seas waiting to be swept away. If enough academics and librarians started knocking on the door saying, hey, we’d like to participate, then perhaps Wikipedia (and Wikibooks) would kick up to the next level. Inevitably, these newcomers would insist on setting up some new vetting mechanisms and a few useful hierarchies that would help ensure quality. What would these be? That’s exactly the kind of thing we should be discussing.
The Guardian ran a nice piece earlier this week in which they asked several “experts” to evaluate a Wikipedia article on their particular subject. They all more or less agreed that, while what’s up there is not insubstantial, there’s still a long way to go. The biggest challenge then, it seems to me, is to get these sorts of folks to give Wikipedia more than just a passing glance. To actually get them involved.
For this to really work, however, another group needs to get involved: the users. That might sound strange, since millions of people write, edit and use Wikipedia, but I would venture that most are not willing to rely on it as a bedrock source. No doubt, it’s incredibly useful to get a basic sense of a subject. Bloggers (including this one) link to it all the time — it’s like the conversational equivalent of a reference work. And for certain subjects, like computer technology and pop culture, it’s actually pretty solid. But that hits on the problem right there. Wikipedia, even at its best, has not gained the confidence of the general reader. And though the Wikimaniacs would be loath to admit it, this probably has something to do with its core philosophy.
Karen G. Schneider, a librarian who has done a lot of thinking about these questions, puts it nicely:

Wikipedia has a tagline on its main page: “the free-content encyclopedia that anyone can edit.” That’s an intriguing revelation. What are the selling points of Wikipedia? It’s free (free is good, whether you mean no-cost or freely-accessible). That’s an idea librarians can connect with; in this country alone we’ve spent over a century connecting people with ideas.
However, the rest of the tagline demonstrates a problem with Wikipedia. Marketing this tool as a resource “anyone can edit” is a pitch oriented at its creators and maintainers, not the broader world of users. It’s the opposite of Ranganathan’s First Law, “books are for use.” Ranganathan wasn’t writing in the abstract; he was referring to a tendency in some people to fetishize the information source itself and lose sight that ultimately, information does not exist to please and amuse its creators or curators; as a common good, information can only be assessed in context of the needs of its users.

I think we are all in need of a good Wikipedia, since in the long run it might be all we’ve got. And I’m in no way opposed to its spirit of openness and transparency (I think the preservation of version histories is a fascinating element and one which should be explored further — perhaps the encyclopedia of the future can encompass multiple versions of “the truth”). But that exhilarating throwing open of the doors should be tempered with caution and with an embrace of the parts of the old system that work. Not everything need be thrown away in our rush to explore the new. Some people know more than other people. Some editors have better judgement than others. There is such a thing as a good kind of gatekeeping.
If these two impulses could be brought into constructive dialogue then we might get somewhere. This is exactly the kind of conversation the Wikimedia Foundation should be trying to foster.

Jimmy Wales believes that the Wikibooks project will do for the textbook what Wikipedia did for the encyclopedia: replacing costly printed books with free online content developed by a community of contributors. But will it? Or, more accurately, should it? The open source volunteer format works for encyclopedia entries, which don’t require deep knowledge of a particular subject. But the sustained examination and comprehensive vision required to understand and contextualize a particular subject area is out of reach for most wiki contributors. The communal voice of the open source textbook is also problematic, especially for humanities texts, as it lacks the power of an inspired authoritative narrator. This is not to say that I think open source textbooks are doomed to failure. In fact, I agree with Jimmy Wales that open source textbooks represent an exciting, liberating and inevitable change. But there are some real concerns that we need to address in order to help this format reach its full potential, including how to create a coherent narrative out of a chorus of anonymous voices, how to prevent plagiarism, and how to ensure superior scholarship.
To illustrate these points, I’m going to pick on a Wikibook called Art History. This book won the distinction of “collaboration of the month” for October, which suggests that, within the purview of Wikibooks, it represents a superior effort. Because space is limited, I’m only going to examine two passages from Chapter One, comparing the wikibook to similar sections in a traditional art history textbook. Below is the opening paragraph, framing the section on Paleolithic Art and cave paintings, which begins the larger story of art history.

Art has been part of human culture for millenia. Our ancient ancestors left behind paintings and sculptures of delicate beauty and expressive strength. The earliest finds date from the Middle Paleolithic period (between 200,000 and 40,000 years ago), although the origins of Art might be older still, lost to the impermanence of materials.

What Genesis is to the biblical account of the fall and redemption of man, early cave art is to the history of his intelligence, imagination, and creative power. In the caves of southern France and of northern Spain, discovered only about a century ago and still being explored, we may witness the birth of that characteristically human capability that has made man master of his environment–the making of images and symbols. By this original and tremendous feat of abstraction upper Paleolithic men were able to fix the world of their experience, rendering the continuous processes of life in discrete and unmoving shapes that had identity and meaning as the living animals that were their prey.
In that remote time during the last advance and retreat of the great glaciers man made the critical breakthrough and became wholly human. Our intellectual and imaginative processes function through the recognition and construction of images and symbols; we see and understand the world pretty much as we were taught to by the representations of it familiar to our time and place. The immense achievement of Stone Age man, the invention of representation, cannot be exaggerated.

As you can see, the wikibook introduction seems rather anemic and uninspired when compared to Gardner’s. Gardner’s introduction also sets up a narrative arc, placing art of this era in the context of an overarching story of human civilization.
I chose Gardner’s Art Through the Ages because it is the classic “Intro to Art History” textbook (75 years old, in its eleventh edition). I bought my copy in high school and still have it. That book, along with my brilliant art history teacher Gretchen Whitman, gave me a lifelong passion for visual art and a deep understanding of its significance in the larger story of western civilization. My tattered but beloved Gardner’s volume still serves me well, some 20-odd years later. Perhaps it is the beauty of the writing, or the solidity of the authorial voice, or the engaging manner in which the “story” of art is told.
Let’s compare another passage; this one describes pictorial techniques employed by Stone Age painters. First the wikibook:

Another feature of the Lascaux paintings deserves attention. The bulls there show a convention of representing horns that has been called twisted perspective, because the viewer sees the heads in profile but the horns from the front. Thus, the painter’s approach is not strictly or consistently optical. Rather, the approach is descriptive of the fact that cattle have two horns. Two horns are part of the concept “bull.” In strict optical-perspective profile, only one horn would be visible, but to paint the animal in that way would, as it were, amount to an incomplete definition of it.

And now Gardner’s:

The pictures of cattle at Lascaux and elsewhere show a convention of representation of horns that has been called twisted perspective, since we see the heads in profile but the horns from a different angle. Thus, the approach of the artist is not strictly or consistently optical–that is, organized from a fixed-viewpoint perspective. Rather, the approach is descriptive of the fact that cattle have two horns. Two horns would be part of the concepts “cow” or “bull.” In a strict optical-perspective profile only one horn would be visible, but to paint the animal in such a way would, as it were, amount to an incomplete definition of it.

This brings up another very serious problem with open-source textbooks – plagiarism. If the first page of the wikibook-of-the-month blatantly rips off one of the most popular art history books in print and nobody notices, how will Wikibooks be able to police the other 11,000-plus textbooks it intends to sponsor? What will the consequences be if poorly written, plagiarized, open-source textbooks become the runaway hit that Wikibooks predicts?

First up: I appreciate you coming over to defend yourself. The blogosphere is far too often self-reinforcing – the left (for example) reads left-leaning blogs and the right reads right-leaning blogs & there’s not a lot of dialogue between people on opposite sides, to everyone’s loss.

Here’s something that’s been nagging me for the past week or so: your book seems to effectively be conservative. Bear with me for a bit: I’m not saying that it’s Bill O’Reilly-style invective. I do think, however, that it effectively reinforces the status quo. Would I be wrong in taking away as the message of the book the chain of logic that:

1. Our pop culture’s making us smarter.

2. Therefore it must be good.

3. Therefore we don’t need to change what we’re doing.

I’ll wager that you wouldn’t sign off on (3) & would argue that your book isn’t in the business of prescribing further action. I’m not accusing you of having malicious intentions, and we can’t entirely blame a writer for the distortions we bring to their work as readers (hey Nietzsche!). But I think (3)’s implicitly in the book: this is certainly the message most reviewers, at least, seem to be taking away from the book. Certainly you offer caveats (if the kids are watching television, there’s good television & there’s bad television), but I think this is ultimately a Panglossian view of the world: everything is getting better and better, we just need to stand back and let pop culture work upon us. Granted, the title may be a joke, but can you really expect us, the attention-deficit-addled masses, to realize that?

Even to get to (2) in that chain of reasoning, you need to buy into (1), which I don’t know that I do. Even before you can prove that rising intelligence is linked to the increased complexity of popular culture – which I’ll agree is interesting & does invite scrutiny – you need to make the argument that intelligence is something that can be measured in a meaningful way. Entirely coincidentally – really – I happened to re-read Stephen Jay Gould’s The Mismeasure of Man before starting in on EBIGFY; not, as I’m sure you know, a happy combination, but I think a relevant one. Not to reopen the internecine warfare of the Harvard evolutionary biology department in the 1990s, but I think the argument that Gould wrings out of the morass of intelligence/IQ studies still holds: if you know who your “smart” kids are, you can define “smartness” in their favor. There remain severe misgivings about the concept of g, which you skirt: I’m not an expert on the current state of thought on IQ, so I’ll skirt this too. But I do think it’s worth noting that while you’re not coming to Murray & Herrnstein’s racist conclusions, you’re still making use of the same data & methodology they used for The Bell Curve, the same data & methodology that Gould persuasively argued was fundamentally flawed. Science, the history of intelligence testing sadly proves, doesn’t exist outside of a political and economic context.

But even if smartness can be measured as an abstract quantity and if we are “smarter” than those of times past, to what end? This is the phrase I found myself writing over and over in the margin of your book. Is there a concrete result in this world of our being better at standardized tests? Sure, it’s interesting that we seem to be smarter, but what does that mean for us? Maybe the weakest part of your book argues that we’re now able to do a better job of picking political leaders. Are you living in the same country I’m living in? and watching the same elections? If we get any smarter, we’ll all be done for.

I’ll grant that you didn’t have political intentions in writing this, but the ramifications are there, and need to be explored if we’re going to seriously engage with your ideas. Technology – the application of science to the world in which we live – can’t exist in an economic and political vacuum.

Folks, enjoying the discussion here. I had a couple of responses to several points that have been raised.
1. The title. I think some of you are taking it a little too seriously — it’s meant to be funny, not a strict statement of my thesis. Calling it hyperbolic or misleading is like criticizing Neil Postman for calling his book “Amusing Ourselves To Death” when no one actually *died* from watching too much television in the early eighties.
2. IQ. As I say in the book, we don’t really know if the increased complexity of the culture is partially behind the Flynn Effect, though I suspect it is (and Flynn, for what it’s worth, suspects it is as well.) But I’m not just interested in IQ as a measure of the increased intelligence of the gaming/net generation. I focused on that because it was the one area where there was actually some good data, in the sense that we definitely know that IQ scores are rising. But I suspect that there are many other — potentially more important — ways in which we’re getting smarter as well, most of which we don’t test for. Probably the most important is what we sometimes call system thinking: analyzing a complex system with multiple interacting variables changing over time. IQ scores don’t track this skill at all, but it’s precisely the sort of thing you get extremely good at if you play a lot of SimCity-like games. It is not a trivial form of intelligence at all — it’s precisely the *lack* of skill at this kind of thinking that makes it hard for people to intuitively understand things like ecosystems or complex social problems.
3. The focus of the book itself. People seem to have a hard time accepting the fact that I really do think the content/values discussion about pop culture has its merits. I just chose to write a book that would focus on another angle, since it was an angle that was chronically ignored in the discussion of pop culture (or chronically misunderstood). Everything Bad is not a unified field theory of pop culture; it’s an attempt to look at one specific facet of the culture from a fresh perspective. If Bob (and others) end up responding by saying that the culture is both making us smarter on a cognitive level, but less wise on a social/historical level (because of the materialism, etc) that’s a perfectly reasonable position to take, one that doesn’t contradict anything I’m saying in the book. I happen to think that — despite that limited perspective — the Sleeper Curve hypothesis was worthy of a book because 1) increased cognitive complexity is hardly a trivial development, and 2) everyone seemed to think that the exact opposite was happening, that the culture was dumbing us all down. In a way, I wrote the book to encourage people to spend their time worrying about real problems — instead of holding congressional hearings to decide if videogames were damaging the youth of America, maybe they could focus on, you know, poverty or global warming or untangling the Iraq mess.
As far as the materialistic values question goes, I think it’s worth pointing out that the most significant challenge to the capitalist/private property model to come along in generations has emerged precisely out of the gaming/geek community: open source software, gift economy sharing, wikipedia, peer-to-peer file sharing, etc. If you’re looking for evidence of people using their minds to imagine alternatives to the dominant economic structures of their time, you’ll find far more experiments in this direction coming out of today’s pop culture than you would have in the pop culture of the late seventies or eighties. Thanks to their immersion in this networked culture, the “kids today” are much more likely to embrace collective projects that operate outside the traditional channels of commercial ownership. They’re also much more likely to think of themselves as producers of media, sharing things for the love of it, than the passive TV generation that Postman chronicled. There’s still plenty of mindless materialism out there, of course, but I think the trend is a positive one.
Steven

Microsoft’s forthcoming “MSN Book Search” is the latest entity to join the Open Content Alliance, the non-controversial rival to Google Print. ZDNet says: “Microsoft has committed to paying for the digitization of 150,000 books in the first year, which will be about $5 million, assuming costs of about 10 cents a page and 300 pages, on average, per book…”
Apparently having learned from Google’s mistakes, OCA operates under a strict “opt-in” policy for publishers vis-a-vis copyrighted works (whereas with Google, publishers have until November 1 to opt out). Judging by the growing roster of participants, including Yahoo, the National Archives of Britain, the University of California, Columbia University, and Rice University, not to mention the Internet Archive, it would seem that less hubris equals more results, or at least lower legal fees. Supposedly there is some communication between Google and OCA about potential cooperation.
There’s also a story in the NY Times.

in 1980 and 81 i had a dream job — charlie van doren, the editorial director of Encyclopedia Britannica, hired me to think about the future of encyclopedias in the digital era. i parlayed that gig into an eighteen-month stint with Alan Kay when he was the chief scientist at Atari. Alan had read the paper i wrote for britannica — EB and the Intellectual Tools of the Future — and in his enthusiastic impulsive style, said, “this is just the sort of thing i want to work on, why not join me at Atari.”
while we figured that the future encyclopedia should at the least be able to answer most any factual question someone might have, we really didn’t have any idea of the range of questions people would ask. we reasoned that while people are curious by nature, they fall out of the childhood habit of asking questions about anything and everything because they get used to the fact that no one in their immediate vicinity actually knows or can explain the answer and the likelihood of finding the answer in a readily available book isn’t much greater.
so, as an experiment we gave a bunch of people tape recorders and asked them to record any question that came to mind during the day — anything. we started collecting question journals in which people whispered their wonderings — both the mundane and the profound. michael naimark, a colleague at Atari, was particularly fascinated by this project and he went to the philippines to gather questions from a mountain tribe. anyway, this is a long intro to the realization that between wikipedia and google, alan’s and my dream of a universal question/answer machine is actually coming into being. although we could imagine what it would be like to have the ability to get answers to most any question, we assumed that the foundation would be a bunch of editors responsible for collecting and organizing vast amounts of information. we didn’t imagine the world wide web as a magnet which would motivate people collectively to store a remarkable range of human knowledge in a searchable database.
on the other hand we assumed that the encyclopedia of the future would be intelligent enough to enter into conversation with individual users, helping them through rough spots like a patient tutor. looks like we’ll have to wait awhile for that.

Wired has a piece today about authors who are in favor of Google’s plans to digitize millions of books and make them searchable online. Most seem to agree that obscurity is a writer’s greatest enemy, and that the exposure afforded by Google’s program far outweighs any intellectual property concerns. Sometimes to get more you have to give a little.
The article also mentions the institute.

Responding to Bob’s “games provide much more than a cognitive workout”…
When I was growing up in the 80s, video games were much less sophisticated and probably less effective as a matrix for training consumption. TV performed that role. I remember watching competitions on Nickelodeon between children in a toy store in which each contestant had 60 or 120 seconds to fill a shopping cart with as many toys as they possibly could. The winner — whoever had managed to grab the most — got to keep the contents of their cart. The physical challenge of the game was obvious. You could even argue that it presented a cognitive challenge insofar as you had to strategize the most effective pattern through the aisles, balancing the desirability of toys with their geometric propensity to fly off the shelves quickly. But did that excuse the game ethically?
I’ve played a bit of Katamari lately and have enjoyed it. It’s a world charged with static electricity: everything sticks. Each object has been lovingly rendered in its peculiarity and stubbornness. If your katamari picks up something long and narrow, say, a #2 pencil, and attaches to it in such a way that it sticks out far from the clump, it will impede your movement. Each time the pencil hits the ground, you have to kind of pole vault the entire ball. It’s not hard to see how the game trains visual puzzle-solving skills, sensitivity to shape, spatial relationships (at least virtual ones), etc.
That being said, I agree with Bob and Rylish that there is an internal economy at work here that teaches children to be consumers. A deep acquisition anxiety runs through the game, bringing to mind another Japanese pop phenom: Pokémon. Pokémon (called “Pocket Monsters” in Japan) always struck me as particularly insidious, far more predatory than anything I grew up with, because its whole narrative universe is based on consumption. “Collect ‘em all” is not just the marketing slogan for spinoff products, but the very essence of the game itself. The advertising is totally integrated with the story. Here’s Wikipedia (not a bad source for things like this) on how it works:

“The Pokémon games are role-playing games with a strategy element which allow players to catch, collect, and train pets with various abilities, and battle them against each other to build their strength and evolve them into more powerful Pokémon. Pokémon battles are based on the non-lethal Eastern sport of fighting insects, but the Pokémon never bleed or die, only faint. The game’s catchphrase used to be “Gotta catch ‘em all!”, although now it is no longer officially used.”

Similarly, the Katamari backstory involves the lord of the universe getting drunk one night and shattering the solar system. Each level of the game is the reassembly of a star or planet. If you succeed, a heavenly body is restored to the firmament.
After an hour playing Katamari, having traversed a number of wildly imaginative landscapes (and having absorbed a soundtrack that can only be described as Japanese chipmunks on nitrous), I re-enter the actual world in a mildly fevered state. The cardinal rule in the game is that to succeed I must devour as much as possible. No time is afforded to savor the strange, psychedelic topography, to examine the wonderful array of objects (everything from thumbtacks to blue whales) scattered about in my path. Each stage is a terrain that must be gobbled up, emptied. A throbbing orb of light in the upper left corner of the screen, set within concentric rings representing target diameters, measures my progress toward the goal: a katamari “n” meters in size. The clock in the upper right corner pressures me to keep rolling.
Video games today may not be as blatant as the consumerist spectacle of the Nickelodeon game, and they may provide richly textured worlds posing greater problem-solving challenges than any electronic medium that has preceded them. But it seems to me that many of them do not differ ideologically from that shopping cart contest.

The Washington Post has run a pair of op-eds, one from each side of the Google Print dispute. Neither says anything particularly new. Moreover, they reinforce the perception that there can be only two positions on the subject — an endemic problem in newspaper opinion pages with their addiction to binaries, where two cardboard boxers are allotted their space to throw a persuasive punch. So you’re either for Google or against it? That’s awfully close to you’re either for technology — for progress — or against it. Unfortunately, like technology’s impact, the Google book-scanning project is a little trickier to figure out, and a more nuanced conversation is probably in order.
The first piece, “Riches We Must Share…”, is submitted in support of Google by University of Michigan President Sue Coleman (a partner in the Google library project). She argues that opening up the elitist vaults of the world’s great (English-language) research libraries will constitute a democratic revolution. “We believe the result can be a widening of human conversation comparable to the emergence of mass literacy itself.” She goes on to deliver some boilerplate about the “Net Generation” — too impatient to look for books unless they’re online etc. etc. (great to see a major university president being led by her students instead of doing the leading herself).
Coleman then devotes a couple of paragraphs to the copyright question, failing to tackle any of its controversial elements:

Universities are no strangers to the responsible management of complex copyright, permission and security issues; we deal with them every day in our classrooms, libraries, laboratories and performance halls. We will continue to work within the current criteria for fair use as we move ahead with digitization.

The problem is, Google is stretching the current criteria of fair use, possibly to the breaking point. Coleman does not acknowledge or address this. She does, however, remind the plaintiffs that copyright is not only about the owners:

The protections of copyright are designed to balance the rights of the creator with the rights of the public. At its core is the most important principle of all: to facilitate the sharing of knowledge, not to stifle such exchange.

All in all a rather bland statement in support of open access. It fails to weigh in on the fair use question — something about which the academy should have a few things to say — and does not indicate any larger concern about what Google might do with its books database down the road.
The opposing view, “…But Not at Writers’ Expense”, comes from Nick Taylor, writer, and president of the Authors Guild (which sued Google last month). Taylor asserts that mega-rich Google is trampling on the dignity of working writers. But a couple of paragraphs in, he gets a little mixed up about contemporary publishing:

Except for a few big-name authors, publishers roll the dice and hope that a book’s sales will return their investment. Because of this, readers have a wealth of wonderful books to choose from.

A dubious assessment, since publishing conglomerates are not exactly enthusiastic dice rollers. I would counter that risk-averse corporate publishing has steadily shrunk the number of available titles, counting on a handful of blockbusters to drive the market. Taylor goes on to defend not just the publishing status quo, but the legal one:

Now that the Authors Guild has objected, in the form of a lawsuit, to Google’s appropriation of our books, we’re getting heat for standing in the way of progress, again for thoughtlessly wanting to be paid. It’s been tradition in this country to believe in property rights. When did we decide that socialism was the way to run the Internet?

First of all, it’s funny to think of the huge corporations that dominate the web as socialist. Second, this talk about being paid for the appropriation of books into a search database is revealing of the two totally different worldviews that are at odds in this struggle. The authors say that any use of their book requires a payment. Google sees including the books in the database as a kind of payment in itself. No one with a web page expects Google to pay them for indexing their site. They are grateful that it does! Otherwise, they are totally invisible. This is the unspoken compact that underpins web search. Google assumed the same would apply with books. Taylor says not so fast.
Here’s Taylor on fair use:

Google contends that the portions of books it will make available to searchers amount to “fair use,” the provision under copyright that allows limited use of protected works without seeking permission. That makes a private company, which is profiting from the access it provides, the arbiter of a legal concept it has no right to interpret. And they’re scanning the entire books, with who knows what result in the future.

Actually, Google is not doing all the interpreting. There is a legal precedent for Google’s reading of fair use established in the 2003 9th Circuit Court decision Kelly v. Arriba Soft. In the case, Kelly, a photographer, sued Arriba Soft, an online image search system, for indexing several of his photographs in their database. Kelly believed that his intellectual property had been stolen, but the court ruled that Arriba’s indexing of thumbnail-sized copies of images (which always linked to their source sites) was fair use: “Arriba’s use of the images serves a different function than Kelly’s use – improving access to information on the internet versus artistic expression.” Still, Taylor’s “with who knows what result in the future” concern is valid.
So on the one hand we have many writers and most publishers trying to defend their architecture of revenue (or, as Taylor would have it, their dignity). But I can’t imagine how Google Print would really be damaging that architecture, at least not in the foreseeable future. Rather, Google Print leverages that architecture by placing it within the frame of another one: web search. The irony for the authors is that the current architecture doesn’t seem to be serving them terribly well. With print-on-demand gaining in quality and legitimacy, online book search could totally re-define what counts as an acceptable risk to publishers, and maybe more non-blockbuster authors would get published.
On the other hand we have the universities and libraries participating in Google’s program, delivering the good news of accessibility. But they are not sufficiently questioning what Google might do with its database down the road, or the implications of a private technology company becoming the principal gatekeeper of the world’s corpus.
If only this debate could be framed in a subtler way, rather than the for-Google-or-against-it paradigm we have now. I’m cautiously optimistic about the effect of having books searchable on the web. And I tend to believe it will be beneficial to authors and publishers. But I have other, deep reservations about the direction in which Google is heading, and feel that a number of things could go wrong. We think the censorship of the marketplace is bad now, in the age of publishing conglomerates. What if one company had total control of everything? What if it kept track of every book, every page, that you read — and read you while you read, throwing ads into your peripheral vision? I’m curious to hear from readers what they feel could be the hazards of Google Print.