The Glass Box And The Commonplace Book

The following is a transcript of the Hearst New Media lecture I gave last night at Columbia University, subtitled "Two Paths For The Future of Text." Thanks to everyone who came out, and to the Journalism school for the invitation.

I want to start with a page out of history—the handwriting of Thomas Jefferson, taken from one of his notebooks on religion. The words on this page belong to a long and fruitful tradition that peaked in Enlightenment-era Europe and America, particularly in England: the practice of maintaining a “commonplace” book.
Scholars, amateur scientists, aspiring men of letters—just about anyone with intellectual ambition in the seventeenth and eighteenth centuries was likely to keep a commonplace book. In its most customary form, “commonplacing,” as it was called, involved transcribing interesting or inspirational passages from one’s reading, assembling a personalized encyclopedia of quotations. It was a kind of solitary version of the original web logs: an archive of interesting tidbits that one encountered during one’s textual browsing. The great minds of the period—Milton, Bacon, Locke—were zealous believers in the memory-enhancing powers of the commonplace book. There is a distinct self-help quality to the early descriptions of commonplacing’s virtues: in the words of one advocate, maintaining the books enabled one to “lay up a fund of knowledge, from which we may at all times select what is useful in the several pursuits of life.”

The philosopher John Locke first began maintaining a commonplace book in 1652, during his first year at Oxford. Over the next decade he developed and refined an elaborate system for indexing the book’s content. Locke thought his method important enough that he appended it to a printing of his canonical work, An Essay Concerning Human Understanding. Here’s an excerpt from his “instructions for use”:

When I meet with any thing, that I think fit to put into my common-place-book, I first find a proper head. Suppose for example that the head be EPISTOLA, I look unto the index for the first letter and the following vowel which in this instance are E. i. if in the space marked E. i. there is any number that directs me to the page designed for words that begin with an E and whose first vowel after the initial letter is I, I must then write under the word Epistola in that page what I have to remark.

Locke’s approach seems almost comical in its intricacy, but it was a response to a specific set of design constraints: creating a functional index in only two pages that could be expanded as the commonplace book accumulated more quotes and observations. In a certain sense, this is a search algorithm, a defined series of steps that allows the user to index the text in a way that makes it easier to query. Locke’s method proved so popular that a century later, an enterprising publisher named John Bell printed a notebook entitled: “Bell’s Common-Place Book, Formed generally upon the Principles Recommended and Practised by Mr Locke.” Put another way, Bell created a commonplace book by commonplacing someone else’s technique for maintaining a commonplace book. The book included eight pages of instructions on Locke’s indexing method, a system which not only made it easier to find passages, but also served the higher purpose of “facilitat[ing] reflexive thought.”
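Locke’s scheme maps surprisingly well onto modern data structures. Here is a rough, hypothetical sketch of it as code—the class and function names are my own, not Locke’s or the lecture’s—treating his head-plus-first-vowel rule as a hash function over a small fixed key space:

```python
# A sketch of Locke's commonplace-book index as a modern lookup table.
# Each "head" word is filed under the key (initial letter, first vowel
# appearing after the initial letter) -- e.g. EPISTOLA files under E.i.

VOWELS = "aeiou"

def locke_key(head):
    """Return Locke's two-character index key for a head word.
    Assumes the head contains a vowel after its initial letter."""
    head = head.lower()
    initial = head[0]
    first_vowel = next(c for c in head[1:] if c in VOWELS)
    return (initial, first_vowel)

class CommonplaceBook:
    def __init__(self):
        # (letter, vowel) -> list of (head, passage) entries
        self.index = {}

    def add(self, head, passage):
        self.index.setdefault(locke_key(head), []).append((head, passage))

    def lookup(self, head):
        # Scan the page for that key, keeping only the matching head,
        # just as Locke scanned the page designated for E.i.
        return [p for h, p in self.index.get(locke_key(head), []) if h == head]

book = CommonplaceBook()
book.add("Epistola", "A letter ought to be written as one speaks...")
print(locke_key("Epistola"))   # ('e', 'i')
print(book.lookup("Epistola"))
```

The tradeoff is the one Locke faced: a couple dozen initial letters times five vowels yields a small, fixed index that fits in two pages, at the cost of collisions. Unrelated heads such as EPISTOLA and EXILIUM both file under E.i., which is why the head word must still be written beside every entry on the page.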

The tradition of the commonplace book contains a central tension between order and chaos, between the desire for methodical arrangement, and the desire for surprising new links of association. The historian Robert Darnton describes this tangled mix of writing and reading:

Unlike modern readers, who follow the flow of a narrative from beginning to end, early modern Englishmen read in fits and starts and jumped from book to book. They broke texts into fragments and assembled them into new patterns by transcribing them in different sections of their notebooks. Then they reread the copies and rearranged the patterns while adding more excerpts. Reading and writing were therefore inseparable activities. They belonged to a continuous effort to make sense of things, for the world was full of signs: you could read your way through it; and by keeping an account of your readings, you made a book of your own, one stamped with your personality.

Each rereading of the commonplace book becomes a new kind of revelation. You see the evolutionary paths of all your past hunches: the ones that turned out to be red herrings; the ones that turned out to be too obvious to write; even the ones that turned into entire books. But each encounter holds the promise that some long-forgotten hunch will connect in a new way with some emerging obsession. The beauty of Locke’s scheme was that it provided just enough order to find snippets when you were looking for them, but at the same time it allowed the main body of the commonplace book to have its own unruly, unplanned meanderings.

But all of this magic was predicated on one thing: that the words could be copied, re-arranged, put to surprising new uses in surprising new contexts. By stitching together passages written by multiple authors, without their explicit permission or consultation, some new awareness could take shape.

Since the heyday of the commonplace book, there have been a few isolated attempts to turn these textual remixes into a finished product, into a standalone work of collage. The most famous is probably Jefferson’s bible, his controversial “remix” of the New Testament. There’s also Walter Benjamin’s unfinished, and ultimately unpublishable, Passagenwerk, or “Arcades Project,” his rumination on the early shopping malls of Paris built out of photos, quotes, and aphoristic musings. Just this year, David Shields published a book, Reality Hunger, built out of quotes from a wide variety of sources. And of course, there are parallel works in music, painting, and architecture that are constructed out of “quotes” lifted from original sources and remixed in imaginative ways.

***

NOW, BEFORE I TAKE the next step in the argument, I want to pause for a brief autobiographical confession. Exactly twenty years ago I arrived here at Columbia as a grad student, holding an undergraduate degree in Semiotics, much to the bafflement of my parents. I was here to study literary theory, to work with giants like Edward Said and Gayatri Spivak. I took a seminar on Jacques Derrida my second year here, and Derrida actually showed up in person for the first class, the silent, white-haired dude in the corner who didn’t introduce himself until the professor arrived. I could talk about the open text and deconstruction and the death of the author with the best of them. Technically I was enrolled in the English Department, but even that was misleading. All of my writing read like it had been translated from the French.

I tell you this story because I think 22-year-old Morningside Heights Steven would have listened to those opening remarks and nodded enthusiastically at where I was going. The idea of a purely linear text is a myth; readers stitch together meanings in much more complex ways than we have traditionally imagined; the true text is more of a network than a single, fixed document. These were all the defining beliefs of postmodern theory. I still think all of these things are true, though I choose to say them slightly differently.

But I think 22-year-old Steven would have had a more difficult time wrapping his head around this next image. This is what happens when you search Google for the ostensible topic of our discussion tonight: “journalism.”

What I want to suggest to you is that, in some improbable way, this page is as much of an heir to the structure of a commonplace book as the most avant-garde textual collage. Who is the “author” of this page? There are, in all likelihood, thousands of them. It has been constructed, algorithmically, by remixing small snippets of text from diverse sources, with diverse goals, and transformed into something categorically different and genuinely valuable. In the center column, we have short snippets of text written by ten individuals or groups, though of course, Google reports that it has 32 million more snippets to survey if we want to keep clicking. The selection of these initial ten links is itself dependent on millions of other snippets of text that link to these and other journalism-related pages on the Web. Along the right side of the page, we have short snippets of text written by five advertisers, mostly journalism schools as it happens, though they are in a silent competition with other snippets of text created by other advertisers bidding to be on this page. And then we have the text in the search field, created by me, which summons this entire network of text together in a fraction of a second.

What you see on this page is, in a very real sense, textual play: the recombining of words into new forms and associations that their original creators never dreamed of. But what separates it from the textual play that I was earnestly studying twenty years ago is the fact that it has engendered a two hundred billion dollar business.

***

WHEN TEXT IS free to combine in new, surprising ways, new forms of value are created. Value for consumers searching for information, value for advertisers trying to share their messages with consumers searching for related topics, value for content creators who want an audience. And of course, value to the entity that serves as the middleman between all those different groups. This is in part what Jeff Jarvis has called the “link economy,” but as Jarvis has himself observed, it is not just a matter of links. What is crucial to this system is that text can be easily moved and re-contextualized and analyzed, sometimes by humans and sometimes by machines.

Ecologists talk about the “productivity” of an ecosystem, which is a measure of how effectively the ecosystem converts the energy and nutrients coming into the system into biological growth. A productive ecosystem, like a rainforest, sustains more life per unit of energy than an unproductive ecosystem, like a desert. We need a comparable yardstick for information systems, a measure of a system’s ability to extract value from a given unit of information. Call it, in this example: textual productivity. By creating fluid networks of words, by creating those digital-age commonplaces, we increase the textual productivity of the system.

The overall increase in textual productivity may be the single most important fact about the Web’s growth over the past fifteen years. Think about it this way: let’s say it’s 1995, and you are cultivating a page of “hot links” to interesting discoveries on the Web. You find an article about a Columbia journalism lecture and you link to it on your page. The information value you have created is useful exclusively to two groups: people interested in journalism who happen to visit your page, and the people maintaining the Columbia page, who benefit from the increased traffic. Fast forward to 2010: you check in at Foursquare for this lecture tonight, and tweet a link to a description of the talk. What happens to that information? For starters, it goes out to friends of yours, and into your Twitter feed, and into Google’s index. The geo-data embedded in the link alerts local businesses, which can offer you promotions through Foursquare; the link to the talk helps Google build its index of the Web, which then attracts advertisers interested in your location or the topic of journalism itself. Because that tiny little snippet of information is free to make new connections, by checking in here you are helping your friends figure out what to do tonight; you’re helping the Journalism school promote this event; you’re helping the bar across Broadway attract more customers; you’re helping Google organize the Web; you’re helping people searching Google for information about journalism; you’re helping journalism schools advertising on Google attract new students. Not bad for 140 characters.

When text is free to flow and combine, new forms of value are created, and the overall productivity of the system increases. But of course, when text is free, value is sometimes subtracted for the publishers who used to charge for that text. So let me make one point clear: recognizing the value creation of open textual networks is not an argument against paywalls. I happen to think it is perfectly reasonable for online publishers to ask people to pay for the privilege of reading their journalism. If people are willing to buy virtual tractors on Farmville, or cough up two bucks for the Flight Control app on the iPhone, a meaningful number of people are going to be willing to pay for a well-reported and well-edited newspaper or magazine. I don’t think erecting paywalls is some kind of magic cure that will instantly restore the newspaper business to the forty-percent margins it commanded back in the day when newspapers had a virtual monopoly on local ads and classifieds. But there is nothing in the idea of charging for content that is in conflict with the value of textual networks. Search engines can still index the paywalled content, and there are a number of clever schemes out there—including the metered usage model that the Times is apparently going to roll out—that allow publishers to charge for content while still allowing that content to be linked to, excerpted, and remixed in new ways.

But there are worse things than paywalls. Take a look at this screen. This, as you all probably know, is Apple’s new iBooks application for the iPad.
What I’ve done here is shown you what happens when you try to copy a paragraph of text. You get the familiar iPhone-style clipping handles, and you get two options, “Highlight” and “Bookmark.” But you can’t actually copy the text, to paste it into your own private commonplace book, or email it to a friend, or blog about it. And of course there’s no way to link to it. What’s worse: the book in question is Penguin’s edition of Darwin’s Descent of Man, which is in the public domain. Those are our words on that screen. We have a right to them.

Interestingly, the Kindle – even the Kindle app for the iPad – does allow you to clip passages and automatically store them in a file that can be downloaded to your computer, where you can post, archive, forward, and tweet them to your heart’s content. You are apparently limited to a certain percentage of the overall text of the book, which is perfectly reasonable in my mind. The process of actually getting your hands on the text is a little complicated, probably deliberately so, but I can live with it.

But it gets worse. This is a page from the NY Times Editor’s Choice iPad app, showing what happens when you try simply to select text from an article. You can't do it. Just so you know that I am an equal opportunity critic, this is what happens when you try to copy text on the WSJ’s app.
You can’t do anything with the words. They’re frozen there, uncopyable, unlinkable, like some beautiful ice sculpture. Frozen is the right word. We’re so used to selecting and copying digital text that encountering text on a screen that can’t be selected leaves you with a strange initial assumption: that the application has crashed, and the screen is frozen.

Now, it may well be true that Apple, and The Times, and The Journal intend to add extensive tools that encourage the textual productivity of their apps. If that happens, I will be delighted. The iPad is only about two weeks old, after all, and it famously took Apple two years to introduce copy-and-paste to the iPhone OS. But there are plenty of first-generation iPad apps that facilitate new textual networks, like Twitterrific or Evernote or Instapaper. Apple itself has made it incredibly easy for developers to build rich connections to the Web into their apps through its WebKit framework. And so I get worried when I look at iBooks, and the Journal and Times apps. In part because these are both extremely thoughtfully designed apps. I happily purchased both of them, and use them both. They have a lot of elements that I like. It’s precisely the skill and care with which they have been built that scares me, because that makes the frozen nature of the text seem more like a feature than a bug, something they’ve deliberately chosen, rather than a flaw that they didn’t have time to correct.

The contrast here suggests to me that we have two potential futures ahead of us, where digital text is concerned, or that the future is going to involve a battle between two contradictory impulses. We can try to put a protective layer of glass over the words, or we can embrace the idea that we are all better off when words are allowed to network with each other. What’s the point of going to all this trouble to build machines capable of displaying digital text if we can’t exploit the basic interactivity of that text? People don’t want to read on a screen just for the thrill of it; even with the iPad’s beautiful display, reading on paper is still a higher-resolution experience, and much easier on the eyes. Yes, the iPad makes it easier to carry around a dozen books and magazines, but that’s not the only promise of the technology. The promise also lies in doing things with the words, forging new links of association, remixing them. We have all the tools at our disposal to create commonplace books that would astound Locke and Jefferson. And yet we are, deliberately, trying to crawl back into the glass box.

As with paywalls, I am not dogmatic about these things. I don’t think it’s incumbent upon the New York Times or The Wall Street Journal to allow all their content to flow freely through the infosphere with no restrictions. I do not pull out my crucifix when people use the phrase “Digital Rights Management.” If publishers want to put reasonable limits on what their audience can do with their words, I’m totally fine with that. As I said, I think the Kindle has a workable compromise, though I would like to see it improved in a few key areas. But I also don’t want to mince words. When your digital news feed doesn’t contain links, when it cannot be linked to, when it can’t be indexed, when you can’t copy a paragraph and paste it into another application: when this happens your news feed is not flawed or backwards looking or frustrating. It is broken.

***

I SAID THERE were two potential futures—the glass box and the commonplace book—and the good news is that I think the commonplace book model has a number of trends on its side. The web is bursting with organizations that recognize the importance of textual productivity, many of them explicitly trying to imagine what journalism is going to look like in this new world. Here’s one: ProPublica, the nonprofit news org which won a Pulitzer Prize last week for its collaboration with the NY Times. I draw your attention to the bar that runs along the top of every page on the site. “Steal Our Stories.” This is playful but important: ProPublica has licensed its content under Creative Commons, so that anyone who wants to publish their articles can do so, as long as they credit (and link to) ProPublica and include all links in the original story. Instead of putting their journalism under glass, they’re effectively saying to their text: go forth and multiply.

One of the reasons ProPublica can do this, of course, is that they are a non-profit whose mission is to be influential, not to make money. It seems to me that this is one area that has been under-analyzed in the vast, sprawling conversation about the future of journalism over the past year or so. A number of commentators have discussed the role of non-profits in filling the hole created by the decline of print newspapers. But they have underestimated the textual productivity of organizations that are incentivized to connect, not protect, their words. A single piece of information designed to flow through the entire ecosystem of news will create more value than a piece of information sealed up in a glass box. And ProPublica, of course, is just the tip of the iceberg. There are thousands of organizations – some of them focused on journalism, some of them government-based, some of them new creatures indigenous to the web – that create information that can be freely recombined into private commonplace books or Pulitzer Prize-winning investigative journalism. A journalist today can get the idea for an investigation from a document on Wikileaks, get background information from Wikipedia, download government statistics or transcripts from open.gov or the Sunlight Foundation. You cannot measure the health of journalism simply by looking at the number of editors and reporters on the payroll of newspapers. There are undoubtedly going to be fewer of them. The question is whether that loss is going to be offset by the tremendous increase in textual productivity we get from a connected web. Presuming, of course, that we don’t replace that web with glass boxes.

There is an additional civic value here, one that goes beyond simply preserving professional journalism. For about ten years now, a few of us have been waging a sometimes lonely battle against the premise that the internet leads to political echo chambers, where like-minded partisans reinforce their beliefs by filtering out dissenting views, an argument associated with the legal scholar and now Obama administration official Cass Sunstein. This is Sunstein’s description of the phenomenon:

If Republicans are talking only with Republicans, if Democrats are talking primarily with Democrats, if members of the religious right speak mostly to each other, and if radical feminists talk largely to radical feminists, there is a potential for the development of different forms of extremism, and for profound mutual misunderstandings with individuals outside the group.

My argument has been that the connective power of the web is stronger than its filtering, that even the most partisan blogs are usually only one click away from their political opposites, whereas in the old world of print magazines or face-to-face groups, the opportunity to stumble across an opposing point of view was much rarer. Some of you might have seen a David Brooks column this week that reported on a new study that actually looked at exposure to differing points of view in various forms of media, and in real-world encounters. It turns out that the web, at least according to this study, actually reduces the echo-chamber effect, compared to real-world civic space. People who spend a lot of time on political sites are far more likely to encounter diverse perspectives than people who hang out with their friends and colleagues at the bar or the watercooler. As Brooks described it, “This study suggests that Internet users are a bunch of ideological Jack Kerouacs. They’re not burrowing down into comforting nests. They’re cruising far and wide looking for adventure, information, combat and arousal.”

This is just one study, of course, and these are complicated social realities. I think it is fair to say that our pundits and social critics can no longer make the easy assumption that the web and the blogosphere are echo-chamber amplifiers. But whether or not this study proves to be accurate, one thing is certain. The force that enables these unlikely encounters between people of different persuasions, the force that makes the web a space of serendipity and discovery, is precisely the open, combinatorial, connective nature of the medium. So when we choose to take our text out of that medium, when we keep our words from being copied, linked, indexed, that’s a choice with real civic consequences that are not to be taken lightly.

The reason the web works as wonderfully as it does is because the medium leads us, sometimes against our will, into common places, not glass boxes. It’s our job—as journalists, as educators, as publishers, as software developers, and maybe most importantly, as readers—to keep those connections alive.

Comments

One minor note, which I might forget by the time I finish reading: Walter Benjamin's Passagenwerk may have been unpublishable in his lifetime, because (as SBJ says) Benjamin never completed it. However, it eventually _was_ edited, translated, and published (in English), by Harvard, in 2002, with at least a handful of photos. At the risk of unseemly self-promotion, I can point out a diptych I made with one of those photos plus one of mine: http://www.flickr.com/photos/photogrammaton/69870741/in/set-1491813/.

I still haven't read the Booth School of Business study that David Brooks cited, and I may be misunderstanding even the summary that Brooks and Berlin have provided. That said, it still seems possible to me that the Internet is, on the whole, relatively neutral, i.e., that it neither encourages nor discourages the echo chamber, just as, on the whole, bookstores are neutral, and books and magazines themselves are neutral, and TV is neutral, and a world wired with telephones is neutral. I seem to recall predictions that TV would make it possible for diverse peoples to understand one another better; I believe the same prediction was made (unlikely as it sounds now) for telephones. Cass Sunstein's inversion of that prediction for the Internet, as well as the Internet-optimist predictions by SBJ and others, may all be canards. But I can see that this isn't SBJ's main point.

The analogy with the commonplace book of John Locke fails. John Locke had to copy the pieces he liked by hand; he couldn't 'clip and copy' the text with one click of a mouse. And of course, with a glass-text future, readers are still free to whip out their writing pads, or even start up their text editor, and type out the words they like. It costs more time than the easy copy-paste, but that is exactly what Locke would have done.

daniel schut makes a point that is to some extent covered in the post (in which Johnson notes that limits on percentage of text copied, etc. are understandable).

The real concern that "content owners" have is the wholesale replication and near-infinite reproduction of entire works. This is an even greater concern for music and movie distributors because there is no (theoretical) degradation of quality across even a million digital copies.

Because "fair use" reproduction has proven impossible (or undesirable) to define, content owners have drawn a line in the sand at zero reproduction, and have repeatedly demonstrated that their main interest is to wring every last egg from the golden goose before it drops dead of exhaustion. Which it is doing.

I think this piece is a must-read for anyone interested in the future of information or the media, and I'd suggest it should also have noted that as hardware makers (e.g. Apple) battle to win the [content delivery] hardware war, we are already seeing them getting into the information censorship business (e.g. Apple's rejection of Pulitzer Prize-winning cartoonist Mark Fiore's iPad app, later approved, but only, as Fiore noted, because he'd won the Pulitzer).

The question isn't what Locke "would have done," it's what Locke would do today, with today's technology. The technology of John Locke's time was pen, ink, and paper. The technology of our time includes cut-and-paste. I have no doubt that 21st century Locke would quite happily use cut-and-paste to construct his commonplace blog if his iPad allowed it.

This is, as one would expect, a thoughtful and well-turned speech. But, like Daniel Schut in the comment above, I bridle a bit at the analogy between commonplacing and cutting-and-pasting. While I think there is, at a mechanical level, a clear parallel between the two practices, at an intellectual level they could hardly be more different. Commonplacing was a means of more deeply internalizing an author's words, as its early practitioners often pointed out. It was a sign of attentiveness, of profound engagement with text. The cutting and pasting, or mashing up, that we do online today tends to be much more cursory and superficial - it's done with a couple of mouse clicks rather than with the painstaking retracing of a passage in longhand. And what's cut-and-pasted is rarely kept in the way that the passages in commonplace books were kept. (Rewriting a passage was often the first step in a process of memorization.) With cutting-and-pasting, the words remain external; we borrow them, briefly, rather than making them our own.

Chip Bayers may be right that, if alive today, Locke would cut-and-paste (or merely link to) interesting passages rather than copy them in longhand, but, if so, he would be doing something very different from commonplacing.

Since you cannot copy the text, would you consider bookmarking pages a type of commonplace book? I know lots of people who bookmark every interesting page they come across and kind of sort them via folders etc. in the bookmarks of their respective browsers.

Great comments/observations everyone. Wanted to reply specifically to Daniel Schut and Nick Carr, who point out in different ways an ambiguity that should have been clearer in the piece. I certainly don't mean to imply that a Google search results page is in any way a reproduction of 18th-century commonplacing, or even that blogging is a direct reproduction. What I was trying to say is that commonplacing shows that re-arranging bits of text out of context, from different authors, creates a new kind of value that is different from the original value of the text. And so when we introduce artificial blocks that make it harder to copy or link to digital text, we are limiting all those potentially valuable new uses.

And yes, Nick, it's true that the way blogs and tweets work is far less studious and attentive than John Locke was with his commonplace book, but it's just as important to point out that Locke's book was exclusively a private affair: whatever he captured went into his mind alone, whereas the blogger can now circulate his discoveries through the minds of thousands. Not the same thing, by any means, but valuable in a different way.

It seems to me that this talk represents some of the most important thinking going on in intellectual life today. The possibility of using the new communication tools to spread thought around is one of the things that keeps me from despairing in the face of so much that is dispiriting in our public life.

It should also be noted that when texts are remixed deliberately into a new text, *produced* explicitly as something new, the creator(s) become much more familiar with their sources than if they are simply archived.

Ask my digital media production students how familiar they become with their source materials when they produce a video mashup or digital story: it is surely at a level approaching Locke's "more deeply internalizing an author's words" if not, perhaps, exceeding it. This is because there are specific goals in mind when working with the pieces of the originals that drive the depth of their encounter with those originals.

Not only are digital technologies changing the way we archive and filter texts, but they are changing the ways we make new ones as well.

I think the question of longhand vs. copy/paste is and should be something to be decided evolutionarily. That is to say, the current technology should moot what was once not possible, rather than conforming to the past as something de facto. I'm hoping the conditions of the iPad I bought will be challenged by similar devices to come: AndroidSlate, UbuntuPad... in all the sense that Steven notes. But then there's what the iPad itself moots, which is up to the benign hack-o-sphere, or just crowdwise. Ease or difficulty shouldn't be the litmus on the quality of the results. Rather, evolution favors the possible?

BTW: I was able to copy/paste out of my iBooks copies of James Joyce's Ulysses and The Complete Works of William Shakespeare. Haven't had a chance to download my fill of Darwin yet. Looking forward to doing the same reading there as on the Newton my iPad is superseding.

While I certainly agree with your assessment of the flaws (whether deliberate or not) of the iBooks/NYT/WSJ apps for the iPad, I think they serve to highlight the major contrast between the intended functions of the commonplace book and the iPad: the iPad seems deliberately designed for more passive readers; that its textual functionality is significantly pared down from that of a MacBook or even a cheap netbook is the *point* for most people who will use it. It's a media viewer and player, not a creative tool like the MacBook. It is *for* passive consumption.

The commonplace book, in the technological idiom of its time, was a tool for the active refinement and incubation of content.

But nothing other than mental inflexibility or functional illiteracy prevents anyone from "remixing" or commonplace-booking that content even from an iPad app with, as commenters above have noted, nothing but pen and paper. Much as I agree that the design of the apps in question is backwards and silly, it's not a real hindrance to anyone motivated to create derivative content, as long as they can still *write.* Nothing prevents anyone from copying a passage down and retyping it into a tweet, Facebook post, or blog.

It's for this reason that I believe it's ever more important, not less, to ensure that students are learning the basic, low-tech mechanics of writing, and the kind of active reading, writing, and reflection that Jefferson and Locke practiced in their commonplace books, even as our society's and economy's dependence on computers grows. Several months ago, a column on the Newsweek site argued that we should stop teaching old-fashioned cursive to elementary school students, since they'll come of age in an economy that places a premium on computer literacy, not longhand writing. I think the issues you call attention to with the iPad demonstrate why competence and comfort with old-fashioned skills like handwriting are more important than ever, not less.

I am a PhD student working on my literature review. At the moment, I spend at least four hours a day typing highlighted quotes into Microsoft Word from the stock of books and articles I have read over the past few months. I am still undecided whether it would be quicker to sit on a beach for the next two years until some future hero of mine makes an iPad/Kindle product that I can highlight in, and which automatically transfers the highlights into Microsoft Word or some equivalent. And if they could include a function that copies not only the text but also provides an accurate citation in accordance with whatever referencing method my supervisor feels is best that week, well, then there would be me, a deck chair, and a daiquiri waiting for them on a beach in Fiji.

Writing or typing is a memory aid; copy/pasting is not. Remixing and reuse are conceptual aids, not memory aids, and in that respect writing, typing, and copy/pasting are alike. We tend, I think, to use our computers as memory aids: we archive and search rather than keeping data in active (biological) memory. This expands our access to data even as it degrades our ability to access the data that exists (or doesn't) in our minds.

As someone who keeps a daily electronic version of a commonplace book, I typically annotate and/or comment on a clipping in the same action, so it is definitely not a mere housekeeping function of "copy and paste" either. The software I'm using (which I designed) has always been mindful of both pre- and post-processing of text, clippings, and media. The process is very organic, and it's flexible enough to allow deeper, more purposeful reflection on the contents. It is "my" notebook. Not all personal learning need be intellectual, either: humor and satire also populate my notebooks.

On a larger, more philosophical plane: instead of just one Wikipedia, anyone can construct their own personal encyclopedia galactica of what matters, from the sublime to religious consciousness.

Similar "read only" features are, alas, becoming commonplace in media companies' apps for Android. I've been trying the New York Times and USA Today apps on a Droid phone and have noticed the same limitation.

Perhaps copying and pasting are features being saved for future "pay" versions? Hmm. Would people pay for the "freedom to quote"? (To quote without retyping, that is.)

In the meantime, the "mobile.nytimes.com" HTML+CSS version is more flexible than the "value added" app. Plain open-standards CSS and HTML allow me to link news sites, blogs and Delicious bookmarks in a personal hypertext Web reminiscent of Vannevar Bush's Memex -- which I'd consider a hypertextual commonplace book in its own hypothetical 1945 way. http://bit.ly/bkPTP8

I am a student. After researching online for papers and reading articles and blogs, I have come to the conclusion that reading online always involves some form of interaction. Technology, I think, serves as a way for us to interact with others both directly and indirectly.

When we blog, there is direct interaction. We are voicing our opinions. On the other hand, when we "copy and paste", we are indirectly interacting with the writer. This is because we are using their ideas for our own purposes without communicating with the writers themselves.

Overall, I believe many of us long for some form of interaction when we read through a screen, which is why many of us are disappointed when we read through a "glass box." We are given "untouchable" text, which is all a printed book alone consists of. The essence of online reading, I feel, should be the ability to experience interaction and connection with both readers and writers at a global level.

Locke's technique that you describe above is literally an example of a hashing algorithm: the generation of a key to a storage location for an item based on a computation involving the item's values (in this case the concatenation of the head word's first letter with the first vowel that follows it).

If the key for the item turns out not to be unique - for instance, if Locke found another item already on that page in his common-place book - then a simple linear search is performed through the list of items sharing that key to find the desired match.

The lexical analyzers of modern programming languages employ just this method for storing and subsequently retrieving text items.
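The parallel is close enough that Locke's scheme can be sketched in a few lines of Python. This is a minimal illustration, with helper names of my own invention rather than anything from Locke or the lecture:

```python
# Locke's indexing rule as a hash function: the key for a head word is its
# first letter plus the first vowel that follows it ("Epistola" -> "EI").
def locke_key(head: str) -> str:
    head = head.upper()
    for ch in head[1:]:
        if ch in "AEIOU":
            return head[0] + ch
    return head[0]  # no vowel after the initial: fall back to the letter alone

# The book maps each key to the list of entries filed under it. Two heads
# that share a key (a collision) simply share the page, and retrieval is a
# linear scan through that short list, just as described above.
book: dict[str, list[tuple[str, str]]] = {}

def file_entry(head: str, remark: str) -> None:
    book.setdefault(locke_key(head), []).append((head, remark))

def find_remarks(head: str) -> list[str]:
    return [remark for h, remark in book.get(locke_key(head), []) if h == head]

file_entry("Epistola", "on letter-writing")
file_entry("Epicurus", "on pleasure")  # also keyed "EI": a collision, chained
```

After those two entries, `find_remarks("Epistola")` scans the shared "EI" page and returns only the letter-writing remark: hashing with separate chaining, circa 1652.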
