A public diary on themes around my books

July 26, 2009

If you checked out today’s New York Times Book Review section, you’ll see that FREE made the list in its first week of eligibility. It’s #12, tied for #11 (that’s what that little asterisk means).

We expect it to dip in its second week due to the free versions of the book cannibalizing sales, then stay strong longer than usual as the free offers expire and word of mouth from all the free readers turns into sales. So far, in the first two weeks, the book was downloaded in one digital form or another between 200,000 and 300,000 times (we’re still compiling stats). That’s a lot. Now we’ll find out what it means.

July 18, 2009

You can get FREE free on the Kindle (and Kindle iPhone app if you don’t have a Kindle) now. It’s been up for a few days and the free offer will end on Wed, Jul 22nd, so get it now. [UPDATE: the free offer is now over, and the book is now at the usual discount price of $9.99. It’s still available for free on Scribd, Google Books and Shortcovers, as well as in audiobook form on iTunes] As you can see from the above screenshot, it’s already the #1 Kindle book. (US only, I’m afraid.)

July 08, 2009

FREE is now available for free on Google Books, too. Like Scribd, this one is a web-based screen reading experience, but it has the added advantage of a live Table of Contents (see above), so you can easily get from chapter to chapter or pull up sidebars without having to page through the book.

Like the other free text versions, the Google Books one will be time-limited: one month. (The audiobook versions are the only ones that will remain free forever.)

Next up, in the coming week: free FREE on Kindle and other ebook readers, including the iPhone.

[UPDATE: many of these versions, including Google, are US-only. This is just a function of the way global book rights work, and the fragmentation thereof. I wish it were different, and we’re working to release free versions in other languages when those editions of the book come out, but in the meantime my apologies to readers outside the US if you’re not getting full text.]

July 06, 2009

We’re going to be rolling out the free digital forms of FREE over the next two weeks. First up: the Scribd form, right here on the blog (and anywhere else you want—it’s embeddable). This is the whole book!

(click “full screen” for a better reading experience).

Also released today: the free unabridged audiobook. You can either download the whole thing as zipped MP3 files, or play them on the Wired.com microsite.

Why is the whole book free in audio form, but half the book is $7.49? Because, as the Audible.com listing explains, “Get the point in half the time! In this abridged edition, the author handpicked the most important and engaging chapters and points, cutting three hours from the length without losing key concepts. Time is money!”

Over the next week or so, we’ll be releasing other versions, including iTunes podcast and download, Kindle, Google Books and more. All free, for varying lengths of time (from a week to forever). I’ll be tracking the stats for everything and sharing the results of these experiments here over the next month.

June 29, 2009

It’s now clear that the bane of my next year will be questions about the future of the newspaper industry from journalists. I don’t blame them—newspapers are indeed one of the industries most affected by Free (although that’s just one manifestation of their larger problem: having lost their monopoly on consumer attention). And neither I nor anybody else has any good answers, other than the newspaper business is probably going to shrink but not go away, and that the business model will have to change.

But since journalist Malcolm Gladwell has somewhat parochially decided to make the Future of Paid Journalism the focus of his review of Free (which is, ironically, free on the New Yorker’s website; perhaps this is something Gladwell should take up with David Remnick?), I’ll try to respond in a bit more detail.

Gladwell (who, by the way, I both like and admire, so let’s call this an intellectual debate between corporate cousins) writes:

“[Anderson argues that] newspapers need to accept that content is never again going to be worth what they want it to be worth, and reinvent their business. “Out of the bloodbath will come a new role for professional journalists,” [Anderson] predicts, and he goes on:

“There may be more of them, not fewer, as the ability to participate in journalism extends beyond the credentialed halls of traditional media. But they may be paid far less, and for many it won’t be a full time job at all. Journalism as a profession will share the stage with journalism as an avocation. Meanwhile, others may use their skills to teach and organize amateurs to do a better job covering their own communities, becoming more editor/coach than writer. If so, leveraging the Free—paying people to get other people to write for non-monetary rewards—may not be the enemy of professional journalists. Instead, it may be their salvation.”

Anderson is very good at paragraphs like this—with its reassuring arc from “bloodbath” to “salvation.” His advice is pithy, his tone uncompromising, and his subject matter perfectly timed for a moment when old-line content providers are desperate for answers. That said, it is not entirely clear what distinction is being marked between “paying people to get other people to write” and paying people to write. If you can afford to pay someone to get other people to write, why can’t you pay people to write? It would be nice to know, as well, just how a business goes about reorganizing itself around getting people to work for “non-monetary rewards.””

Well, I wouldn’t propose this as the future of all newspapers, but my model comes from personal experience. About three years ago, I started a parenting blog called GeekDad, and invited a few friends to join in. We soon attracted a large enough audience that it became apparent we couldn’t post enough to satisfy the demand, so I put out an open call for contributors. Out of the scores who replied, I picked a dozen, and one of them was Ken Denmead (at right, with Penn of Penn & Teller).

Ken is, by day, a civil engineer working on the BART extension in the SF Bay Area. But by night he is an amazing community manager. His leadership skills impressed me so much that I turned GeekDad over to him entirely about a year ago. Since then he’s recruited a team of volunteers who have grown the traffic ten-fold, to a million page views a month.

So here’s the calculus:

Wired.com makes good money selling ads on GeekDad (it’s very popular with advertisers)

Ken gets a nominal retainer, but has also managed to parlay GeekDad into a book deal, fulfilling his lifelong dream of being a writer

The other contributors largely write for free, although if one of their posts becomes insanely popular they’ll get a few bucks. None of them are doing it for the money, but instead for the fun, audience and satisfaction of writing about something they love and getting read by a lot of people.

So that’s the difference between “paying people to write” and “paying people to get other people to write”. Somewhere down the chain, the incentives go from monetary to nonmonetary (attention, reputation, expression, etc).

It works great for all involved. Is it the model for the newspaper industry? Maybe not all of it, but it is the only way I can think of to scale the economics of media down to the hyperlocal level. And I can imagine far more subjects that are better handled by well-coordinated amateurs than those that can support professional journalists. My business card says “Editor in Chief”, but if one of my children follows in my footsteps, I suspect their business card will say “Community Manager.” Both can be good careers.

June 24, 2009

As some of you may have seen, VQR rightly spotted that I failed to cite Wikipedia in some passages in Free. This is entirely my own screwup, and will be corrected in the ebook and digital forms before publication (and in the notes, which will be posted online at the same time the hardcover is released), but I did want to explain a bit more how it happened and what we’re doing about it.

First, as readers of my writings know, I’m a supporter of using Wikipedia as a source (not the only one, of course, and checking the original source material whenever possible). I disagree with those who say it should never be used. But the question is how to use it.

In my drafts, I had intended to blockquote Wikipedia passages, footnoting their URL. But my publisher, like many others, was uncomfortable with the changing nature of Wikipedia, and wanted me to timestamp each URL (something like this: http://en.wikipedia.org/wiki/Chris_Anderson page viewed on July 8th, 2008), which struck me as clumsy and archaic. So at the 11th hour we decided to kill the notes and footnotes entirely and I integrated the attributions into the copy.

In doing so, I went through the document and redid all the attributions, in three groups:

Long passages of direct quotes (indent, with source)

Intellectual debts, phrases and other credit due (author credited inline, as with Michael Pollan)

Source material without an individual author to credit (write-through, as in the case of Wikipedia)

Obviously in my rush at the end I missed a few of that last category, which is bad. As you’ll note, these are mostly on the margins of the book’s focus, mostly on historical asides, but that’s no excuse. I should have had a better process to make sure the write-through covered all the text that was not directly sourced.

Also note that VQR is not saying that all the highlighted text is plagiarism; much of it is actually properly cited and quoted excerpts of old NY Times articles and other historical sources. And as you’ll see, in most cases I did do a write-through of the non-quoted Wikipedia text, although clearly I didn’t go nearly far enough and too much of the original Wikipedia authors’ language remained (in a few cases I missed it entirely, such as that short Catholic church usury example, which was a total oversight). This was sloppy and inexcusable, but the part I feel worst about is that in our failure to find a good way to cite Wikipedia as the source we ended up not crediting it at all. That is, among other things, an injustice to the authors of the Wikipedia entry who had done such fine research in the first place, and I’d like to extend a special apology to them.

So now we’ve fixed the digital editions before publication, and we’ll publish those notes after all, online as they should have been to begin with. [UPDATE: A draft version is here. The final version will live in the right column of this blog permanently] That way the links are live and we don’t have to wrestle with how to freeze them in time, which is what threw me in the first place.

Here’s the statement that my publisher, Hyperion, released yesterday:

We are completely satisfied with Chris Anderson’s response. It was an unfortunate mistake, and we are working with the author to correct these errors both in the electronic edition before it posts, and in all future editions of the book.

June 22, 2009

We published an excerpt from the book in Wired this month. Here’s how it starts:

“In 1969, the Neiman Marcus catalog offered the first home PC, a stylish stand-up model called the Honeywell Kitchen Computer, priced at $10,600. The picture shows an aproned housewife caressing the machine, with this tag line: "If she can only cook as well as Honeywell can compute." That image should be on every cubicle in Silicon Valley; it's a testament both to what technologists get right and what they get badly wrong.

To their credit, they understood that Moore's law would bring computing within the reach of regular people. But they had no idea why anyone would want it. Despite countless brainstorming sessions and meetings on the subject, the only application the Honeywell team could think of for a home computer (aside from the perennial checkbook balancing) was recipe card management. So the Kitchen Computer was aimed at housewives and featured integrated counter space. Those housewives would, however, require a programming course (included in the price), since the only way to enter data was with binary toggle switches, and the machine's only display was binary lights. Needless to say, not a single Kitchen Computer is recorded as having sold.

Today, of course, we have computers in every home—and in every pocket and car and practically everywhere else. But one of the few things the average person doesn't use them for is managing recipe cards.

Don't blame Honeywell—blame the computing world of the 1960s. In those days, computers were expensive mainframes. Because processing power was so scarce and valuable, it was reserved for use by IT professionals, mostly working for big companies and the government. Engineers both built the computers and decided how to use them—no wonder they couldn't think of nonengineering applications.

But as the Kitchen Computer hinted, computers would soon get smaller and cheaper. This would take them out of the glass boxes of the mainframe world—and away from the IT establishment—and put them in the hands of consumers. And the real transformation would come when those regular folks found new ways to use computers, revealing their true potential.

All this was possible because Alan Kay, an engineer at Xerox's Palo Alto Research Center in the 1970s, understood what Moore's law was doing to the cost of computing. He decided to do what writer George Gilder calls "wasting transistors." Rather than reserve computing power for core information processing, Kay used outrageous amounts of it for frivolous stuff like drawing cartoons on the screen. Those cartoons—icons, windows, pointers, and animations—became the graphical user interface and eventually the Mac. By 1970s IT standards, Kay had "wasted" computing power. But in doing so he made computers simple enough for all of us to use. And then we changed the world by finding applications for them that the technologists had never dreamed of.

This is the power of waste. When scarce resources become abundant, smart people treat them differently, exploiting them rather than conserving them. It feels wrong, but done right it can change the world.”

March 24, 2009

As a former Economist technology writer, I understand the attractions of “simplify, then exaggerate”. But in the case of your article on freeconomics (“The end of free lunch—again”, March 19th), you have done a bit too much of both.

First, where is your evidence that online advertising is a failing model? To be sure, the crisis has dramatically slowed its growth (like that of every other industry) but unlike most others, it’s still positive. The worst forecasts for the year that I’ve seen predict that it may drop by a few percent from last year’s record figure. That’s a lot better than the offline advertising market and hardly supports your hyperbolic claim that “the demise of a popular but unsustainable business model now seems inevitable.”

Second, there is more to free business models online than advertising. The big shift since the crisis has been the rise of “freemium” (free+premium) models, where products and services are offered in free basic and paid premium versions. Think Flickr and Flickr Pro (more storage), virtually all online games and even your own site (some free and some paid content).

Finally, your scorn blinds you to the fact that this crazy idea of giving away content for free and supporting it by advertising is nearly a hundred years old. It is the basis of the standard radio and television broadcast model (“free to air” content) and countless other companies, from the free daily and weekly newspapers to the vast majority of media websites, including all of our own at Conde Nast. It works great—The Economist should try it!

Regards,

Chris Anderson

Editor in Chief, Wired Magazine and author of the forthcoming “Free: The Future of a Radical Price”

“Who started this rumour that all information should be free and why didn't we challenge this when it first came out? I say this in college classrooms and they start to throw their shoes at me.”

And so on…

My take: I actually don’t think it matters what Time or Newsweek does on the web; they both seem to be trending towards insignificance.

But some of the other Time Inc properties, such as People.com, are doing much better online. And the NYT is doing great. Should they charge?

I think they should—but not for everything and not for everyone. The old WSJ approach got the Freemium model about right, I thought. For such premier titles, which can credibly claim to be papers of record and thought leaders, there is clearly a class of readers who will pay what it costs to get that content.

But what WSJ.com used to do was to offer a backdoor to free content for another class of consumer: the social media maven. Paying subscribers could make content free to others by clicking on an icon that created a URL for a free version of the story that they could use for blogging or to submit to sites such as Digg or Yahoo Buzz.

The deal was essentially this: these often influential word-of-mouth generators could trade reputational and attention credits for free content. The content would be part of the online conversation, not walled off behind a paywall, and presumably some fraction of those who followed the links to free content would recognize the value in the premium content around it and subscribe. A very nice Freemium model, in other words.

Sadly, the WSJ doesn’t seem to do that anymore. The social media links it creates just go to short excerpts of the stories, and you have to subscribe for the whole thing. I suspect that this has had the effect of discouraging people from using those links, since it’s going to result in disappointment for most of the people who follow them. I certainly don’t see the WSJ mentioned much on Digg or Reddit, and that may be why.

But as the NYT considers a Freemium strategy, I’d encourage it to revisit the model that the WSJ abandoned. The old Times Select paywall kept its columnists out of the public debate, which annoyed them and diminished the Times’ influence. A more social media-friendly alternative would avoid that dead end, while reintroducing a direct revenue stream. Free may be the best price, but it needn’t be the only one.

February 12, 2009

Britain’s Royal Mail is trying something new with direct mail: sending people a box of free stuff. Called “Matter”, the first one went out in mid-December to 30,000 people who had signed up to receive it. The box contained books, DVDs, CDs, shower products, a candy bar, a pre-paid SIM card and a few other goodies.

The idea is that you try them, maybe give some to your friends, tell people about the one you like and otherwise interact with products in a way that’s more interesting than traditional direct-mail advertising.

The project was a collaboration with Tim Milne, who started the arts collective Artomatic. He says that the challenge so far has been getting advertisers to look past traditional direct mail. He told me:

“I've long believed that printed matter will gain new value in a digital world as everyone begins to crave the more physical / tactile / emotional nature of printed stuff.

Matter is revealing some interesting, unexpected behaviours. Matter is very social–people take the items (and sometimes the whole box) to work, to the pub and tell their friends and family about the cool things they’d been sent. In fact, even though it’s a physical medium, it behaves more like a digital channel.

The underlying concept–that if you give people nice things they’ll feel better towards you–is certainly true. What we’re beginning to discover is how the simple act of giving people something nice triggers a whole range of responses–using the objects, telling their friends and telling us what they think.”