Posts about correction

I was listening to Peter Day’s wonderful BBC business show as he interviewed Gordon Moore, who said that his first version of his law had transistors doubling on chips in a year but the public version made that two years. He noted that it’s often said his law calls for doubling every 18 months but that’s not true; he didn’t say that.

Sure enough, that error meme popped up the next day as I read John Markoff’s story about advances in computer storage and he said that Moore’s Law “decrees that the number of transistors on a silicon chip doubles roughly every 18 months.”

Just for the hell of it, I checked with that supposedly fatally flawed Wikipedia, and it said: “Moore’s Law describes an important trend in the history of computer hardware: that the number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years.” But the accompanying graphic gives the impression that 18 months is still the number.
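The gap between the two versions compounds quickly, which is why the error matters. A back-of-the-envelope sketch (my own illustration, not Moore’s formulation) comparing the two doubling periods over a decade:

```python
# Compare transistor growth under Moore's stated two-year doubling
# versus the mistaken 18-month version of the "law."
def growth_factor(years, doubling_years):
    """How many times the transistor count multiplies over `years`."""
    return 2 ** (years / doubling_years)

decade_at_two_years = growth_factor(10, 2.0)  # Moore's stated period
decade_at_18_months = growth_factor(10, 1.5)  # the mistaken period

print(round(decade_at_two_years))   # about 32x over a decade
print(round(decade_at_18_months))   # about 102x over a decade
```

Over ten years, the 18-month meme predicts roughly three times the growth the two-year version does — a meaningful difference for anyone citing the law.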

I say this not to nya-nya the Times. Search at Google for “Gordon Moore” and “18 months” and you’ll find lots of erroneous statements of the law. Still, the Times’ story is one more expression of a mistake, which many will read and spread yet further.

So I raise again the question of how we can better map content and corrections. How does Moore ensure there is a definitive statement of his law? How do we know it comes from him? Once it’s acknowledged as correct, how do we notify those who got it wrong so they can correct it and start spreading the right meme? Truth is a game of whack-a-mole.

I think there is an elegantly simple solution to the problem of attaching corrections to earlier errors in news: It’s the link, the tag, and the content map.

There has been a great deal of discussion, following NY Times ombudsman Clark Hoyt’s column on errors, regarding what to do about old, incorrect articles on a subject that come up higher in search results than newer, corrected articles. Suggested solutions range from killing the old articles, which Hoyt considers, to correcting them, to relying on the web and search. I agree most with that last solution, which comes from taguru David Weinberger. Nicholas Carr gets it exactly backwards when he says that search-engine optimization of article archives manipulates history and so old articles should be killed to make the web forget; that would be the criminal manipulation of history. [See correction below – ed] Weinberger says, for example, that if the Times killed all of Judy Miller’s incorrect stories we would be left without an understanding of the paper’s role in the Iraq invasion. I would follow the ethic of the correction I have learned in the blog world, a standard that requires openness and transparency (that is, admitting our errors as we correct them — quickly).

I say we can use the architecture of the web to fix errors and follow the ethic of the open correction, using those existing tools I listed above. Consider the case of a Times reporter writing an article that follows up on and corrects an earlier article. You can bet that the reporter writing the later story looked up the prior art; we are all trained to check the clip file. So there is likely to be knowledge of the conflict. In this case, here’s how the two can be connected:

* The reporter or editor can link to the old, incorrect article. The web site can then sense any internal links to the original article and display those links on it. If you find the wrong article in a search, you can see that there is a follow-up. Indeed, that follow-up could be labeled “correction” to make it apparent. And the Times site could display anything with the “correction” tag separately and prominently.

* Even if the two are not explicitly linked, they can be connected with tags. If reporters and editors both tag their stories about the subjects, they can be connected.

* Say they aren’t tagged. Their shared topicality can still be sensed. I don’t mean this to be a plug for Daylife, but finding such connections is turning out to be one of the great values of analyzing the body of news, inside one site or across all.

* Now let’s say the correction does not come from the paper that reported the error but from without. Let’s say that here, on Buzzmachine, I write a correction about a Times article. I could link to it and use the tag “correction” and that would then be discoverable (“show me all links to this article tagged ‘correction'”). I’d argue that the Times should display such links. But if they don’t, I’ll suggest that Craig Silverman could make a service of this at Regret the Error.

* And let’s say this isn’t about an explicit correction but instead about followups and more information. This is why I want to see the map of content and all its interrelations.

* Now if you want to get really ambitious, it’d be great if I could subscribe to old articles I’d read or written about so I could be alerted if there are any corrections, an idea I talked about last year. I could easily see becoming inundated with corrections but I think there’s a way to prioritize them.

But now pull back to the simplest level: If the Times linked to and tagged articles and exposed the links among them, many of the problems Hoyt et al wrote about would be fixed.
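The mechanics above are simple enough to sketch. Here is a minimal illustration (a hypothetical data model of my own devising, not any real CMS or Times API) of how recorded inbound links plus tags make “show me all links to this article tagged ‘correction'” a plain lookup:

```python
# A toy registry of inbound links: for each target article, record
# who linked to it and with what tags.
from collections import defaultdict

# inbound_links[target_url] -> list of (source_url, set_of_tags)
inbound_links = defaultdict(list)

def register_link(source_url, target_url, tags):
    """Record that source_url links to target_url with the given tags."""
    inbound_links[target_url].append((source_url, set(tags)))

def corrections_for(article_url):
    """All inbound links to an article that carry the 'correction' tag."""
    return [src for src, tags in inbound_links[article_url]
            if "correction" in tags]

# Example (hypothetical URLs): a blog post corrects a Times article,
# while another site merely follows up.
register_link("https://buzzmachine.com/fix", "https://nytimes.com/story",
              ["correction", "moores-law"])
register_link("https://example.com/more", "https://nytimes.com/story",
              ["followup"])

print(corrections_for("https://nytimes.com/story"))
# -> ['https://buzzmachine.com/fix']
```

A news site that maintained such a registry — or accepted notifications from outside sites, trackback-style — could display corrections prominently on the original article, which is the whole point.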

LATER: I spoke with Times reporter Abby Goodnough at length about this and more for her Week in Review piece today about rumors that do and don’t get traction in media and blogs. It’s also possible that this content map could affect stories as they develop, linking half-baked reports with later reporting and then complete stories and then followups.

CORRECTION: Nicholas Carr in the comments corrects me: He did not call for killing articles. I got that wrong and apologize. We still disagree about who’s manipulating history. But we don’t disagree about maintaining history. Sorry. This is what Carr said:

So if we are programming the Web to remember, should we also be programming it to forget – not by expunging information, but by encouraging certain information to drift, so to speak, to the back of the Web’s mind?

Though he explicitly said that information should not be expunged, I misinterpreted — and actually still don’t understand — what he means about letting information drift. Expunge or hide, I’d still argue that linking is best.

Here’s my Guardian column this week — about making mistakes and corrections online — in full:

The internet speeds up the dissemination of not only information but also misinformation. So what are we to do about this? Regulate? Legislate? Complain? Ignore? Or respond?

Consider the experience of Tim Toulmin, director of the Press Complaints Commission, when the BBC reported online that he thought bloggers should subscribe to a voluntary code of conduct, or else there is no redress for errors. I was one of many bloggers who responded tartly. On my site and on the MediaGuardian podcast, I called Toulmin – with apologies, dear readers – a “Brit twit” for thinking that one could regulate this vast conversation, which is what blogs really are.

Only problem is, Toulmin didn’t say that. He told me by email that if he had, he might have understood my moniker for him. But instead, he complained to the BBC and to me, making reference to damage and lawyers. Both of us clarified what we wrote. And Toulmin told his tale in last week’s MediaGuardian.

The internet can be better at corrections than old media. A fix can be attached to an error where it occurs, and many online denizens pride themselves on confessing missteps faster than their print and broadcast counterparts. But the internet can also be worse – online, errors can spread wider faster and take on a longer half-life. I wish we had a technical solution – that everyone who linked to an incorrect article could receive an alert and correction.

Should blogs subscribe to a code of conduct? I don’t think so (and neither does Toulmin). Again, blogs are mostly just people in conversation and I don’t wave a code when I talk to my neighbours and friends; I know that my integrity rests on my credibility. On the other hand, when I argue that bloggers who commit acts of journalism should enjoy the rights and privileges of professional journalists, how can I say that they should not suffer the same regulation? Well, for me, that’s easy, because as an American first amendment absolutist, I bristle at any attempt to regulate speech.

And I do fear that in their efforts to protect truth, legislatures, courts and self-appointed industry watchdogs could chill speech in new ways. If the people fear retribution without the legal resources that the owners of presses have, they will either shut up or hide behind the anonymity the internet allows. That would be a tragedy.

We need to recognise that the internet alters how media operate. Blogs – whether written by professionals or amateurs – tend to publish first and edit later, which can work because the audience will edit you. In this medium, stories are never done; rather than turning into fish-wrap, they can grow and become more factual and gather new perspectives, thanks to the power of the link and, yes, the correction.

We all make mistakes. We’re human. And the internet makes our humanity more apparent than polished print and broadcast do. So we need to modify our expectations of media, tune our scepticism, update our laws, restrain our regulation and enhance our technology. We are left, though, with the same ethic of the error we have always had: it’s wrong to make them and right to correct them, and you get a bonus for apology. So, Mr Toulmin, I’m sorry.

“Conventional wisdom, it’s an enemy at a time like this,” said Beth Comstock, president for digital media and market development at NBC Universal, part of General Electric. “In media today, I don’t think there is a single rule that can’t — and frankly, probably shouldn’t — be broken.

“This isn’t just about driving growth,” she added. “It’s about staying in business.”

Preach it, sister.

But at this same confab, there was a wake-up call to newspapers from consultant Gordon Borrell. If they are not careful, they will soon lose not only their monopolies but their top perches in their markets:

Mr. Borrell discussed a new report from his company showing that local television stations more than doubled their Internet ad revenue last year compared with 2004, to $283 million from $119 million. And, he predicted, the figure would climb to $410 million by the end of 2006.

But ad revenue last year for Web sites operated by local newspapers totaled $2 billion, according to the report, or more than nine times what the Web sites of the local TV stations took in….

… “All media are in flux, and flux is a great time to institute change.”

As an example, Mr. Borrell cited the Web site operated by WRAL-TV, the CBS affiliate in Raleigh, N.C., that is owned by the Capitol Broadcasting Company. The ad revenue for the site (www.wral.com) exceeds the ad revenue for www.newsobserver.com, the Web site operated by the leading local newspaper, The News and Observer, published by the McClatchy Company.

CORRECTION: Just got email from Chris Hendricks, head of online for McClatchy, forwarding a note from Borrell, saying he was misquoted by The Times. Borrell said the station has more traffic according to Nielsen data than the paper — not revenue.