Robots v. Mankind: Who’s in charge?

As you may already know, my feeling is that humans are generally better at editing than algorithms, but by the same token you could say that the main algorithms in use today (Google PageRank, Memeorandum, Google News) are largely based on human decisions, where a link generally counts as a vote. At the other end of the spectrum, you have sites like Digg and Reddit, which are entirely edited by humans in a distributed way.
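The "link counts as a vote" idea behind PageRank can be sketched in a few lines. This is an illustrative toy, not Google's actual implementation; the graph, damping factor, and iteration count are all assumptions made for the example.

```python
# Toy sketch of link-as-vote ranking (PageRank-style power iteration).
# Illustrative only: the graph, damping factor, and iteration count
# are assumptions, not Google's real algorithm or parameters.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # Each outgoing link "votes" with a share of this page's rank.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Three pages: A and C both "vote" for B, so B ends up ranked highest.
web = {"A": ["B"], "B": ["C"], "C": ["B"]}
ranks = pagerank(web)
```

The point of the sketch is simply that the human judgment is implicit: page B rises because two authors chose to link to it, which is also why link spam can game the system.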

But I’m not sure if it’s really a case of humans versus algorithms: I think the future could lie with services like Wink, where Google’s search results are rated, tagged and built upon by human minds. In this way, humans could make up for the obvious failings of algorithms – namely the scourge of spam and splogs.

What do you think: who should edit Web 2.0?

The answer is that people will edit Web 2.0. People program robots and people make editorial decisions, so in both cases people are doing the editing. What is changing is the process of editorial decision-making, and that's where the dross created by robots, along with the other option Pete discusses in his post, OPML, threatens to overwhelm people's solid decisions about information.

I’ve been particularly impressed with the Taskable OPML browser I’m running on my lonely PC, which lets me scroll through many people’s views of the Web in simple hierarchical form. But the problem I foresee is that old hierarchies and aging views will overwhelm the value of people actively engaging with information through OPML. The work of the Attention Trust, which captures what information people are actually using, needs to be integrated with the OPML effort to generate more dynamic views of information. And, at the same time, we need a view of the evolving link relationships around information, which is what we’re working on at Persuadio (the updated MyDensity conversational clouds, though that public work has been overwhelmed by client work), to tell us more about how influence shapes information access and use.


Author: Mitch Ratcliffe

Mitch Ratcliffe is a veteran entrepreneur, journalist and business model hacker. He operates this site, which is a collection of the blogs he's published over the years, as well as an archive of his professional publishing record. As always, this is a work in progress. Such is life.

2 thoughts on “Robots v. Mankind: Who’s in charge?”

“The answer is people will edit Web 2.0. People program robots and people make editorial decisions, so in both cases people are doing the editing.”

I think the distinction I’m trying to make here is between the more automated system of Memeorandum (where human actions are implicit and made via links) and the explicit human voting system of Digg. We’ve seen that PageRank (implicit human action through linking) is very vulnerable to splogs and spam, but when people actually look at the content, they can quickly identify whether something is junk.

Pete—I understand the distinction you’re making, but Memeorandum, as all filters must, starts with some explicit assumptions about the sources from which it will trace the conversational environment. It’s a mistake to ignore the human judgment that underpins the implicit action systems, whether we’re talking about Memeorandum or Google, which depends to a very great degree on how linking is weighted rather than direct human expression of semantic agreement about the search term (e.g., “this is a page about the topic you searched for”).

That said, yes, people can recognize junk, and quickly. But they have to provide sustained attention to information that, for the most part, is ephemeral from their perspective. For instance, I may read and tag a story once and never return to it; in the meantime, someone comes along and, through explicit actions such as editing or tagging, changes the meaning of the thing my tagging helped escalate to public attention, so that it comes to mean something else. So we need processes, whether we call them “wiki gardening” in the new Web 2.0 way of describing it or “editorial process” in the old been-in-media-before way, to provide predictable meanings for our junk and for the nuggets of value we find.