from the protocols-not-platforms dept

Right. By now you've heard about Reddit's new content moderation policy, which (in short) is basically that it will continue to ban illegal stuff, and then work hard to make "unpleasant" stuff harder to find. There is an awful lot of devil in very few details, mainly around the rather vague "I know it when I see it" standards being applied. So far, I've seen two kinds of general reactions, neither of which really makes that much sense to me. You have the free speech absolutists who (incorrectly) think that a right to free speech should mean a right to bother others with their free speech. They're upset about any kind of moderation at all (though, apparently, at least some are relieved that racist content won't be hidden entirely). On the flip side, there's lots and lots and lots of moralizing about how Reddit should just outright ban "bad" content.

I think both points of view are a little simplistic. It's easy to say that you "know" bad content when you see it, but then you end up in crazy lawsuits like the one we just discussed up in Canada, where deciding what's good and what's bad seems to be very, very subjective.

I'm a big supporter of free speech, period. No "but." I also worry about what it means for freedom of expression when everyone has to rely on intermediaries to "allow" that expression to occur. At the same time, I recognize that platforms have their own free speech rights to moderate what content appears on that platform. And also that having no moderation at all often leads to platforms being overrun and becoming useless -- starting with spam and, if a platform gets large enough, trollish behavior or other immature behavior that drives away more intelligent and inspired debate. This is different than arguing that certain content shouldn't be spoken or shouldn't be allowed to be spoken -- it's just that maybe it does not belong in a particular community. Obligatory xkcd:

So Reddit is free to do what it wants, and Reddit's users are free to do what they want in response. It's a grand experiment in learning what everyone values in the long run. People will write about it for years.

However, in thinking about all of this (and the similar struggles that Twitter, in particular, has been having), I've been wondering if perhaps the problem is that we put the burden of "protecting free speech" on platforms, when that's not the best role for them. The various platforms serve a variety of different purposes, all of which seem to get conflated into one larger purpose. They are places to post content (express), for one, but also places to connect and places for discoverability of that content.

And if we're serious about protecting free expression, perhaps those things should be separated. Here's a thought experiment that is only half baked (and I'm hoping many of you help continue the baking in the comments below). What if, instead of being full stack platforms for all of those things, they were split: an open, distributed protocol for the expression, with the company continuing to play the other roles of connecting and helping with discoverability? This isn't necessarily an entirely crazy idea. Ryan Charles, who worked at Reddit for a period of time, notes that he was hired to build such a thing, and is apparently trying to do so again outside of the company. And plenty of people have discussed building a distributed Twitter for years.

But here's the big question: in such a scenario, is there still room for Reddit or Twitter the company, if they no longer host the content themselves? I'd argue yes and, in fact, that it could strengthen the business models for both, though it would also open them up to more competition (which would be a challenge).

Think of it this way: if they were designed as protocols, where you could publish the content wherever you want -- including on platforms that you, yourself, control -- then people would be free to speak their mind as they see fit using these tools. And that's great. But, then, the companies would just act as more centralized sources to curate and connect -- and it could be done in different ways by different companies. Think of it like HTTP and Google. Via HTTP anyone can publish whatever they want on the web, and Google then acts to make it findable via search.

In the world we're talking about, anyone could publish links or content via an Open Reddit Post Protocol (ORPP) or Open Tweet Protocol (OTP), and that includes the ability to push that content to the Corporate Reddit or the Corporate Twitter (or any other competitors that spring up). And then the platform companies can decide how they want to handle things. If they want a nice pure and clean Reddit where only good stuff and happy discussions occur, they can create that. Those who want angry political debates can set up their own platform that will accept that kind of content. In short, the content can still be expressed, but individuals effectively get to choose whose filtering and discoverability system they prefer. If a site becomes too aggressive, or not aggressive enough, then people can migrate as necessary.
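To make that division of labor a bit more concrete, here's a minimal sketch (in Python, since this is just a thought experiment) of how the split might look: a post record that anyone can publish anywhere they like, and separate "front page" companies that never host the content, only decide what to surface. The record fields, the class names, and the filtering rules are all hypothetical assumptions for illustration -- this isn't a real spec from Reddit, Twitter, or anyone else.

```python
# A toy sketch of the "open protocol + centralized curation" split described above.
# Everything here is hypothetical: the "ORPP" record fields, the class names, and
# the filtering rules are illustrative assumptions, not an actual specification.

import hashlib
import json
from dataclasses import dataclass, field, asdict


@dataclass
class OrppPost:
    """A post anyone can publish anywhere (their own server, a pod, etc.)."""
    author: str          # e.g. a domain or key the author controls
    url: str             # where the canonical copy lives -- not on any platform
    title: str
    body: str
    tags: list = field(default_factory=list)

    def record(self) -> dict:
        """Serialize to a wire format, with a content hash so copies can be verified."""
        data = asdict(self)
        data["id"] = hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()
        return data


class CuratedFront:
    """One possible 'Corporate Reddit': it only decides which already-published
    records to surface for its audience; it never hosts the content itself."""

    def __init__(self, name: str, blocked_tags: set):
        self.name = name
        self.blocked_tags = blocked_tags   # each front picks its own policy
        self.listing = []

    def submit(self, record: dict) -> bool:
        # Filtering here affects discoverability, not the speech itself:
        # a rejected post still exists at record["url"].
        if self.blocked_tags & set(record["tags"]):
            return False
        self.listing.append(record)
        return True


post = OrppPost(author="example.org", url="https://example.org/posts/1",
                title="A half-baked idea", body="...", tags=["politics"]).record()

family_friendly = CuratedFront("NiceFront", blocked_tags={"politics", "nsfw"})
anything_goes = CuratedFront("OpenFront", blocked_tags=set())

print(family_friendly.submit(post))  # False -- hidden here, but still published
print(anything_goes.submit(post))    # True  -- a different front surfaces it
```

The point of the sketch is simply that moderation moves from "can this be said at all" to "does this particular front page want to show it," with users free to pick (or build) a different front.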

This isn't necessarily a perfect solution by any means. And I'm sure it raises lots of other problems and challenges. And the companies doing the filtering and the discoverability will still face all sorts of questions about how they want to make those choices. Are they looking to pretend that ignorant angry people don't exist in the world? Or are they looking to provide forums to teach angry ignorant people not to be so angry and ignorant? Or do they want to be a forum just for angry ignorant people that the rest of the internet would prefer, as xkcd notes, to show the door?

And, of course, this would eventually lead to more questions about intermediary liability. Already we see these fights where people blame Google for the content that Google finds, even when it's hosted on other sites. If this sort of model really took off and there were really successful companies handling the filtering/discoverability portions, it's not hard to predict lawsuits arguing that it should be illegal for companies to link to certain content. But that's a different kind of battle.

Either way, this seems like a potential scenario that doesn't end up at one of the two extremes of either "all content must be allowed on these platforms, even if it means being overrun by trolls and spam" or "we only let nice people talk around here." Because neither is a world that is particularly fun to think about.

from the not-an-easy-issue dept

As I noted earlier this week, at the launch of the Copia Institute a couple of weeks ago, we had a bunch of really fascinating discussions. I've already posted the opening video and explained some of the philosophy behind this effort, and today I wanted to share with you the discussion that we had about free expression and the internet, led by three of the best people to talk about this issue: Michelle Paulson from Wikimedia; Sarah Jeong, a well-known lawyer and writer; and Dave Willner, who heads up "Safety, Privacy & Support" at Secret after holding a similar role at Facebook. I strongly recommend watching the full discussion before just jumping into the comments with your assumptions about what was said, because for the most part it's probably not what you think:

Internet platforms and free expression have a strongly symbiotic relationship -- many platforms have helped expand and enable free expression around the globe in many ways. And, at the same time, that expression has fed back into those online platforms making them more valuable and contributing to the innovation that those platforms have enabled. And while it's easy to talk about government attacks on freedom of expression and why that's problematic, things get really tricky and really nuanced when it comes to technology platforms and how they should handle things. At one point in the conversation, Dave Willner made a point that I think is really important to acknowledge:

I think we would be better served as a tech community in acknowledging that we do moderate and control. Everyone moderates and controls user behavior. And even the platforms that are famously held up as examples... Twitter: "the free speech wing of the free speech party." Twitter moderates spam. And it's very easy to say "oh, some spam is malware and that's obviously harmful" but two things: One, you've allowed that "harm" is a legitimate reason to moderate speech and two, there's plenty of spam that's actually just advertising that people find irritating. And once we're in that place, it is the sort of reflexive "no restrictions based on the content of speech" sort of defense that people go to? It fails. And while still believing in free speech ideals, I think we need to acknowledge that that Rubicon has been crossed and that it was crossed in the 90s, if not earlier. And the defense of not overly moderating content for political reasons needs to be articulated in a more sophisticated way that takes into account the fact that these technologies need good moderation to be functional. But that doesn't mean that all moderation is good.

This is an extremely important, but nuanced, point that you don't often hear in these discussions. Just today, over at Index on Censorship, there's an interesting article by Padraig Reidy that makes a somewhat similar point, noting that there are many free speech issues where it is silly to deny that they're free speech issues, but plenty of people do. The argument, then, is that we'd be able to have a much more useful conversation if people admit:

Don't say "this isn't a free speech issue", rather "this is a free speech issue, and I’m OK with this amount of censorship, for this reason.” Then we can talk."

Soon after this, Sarah Jeong makes another, equally important, if equally nuanced, point about the reflexive response by some to behavior they don't like: automatically calling for the blocking of speech, when they are often confusing speech with behavior. She discusses how harassment, for example, is an obvious and very real problem with serious and damaging real-world consequences (for everyone, not just those being harassed), but that it's wrong to think that we should just immediately look for ways to shut people up:

Harassment actually exists and is actually a problem -- and actually skews heavily along gender lines and race lines. People are targeted for their sexuality. And it's not just words online. It ends up being a seemingly innocuous, or rather "non-real" manifestation, when in fact it's linked to real world stalking or other kinds of abuse, even amounting to physical assault, death threats, so and so forth. And there's a real cost. You get less participation from people of marginalized communities -- and when you get less participation from marginalized communities, you lead to a serious loss in culture and value for society. For instance, Wikipedia just has fewer articles about women -- and also its editors just happen to skew overwhelmingly male. When you have great equality on online platforms, you have better social value for the entire world.

That said, there's a huge problem... and it's entering the same policy stage that was prepped and primed by the DMCA, essentially. We're thinking about harassment as content when harassment is behavior. And we're jumping from "there's a problem, we have to solve it" and the only solution we can think of is the one that we've been doling out for copyright infringement since the aughties, and that's just take it down, take it down, take it down. And that means people on the other end take a look at it and take it down. Some people are proposing ContentID, which is not a good solution. And I hope I don't have to spell out why to this room in particular, but essentially people have looked at the regime of copyright enforcement online and said "why can't we do that for harassment" without looking at all the problems that copyright enforcement has run into.

And I think what's really troubling is that copyright is a specific exception to CDA 230 and in order to expand a regime of copyright enforcement for harassment you're going to have to attack CDA 230 and blow a hole in it.

She then noted that this was a major concern because there's a big push among many people who aren't arguing for better free speech protections:

That's a huge viewpoint out right now: it's not that "free speech is great and we need to protect against repressive governments" but that "we need better content removal mechanisms in order to protect women and minorities."

From there the discussion went in a number of different important directions, looking at other alternatives and ways to deal with bad behavior online that get beyond just "take it down, take it down," and also discussing the importance of platforms being able to make decisions about how to handle these issues without facing legal liability. CDA 230, not surprisingly, was a big topic -- one that people admitted is unlikely to spread to other countries, and whose underlying concepts are actually under attack in many places.

That's why I also think this is a good time to point to a new project from the EFF and others, known as the Manila Principles -- highlighting the importance of protecting intermediaries from liability for the speech of their users. As that project explains:

All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association and the right to privacy.

With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.

In short, it's important to recognize that these are difficult issues -- but that freedom of expression is extremely important. And we should recognize that while pretty much all platforms involve some form of moderation (even in how they are designed), we need to be wary of reflexive responses to just "take it down, take it down, take it down" in dealing with real problems. Instead, we should be looking for more reasonable approaches to many of these issues -- not denying that there are issues to be dealt with, and not just saying "anything goes and shut up if you don't like it," but recognizing that there are real tradeoffs to the decisions that tech companies (and governments) make concerning how these platforms are run.

Posting another person’s private and confidential information is a violation of the Twitter Rules.

Some examples of private and confidential information include:

credit card information

social security or other national identity numbers

addresses or locations that are considered and treated as private

non-public, personal phone numbers

non-public, personal email addresses

images or videos that are considered and treated as private under applicable laws

intimate photos or videos that were taken or distributed without the subject's consent

Like any law/policy, there will be exceptions. Twitter's Rules go on to note that takedown requests will be considered on a case-by-case basis, rather than removing Tweets automatically when reported.

Keep in mind that although you may consider certain information to be private, not all postings of such information may be a violation of this policy. We may consider the context and nature of the information posted, local privacy laws, and other case-specific facts when determining if this policy has been violated. For example, if information was previously posted or displayed elsewhere on the Internet prior to being put on Twitter, it may not be a violation of this policy.

I asked Twitter if there was a "Weiner exception." How would this apply to a newsworthy intimate photo, such as the bulge portrait that then-Congressman Anthony Weiner accidentally tweeted of himself, which went viral and eventually led to his resignation from office? The Twitter employee said there will be a "newsworthiness exception." So if your bulge or boobs are a front page story in the newspaper, Twitter may not take them down.

The policy also requires something that other sites (like Reddit) policing for revenge porn don't: the takedown request must be made by the person whose personal photos/information are being disseminated without authorization. This will hopefully deter some potential abuse.

One catch is that you have to recognize yourself in the photo and report it; Twitter doesn’t want “body police” going through tweets and reporting every pornographic image they find. If an offending tweet is removed, all native retweets will disappear too, but you’ll have to report all manual RTs and any further postings of the photo or video.

Franks, for one, thinks it’s problematic that bystanders can’t report the posting of explicit images of others. “Every minute private sexual material is available increases the number of people who can view it, download it, and forward it, so even if Twitter responds quickly to complaints, it may be too late to stop the material from going viral,” she said by email.

What Franks views as problematic is actually a practical safeguard. If you give removal power to everyone, it becomes a plaything for abusers.

Twitter will also try to determine whether the photos/info were actually posted without consent. However, at this point, the determination seems to rely largely on the takedown requester's assertions. The statement won't be legally binding or carry any repercussions beyond possible suspension of the bogus requester's account. And there appears to be no process in place for the accused to challenge revenge porn accusations.
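To see how the reporting flow described above hangs together -- who can file a report, the newsworthiness carve-out, and why native retweets vanish while manual RTs need their own reports -- here's a rough sketch. The data model, field names, and function are assumptions made purely for illustration; this is not Twitter's actual implementation.

```python
# A hypothetical sketch of the private-media reporting flow described above.
# Names and structures are illustrative assumptions, not Twitter's real system.

from dataclasses import dataclass, field


@dataclass
class Tweet:
    tweet_id: int
    author: str
    subject: str                 # the person depicted in the private media
    newsworthy: bool = False
    native_retweet_ids: list = field(default_factory=list)  # tracked copies
    removed: bool = False


def handle_private_media_report(reporter: str, tweet: Tweet, index: dict) -> str:
    # Only the person depicted can file the report; bystander reports are ignored,
    # which limits third-party help but also limits abuse of the reporting tool.
    if reporter != tweet.subject:
        return "rejected: only the depicted person may report"

    # Case-by-case review, including the "newsworthiness exception" mentioned above.
    if tweet.newsworthy:
        return "kept: newsworthiness exception"

    tweet.removed = True
    # Native retweets are just pointers to the original, so they disappear with it...
    for rt_id in tweet.native_retweet_ids:
        index[rt_id].removed = True
    # ...but manual RTs are independent tweets and would each need their own report.
    return "removed: tweet and native retweets taken down"


original = Tweet(tweet_id=1, author="@poster", subject="@victim", native_retweet_ids=[2])
copies = {2: Tweet(tweet_id=2, author="@retweeter", subject="@victim")}

print(handle_private_media_report("@bystander", original, copies))  # rejected
print(handle_private_media_report("@victim", original, copies))     # removed
```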

On the whole, it's not a terrible way to tackle revenge porn, even if it still leaves a lot to be desired. Certainly Twitter will be accused of censorship more frequently as this policy goes into effect, but as a private company, it can police user-generated content in any manner it sees fit. It's up to those using the service to decide whether they want to coexist with the rule tweaks.

Twitter notes that it's a work in progress. That, unfortunately, means the policy could possibly get much worse as Twitter "iterates" to fix "holes." As much as some people (like Franks above) would prefer Twitter to take a more proactive approach to removing revenge porn, the highly subjective nature of the problem requires a reactive stance. Any policy change will be abused by both sides of this equation, and what's been implemented so far appears to be aimed at reducing collateral damage.

from the urls-we-dig-up dept

Everything in moderation. Somehow that adage seems to get lost in the media coverage of diets that claim near-miraculous health results from totally eliminating X from a person's diet. Sure, there are things that you don't need even moderate amounts of, such as arsenic, lead and other toxins. But simply cutting back on gluttony could go a long way.

from the don't-mess-it-up dept

We've written many times about the importance of protection against secondary liability for websites, such that they're not held liable for what their users do. In the US, thankfully, we have Section 230 of the CDA, which clearly states that websites cannot be held liable for speech made by their users. Frankly, we shouldn't need such a law, because it should be obvious: you don't blame the site for the comments made by others. That's just a basic question of properly placing liability on those responsible. But, in a world of Steve Dallas lawsuits, in which people will always sue companies with deep pockets, it makes sense to have explicit safe harbors to stop bogus litigation.

Somehow, with so much focus on the importance of secondary liability, we happened to miss an absolutely insane ruling that came out of the European Court of Human Rights last fall, in the case of Delfi AS v. Estonia, which basically said that any website that allows comments can be liable for those comments. In fact, it found that even when sites took down comments (automatically!) following complaints, they could still be liable, because they should have blocked those comments from going up in the first place. Bizarrely, the court basically says the site should have known that the article in question might lead to negative reactions, and therefore should have blocked comments:

In addressing this question, the Court first examined the context of the comments. Although the Court acknowledged that the news article itself was balanced and addressed a matter of public interest, it considered that Delfi “could have realised that it might cause negative reactions against the shipping company and its managers”. It also considered that there was “a higher-than-average risk that the negative comments could go beyond the boundaries of acceptable criticism and reach the level of gratuitous insult or hate speech.” Accordingly, the Court concluded that Delfi should have exercised particular caution in order to avoid liability.

Next, the Court examined the steps taken by Delfi to deal with readers' comments. In particular, the Court noted that Delfi had put in place a notice-and-takedown system and an automatic filter based on certain 'vulgar' words. The Court concluded that the filter, in particular, was "insufficient for preventing harm being caused to third parties". Although the notice-and-takedown system was easy to use -- it did not require anything more than clicking on a reporting button -- and the comments had been removed immediately notice had been received, the comments had been accessible to the public for six weeks.

The Court considered that the applicant company “was in a position to know about an article to be published, to predict the nature of the possible comments prompted by it and, above all, to take technical or manual measures to prevent defamatory statements from being made public”.

Even more troubling for those of us who believe in the importance and value of unregistered and anonymous commenting, the court found those features to be particularly problematic:

By allowing comments to be made by non-registered users, Delfi had assumed a certain responsibility for them. The Court further noted that "the spread of the Internet and the possibility -- or for some purposes the danger -- that information once made public will remain public and circulate forever, calls for caution". In the Court's view, it was a daunting task at the best of times -- including for the applicant -- to identify and remove defamatory comments. It would be even more onerous for a potentially injured person, "who would be less likely to possess resources for continual monitoring of the Internet".

The reason that we're bringing this up now is because plenty of folks, quite rightly, freaked out about this ruling, and asked the European Court of Human Rights to reconsider. And that's now going to happen in early July. The Financial Times has a long and quite interesting look at the case and related issues, including a discussion at the beginning about the nature of online comments. For many years we've talked up the value of anonymous comments and how wonderful they've been for our community here. We've always taken an exceptionally light touch to moderation, allowing anyone to comment, and just trying to weed out the spam. And it's worked well for us. A ruling like the one above doesn't directly impact us, seeing as we're an American company with all our servers here, but it's immensely troubling in general and could create widespread chilling effects on any site that relies on user generated content. But it goes beyond that:

For Eric Barendt, Goodman Professor of Media Law at University College London from 1990 until 2010, the ruling doesn’t adequately balance freedom of speech against an individual’s right to protect his or her reputation. “I wouldn’t stick my neck out to say the ECtHR’s judgment was ridiculous,” he tells me, “but I know many people who would. How bizarre that this case could be the straw that breaks the camel’s back.”

The judgment will not only affect whistleblowers, says Aidan Eardley, a London-based barrister specialising in data protection and media-related human rights law. “It’s also bad news for people who want to comment about sensitive personal issues such as domestic abuse, sexual identity, religious persecution, etc.”

As Sarah Laitner, the FT’s communities editor, says: “It’s important to remove any hurdles a reader may face to participation. Some people feel that they are able to comment more freely if they can use a pseudonym.”

On July 9th, the Court will reconsider its original ruling, and for the sake of free speech online, we hope it reverses course. Between this and the recent right to be forgotten ruling in the EU Court of Justice, Europe is quickly becoming a free speech nightmare. While these rulings may have the best of intentions, the wider impact of both could do an astounding amount to stifle public participation and comment.

Why is this story being removed from all the popular subs over and over by mods?

Message the admins about the censorship of this article by /r/news and /r/worldnews mods. They have never seemed to care about this in the past but if enough users message them it will hopefully at least provoke a response of some kind. Something needs to be done about this or this site needs to be abandoned as a platform for legitimate political discourse.

Last night, the original article from firstlook.org was taken down and tagged as "not appropriate subreddit." Meanwhile, another copy of the story was allowed to rise, despite having an editorialized title. Later, the version that had been taken down--which was older and had fewer upvotes because it had been removed--was put back up and the younger version with more upvotes was removed, allegedly because the topic was "already covered."

This tactic has been used to keep other similar stories from rising, such as the one about the NSA sharing information with Israel.

Ninja edit: subscribe to /r/undelete and /r/longtail if you're interested in keeping an eye on popular content that's been removed by mods.

Censorship on reddit? It seems almost ridiculous considering the number of subreddits available for those submitting stories. But it's there all the same (although not actually "censorship" so much as a bad direction for a community based on meritocracy to go in). According to commenters, in both r/news and r/worldnews (two of the biggest subreddits), the firstlook.org post was removed over and over again once it began collecting upvotes, forcing each submission to start over at "0" and face an uphill struggle for visibility.

The decision to clamp down on news detailing this particular leak brought a whole lot of irony with it. The efforts made to remove an unflattering story about intelligence agencies' dirty little efforts to use the internet to destroy reputations and manipulate public perception led to tongue-in-cheek speculation that Reddit itself is compromised. (And there's certainly no way to be sure it isn't…)

Techdirt may have been the inadvertent beneficiary of bad behavior by subreddit mods, but that's hardly reason to celebrate. If the mod situation is as bad as it appears to be, Reddit is going to start heading down the path of Digg, whose infamous "bury brigade" worked tirelessly to ensure only certain news coverage made its way to the top of the list.

This isn't an easily-solvable problem, thanks to Reddit's hydra-like structure, with hundreds of subreddits and no clear chain of command. The corporate Reddit, which ostensibly "controls" the community, has largely taken a hands-off approach. This is still the best option, and the reversal of the r/politics arbitrary ban list shows the community still has the power to solve some of its mod problems. But widespread story burial, coupled with evidence of subreddits being gamed by mods, isn't exactly comforting, especially considering Reddit's journalistic aspirations.

Like any platform with millions of users, Reddit will never be entirely free of issues. But a failure to address the abuse of power by mods of larger subreddits will hurt Reddit in the long run. Power coupled with an almost-complete lack of accountability is always a bad thing. But this problem will need to be solved internally by the subreddits themselves. There's power in numbers, something subreddit subscribers should be able to leverage to start cleaning this mess up.

from the urls-we-dig-up dept

Certain things are almost guaranteed to taste good to us -- salt, sugar and fat are just a few examples of ingredients that most people enjoy and (sometimes) can't stop themselves from eating. Eating anything in excess can be bad for you (see the "truckload of vegetables" debating technique), but people seem to especially focus on salt, sugar and fat. Here are just a few links that provide some data points on the health effects of these three tasty food items.

from the that-doesn't-actually-help-advertisers dept

Update: After hearing from a few people at Huffington Post, it appears that the original explanation from Isaf was unclear, and led us to believe they were moderating comments based on advertiser preferences. However, Huffington Post has now clarified that they use the same AI just to determine how to place ads against certain content -- and that's what Isaf meant by his remarks. Not that they moderate comments based on advertiser preferences.

We've been somewhat excited that we're rapidly approaching one million total comments on Techdirt. We thought it was quite a nice milestone. But we feel a bit small to learn that the Huffington Post already has over 70 million comments just this year alone. Over at Poynter, Jeff Sonderman has a fascinating interview with the site's director of community, Justin Isaf, about how they manage all those comments. Apparently they have a staff of 30 full time comment moderators, helped along by some artificial intelligence (named Julia) from a company they bought just for this technology.

Now, obviously, sites have lots of different philosophies on moderating comments. Our own is pretty open. We have a spam filter that tries to cut out obvious spam (of which we get about 1,000 per day, last I checked) and, other than that, comments are basically unmoderated. We do have a system that allows the community to vote on funny and insightful comments (which we then round up in a weekly "best of" post). We also, just recently, introduced our first word/last word feature, which lets the community promote certain comments. Finally, the community can also "report" comments they find problematic, which then minimizes those comments, though they remain available for anyone to see with one click. We've found that this system of trusting the community works pretty damn well overall.
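For what it's worth, here's a toy sketch of the kind of community-driven approach described above: votes surface funny/insightful comments for a weekly roundup, and enough reports minimize (but never delete) a comment. The threshold and field names are made-up assumptions for illustration, not our actual code.

```python
# A minimal sketch, assuming a simple report threshold and vote counts.
# Nothing here reflects Techdirt's real implementation.

from dataclasses import dataclass

REPORT_THRESHOLD = 3  # hypothetical number of reports before a comment is collapsed


@dataclass
class Comment:
    text: str
    funny_votes: int = 0
    insightful_votes: int = 0
    reports: int = 0

    @property
    def minimized(self) -> bool:
        # Reported comments are collapsed but stay one click away -- nothing is removed.
        return self.reports >= REPORT_THRESHOLD


def weekly_best(comments: list) -> list:
    """Round up the top comments by community vote for a 'best of the week' post."""
    return sorted(comments, key=lambda c: c.funny_votes + c.insightful_votes, reverse=True)[:5]


thread = [Comment("insightful take", insightful_votes=7),
          Comment("spammy nonsense", reports=4),
          Comment("decent joke", funny_votes=3)]

print([c.text for c in weekly_best(thread)])
print([c.text for c in thread if c.minimized])  # collapsed, but still readable
```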

HuffPo, on the other hand, between the technology and the moderators, seems more focused on nudging the conversation themselves. I can understand and respect that choice, but there was one detail that struck me as a bit questionable:

I’m a big fan of having machines help us with the lower level tasks, freeing up time, resources and brain power for more interesting and complex tasks. Julia [the artificial intelligence system that HuffPo owns] takes that a few steps further and helps us with a lot of other aspects of HuffPost in addition to helping weed out abusive members, including identifying intelligent conversations for promotion, and content that is a mismatch for our advertisers. She has allowed us to do a lot more with a lot less.

(Note: see update at the top). I recognize that these are all advertising businesses, but I'm a bit surprised to see HuffPo so blatantly admit that they moderate comments if they're "a mismatch for our advertisers." I've seen plenty of sites say they'll moderate inappropriate commentary, but leave reasonable commentary alone even if it's critical. But HuffPo is basically saying that if advertisers aren't likely to like the comments, they may moderate them. It's their system, and they can do what they want with that, but personally, that makes me feel uncomfortable. We've always tried to promote the fact that our own community is very opinionated (and not shy about it) when we've spoken to advertisers, and we use that as a way of explaining why things they do should be authentic and real, rather than forced and phony. And, because of that, we'd like to think that we're able to drive more interesting engagement. If you leave open the possibility of moderating comments that advertisers won't like, that seems to only encourage bogus and annoying advertising, since marketers may never learn that people don't actually like that kind of thing.

In the end, HuffPo's position is obviously self-serving, even as they pretend that it's best for advertisers. What they may end up doing is hiding the fact that the advertisements are bad, rather than improving the quality of the advertising. Now, obviously, I'm sure AOL does quite fine with HuffPo's ad selling (and they're a hell of a lot bigger than us), but it still struck me as interesting to see the company so blatantly admit how it reacts to content their advertisers might think is "a mismatch."

from the good-for-him dept

When it comes to major media properties (and even quite a few blogs these days), it seems that "moderating" comments has become the norm. However, it's surprising (though refreshing) to see a Washington Post editor speak up in defense of unmoderated and anonymous comments, which the Washington Post allows:

I believe that it is useful to be reminded bluntly that the dark forces are out there and that it is too easy to forget that truth by imposing rules that obscure it. As Oscar Wilde wrote in a different context, "Man is least himself when he talks in his own person. Give him a mask, and he will tell you the truth."

Too many of us like to think that we have made great progress in human relations and that little remains to be done. Unmoderated comments provide an antidote to such ridiculous conclusions. It's not like the rest of us don't know those words and hear them occasionally, depending on where we choose to tread, but most of us don't want to have to confront them.

What's most impressive is that this comes from a guy who wasn't just opposed to such things originally, but was opposed to the whole concept of "blogging." When he finally relented to blogging, he was adamantly against unmoderated comments... but the more he's seen, the more he's realized the value in them:

I have come to think that online comments are a terrific addition to the conversation and that journalists need to take them seriously. Comments provide a forum for readers to complain about what they see as unfairness or inaccuracy in an article (and too often they have a point), to talk to each other (sometimes in an uncivilized manner) and, yes, to bloviate....

In fact, comment strings are often self-correcting and provide informative exchanges. If somebody says something ridiculous, somebody else will challenge it. And there is wit.... Comments also tell us that readers do not always agree with journalists about what is important.

We have always felt that way about comments. While they can be frustrating and ridiculous at times, they are also incredibly educational and entertaining. And, the most ridiculous stuff of all is quickly dismantled by others. That said, it doesn't mean that there aren't ways to improve the commenting experience without necessarily moderating or banning anonymous commenters. We're working on some things here that we'll be rolling out in the near future to hopefully continue to improve the overall commenting and discussion experience.