Apologies for downtime...
http://www.everythinginmoderation.org/2004/10/apologies_for_downtime.shtml

This is basically just the tiniest of pings to apologise to people for not having updated Everything In Moderation for a while. The main reason for my laxity is that I've been working on a pretty hardcore BBC project, but now I'm coming out of that headspace and so hopefully I'll be able to pick up some of the projects that got abandoned along the way. I'd also like to make a micro-apology to the people currently attending the Online Community Report Summit in Sonoma, California, who have been over-exposed to the everythinginmoderation.org URL and have had nothing to see when they've got here. Apologies again, and I can assure people that I'll be starting to post more frequently again in the next few weeks.

Tom Coates, 2004-10-08T17:04:46+00:00

The lack of real-world physics makes online communities easy to abuse...
http://www.everythinginmoderation.org/2004/03/the_lack_of_realworld_physics_makes_online_communities_easy_to_abuse.shtml

Matt Haughey puts his finger right on one of the core problems with online communities in a little post called Fixing the wheel all over again: that we don't think, right at the beginning of developing new technologies, about the solid foundational problems that plague each and every online communications mechanism - spam, abuse, denial of service, stable identities and the like. These are the things that the real world gives us by default - it gives us geography, it gives us limitations of time and it requires that actions take effort. We don't want to replicate those restrictions - if we did there would be no advantages to communicating online - but the core of any future online community or communications system will be the attempt to find innovative new mechanisms that have similar qualities, just operating in different areas. More thoughts on that later in the day.

Tom Coates, 2004-03-30T19:22:17+00:00

Bringing politics into online communities...
http://www.everythinginmoderation.org/2004/01/bringing_politics_into_online_communities.shtml

I should start off with a round of apologies. It has now been well over a month since I last posted to Everything in Moderation. This is far from my initial aspirations for the site, and hopefully I'll be able to get back into a more regular schedule over the months to come. The other confession I have is that I've accidentally been writing about online community management over on my other weblog. If you're interested in online community management, then I'd recommend reading On Fires, String Quartets and the New Politics of Online Communities, which is an attempt by me to articulate some basic ideas about how to create functioning communities online that can self-organise in ways analogous to offline political activity. There's an awful lot more of this stuff that I'd like to be able to get into at some point, but now is not the time. Here's a quote from the (short) article to whet your appetite:

Lesson one: the thing that keeps groups together can be a mutual passion, but a mutual activity will bring them together even more strongly. Lesson two: that intensively creative groups seem to be necessarily relatively small. And that's because - lesson three - there will always be tensions and forces within groups that will try to push them apart from one another. And here's where social software comes in to the fore - because lesson four is that those tensions can almost always be ameliorated or even totally removed by the careful implementation of mechanisms that institute some form of process, some kind of system - or even some kind of politics.

Tom Coates, 2004-01-27T21:58:09+00:00

A guide to Slashdot's Moderation Scheme...
http://www.everythinginmoderation.org/2003/12/a_guide_to_slashdots_moderation_scheme.shtml

Probably the most significant evolution of moderation schemes over the last half-decade (and the best example of how seriously moderation schemes should be taken) was the emergence of Slashdot - an online community/weblog for the most overtly geeky part of the population. The community itself refers to the system that lies behind the site and keeps it on track as a kind of 'mass moderation' - which, if you accept my previous distinctions, would probably be considered a form of distributed moderation.

In a nutshell it works like this - moderators are chosen by algorithm from the body politic. Each moderator's stint lasts for three days and they are given five points to use during this time to either promote or demote a post in importance. The rest of the users of the board can then decide at which threshold they wish posts to be hidden from their view. This means that nothing is deleted and that the community gradually reaches some kind of equilibrium, where posts that are agreed to be generally good or bad are made respectively more or less visible.

The whole process is made more effective by use of a karma system that records data about the posts you make, how you make them and how you use moderation points (and how they are used upon you) to decide eligibility for moderation (and the like). As such, Slashdot is based both upon content-rating models and upon an attempt to create an online version of the kinds of reputation economies that we use (in a much less abstracted and efficient - but more nuanced - fashion) every day in the offline world.
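To make the mechanics a little more concrete, here's a rough sketch in Python of how point-spending moderators, viewing thresholds and karma-based eligibility might hang together. It's purely an illustration of the idea - all the names and numbers are invented, and it's certainly not Slashdot's actual code:

```python
import random
from dataclasses import dataclass

@dataclass
class User:
    name: str
    karma: int = 0        # reputation accumulated from past posts and moderations
    mod_points: int = 0   # points granted for the current moderation stint

@dataclass
class Post:
    author: User
    body: str
    score: int = 1        # every post starts with a default score

def pick_moderators(users, count=5, min_karma=10):
    # Eligibility is decided algorithmically from karma; each chosen
    # moderator gets five points to spend over a short stint.
    eligible = [u for u in users if u.karma >= min_karma]
    chosen = random.sample(eligible, min(count, len(eligible)))
    for u in chosen:
        u.mod_points = 5
    return chosen

def moderate(moderator, post, up=True):
    # Spending a point nudges the post's score up or down; nothing is deleted,
    # and the result feeds back into the author's karma.
    if moderator.mod_points <= 0:
        raise ValueError("no moderation points left")
    moderator.mod_points -= 1
    post.score += 1 if up else -1
    post.author.karma += 1 if up else -1

def visible_posts(posts, threshold=0):
    # Each reader chooses their own threshold; low-scored posts are hidden
    # from that reader's view rather than removed from the site.
    return [p for p in posts if p.score >= threshold]
```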

For me, the most interesting aspect of the Slashdot moderation model is that it attempts to create a political structure without overt hierarchy - with a view to creating self-running communities that don't need external intervention to keep them on track. Most online communities are pretty much despotic in structure (with oligarchies or monarchically governed rural fiefdoms being other common models). It's a model which has its benefits (decisions are final, and the person who runs the community is also normally the person who must inevitably take responsibility for what's published on the site anyway) but also considerable failings. Communities are hard work to maintain, prone to spats and arguments, can spiral out of control and don't always want to move in the same direction as the people who consider themselves 'in charge'. Distributing the power rather more creates the opportunity to help the community define itself. Finding good systems that allow this kind of action is very close to my heart (and related to some of the work that I've done with Cal on Barbelith).

But while Slashdot's system has a great many positive aspects, it's not without its shortcomings. Firstly, the system seems more than a little arcane to casual users of the site and requires new users to get to grips with the concept of ratings and viewing thresholds immediately. It's hardly self-explanatory as a process and - as such - not particularly ideal for implementation in less technically savvy environments. This suits Slashdot perfectly well, of course, both because that's exactly the audience they're looking for and because the site's traffic is not without financial cost. It's also interesting how overtly it's based upon random judgments and how little control the users actually have to tune the system itself - and how limited these distributed controls are. Posts cannot be collectively deleted, amended, fixed or moved. And most interesting for me is that it's chosen to articulate the concept of reputation and evaluation as a kind of currency - as an economy. I wonder occasionally whether that's a direct result of being American in origin, and whether a European developer would have tried to find more overtly political rather than economic models. Any thoughts around this would be very much appreciated, of course.

You can read more about Slashdot's moderation scheme in their detailed guide.

Tom Coates, 2003-12-12T15:22:17+00:00

Incorporating listening into teen moderation...
http://www.everythinginmoderation.org/2003/11/incorporating_listening_into_teen_moderation.shtml

Fiona Romeo has written an interesting piece on listening to, rather than profiling, teen males, and suggests that the same techniques might have utility for moderators of teen-centred communities. She cites William Pollack's "tips for listening to boys":

Honour a boy's need for "timed silence," to choose when to talk

Find a safe place, a "shame-free zone"

Connect through activity or play. Many boys express their deepest experience through "action talk"

Avoid teasing and shaming

Make brief statements and wait; do not lecture

I think there is particular resonance here for teen communities that are part-maintained through distributed moderation. Any thoughts?

Tom Coates, 2003-11-09T16:36:51+00:00

The effect of severity of initiation on liking for a group...
http://www.everythinginmoderation.org/2003/11/the_effect_of_severity_of_initiation_on_liking_for_a_group.shtml

Thanks to Matt Webb, I've got my hands on an abstract for a paper about difficulty of initiation into groups. According to Aronson, E. & Mills, J. (1959), 'The effect of severity of initiation on liking for a group', Journal of Abnormal and Social Psychology, 59, 177-181, there's a direct correlation between how difficult the process of initiation is and how much people will like the group once they have entered it.

This clearly could have implications for the creation and maintenance of online groups - and I'd be particularly interested in seeing whether this kind of approach could limit the problems that so often emerge in online communities. Perhaps simply raising initiation requirements dramatically could create stronger, more heavily-bonded communities that required less in the way of overt moderation. It's particularly interesting to me because, as an approach, it still means that membership is effectively open. The particular approach they took - however - may not be easily replicable (or desirable) online. Here's an excerpt from the abstract:

Participants were undergraduate women who volunteered to participate in a study on the psychology of sex. The study testing their hypotheses was an experiment. The conceptual independent variable was the degree of severity of initiation into a group discussion. Participants were either in a severe initiation condition where they had to read 12 obscene words to an experimenter, a mild initiation condition where they read five words related to sex but were not obscene, or a control condition where no initiation was required.

After undergoing either the severe, mild, or no initiation, participants listened to a discussion by the group that they anticipated that they would be joining. After listening to the group the dependent variable of liking for the group was assessed. The experimental dependent variable was their rating of the discussion group and their rating of the participants in the group on 14 different evaluative scales (e.g., dull-interesting, intelligent-unintelligent, etc.) on scales ranging from 0 to 15.

The results for the study indicated a general pattern such that people in the severe condition liked the group and participants more than those in either the mild initiation and no initiation condition.

The paper itself is not online (but there are a number of references to it online). Should anyone have a copy that they can send to me electronically or snail-mail to me, then I'd be extremely grateful.

Tom Coates, 2003-11-03T17:43:41+00:00

Kuro5hin's "Notes towards a Moderation Economy"...
http://www.everythinginmoderation.org/2003/10/kuro5hins_notes_towards_a_moderation_economy.shtml

There's a fascinating post about mechanisms and economies for distributed moderation systems over at Kuro5hin at the moment: Notes towards a Moderation Economy. Here's an excerpt:

Whether you call it Mojo, Karma, "Standing," or something else, all content rating feedback systems have some sort of currency. While there are many different ways of acquiring and spending such capital, nobody seems to have implemented an economy varied enough to be robust. And this is the key to building a system which can be stable in the long term.

Speaking very broadly, any web rating system is trying to encourage certain behaviors and discourage others. Behaviors most e-community operators would want to encourage include:

Writing original stories and comments that are interesting or useful

Leaving "attaboy" or "rubbish" comments that are "correct"

Rating stories and comments "correctly"

Tom Coates, 2003-10-29T09:08:45+00:00

An old-school guide to Usenet Trolling...
http://www.everythinginmoderation.org/2003/10/an_oldschool_guide_to_usenet_trolling.shtml

Just in case anyone thinks I'm paranoid about trolls and about having techniques to manage and control extreme situations, I'd very much recommend reading this extremely interesting and useful Usenet Anti-Troll FAQ, which describes the extent of the problem, the level of organisation and dedication occasionally demonstrated, and the demonstrable fact that it often does not take very many trolls at all to completely cripple an otherwise vibrant community - unless you build in ways of dealing with these situations should they occur and make your community more resilient to external attack. Here are a couple of particularly pertinent quotes:

If anyone does anything which will interfere with the troll's ability to cause mayhem, they can become very nasty, posting from obviously incorrect variations of the name etc. insults, call them netcops, netnannies, homosexuals. Various off usenet methods are also used to force the victim to stop posting: Subscribing the victim to hundreds of unwanted pornographic email newsletters, and sites. Complaining to employers about non existent misdemeanours. Sending garbage emails without indication of sender. Telephone calls at dead of night. Harassing the close relatives of victims.

If anyone does anything which will interfere with the troll's ability to cause mayhem, they also forge posts in that persons name and internet address and libel them on usenet. Both these are illegal.

The whole document is worth reading because - while it can be vaguely alarming - it really does investigate most of the options for handling highly problematic users on Usenet, and many of the solutions have web, gaming or mailing-list analogues.

Tom Coates, 2003-10-28T23:00:06+00:00

Tagging difficult users with infectious markers...
http://www.everythinginmoderation.org/2003/10/tagging_difficult_users_with_infectious_markers.shtml

Following on from my earlier piece on Stealth Moderation I thought I'd talk a bit about a technique we've been using on Barbelith recently to deal with a particularly thorough and unpleasant troll-attack. But first I should recap on the specific situation that we're trying to resolve with this technique.

One of the great difficulties with looking after an online community is that it's generally almost impossible to ban a user from a site if they're dedicated to breaking in. The only circumstances in which you can ban them are when you require payment via credit card or a hard-to-obtain unique form of real-life identification, or when you're prepared to take the situation to the police. Otherwise all they have to do is sign up for a free e-mail account and re-register on your site. Within ten minutes they can be back causing trouble, your ability to set the rules for your community space has been completely undermined and there's very little you can do about it.

And that's only one use of multiple user names. Many trolling users will maintain several concurrent accounts, which they will use to support the position of their prime identity - making all online battles seem larger and more significant than they actually are, and obfuscating the fact that - at heart - it's just one troublemaker working quite hard to spoil the experience for all the others. These alternative user names are often known as "sock-puppets" for vaguely obvious reasons. Typically a troll of this kind will use their sock-puppets to post self-supporting messages like, "Hey, why are you being so down on the guy. I think he has a point and you're all being really **** about it". I've seen people using these multiple user names to create identities that are almost identical to other users' self-representations (a duplicated character in the username - or sometimes just a space after their name, depending on the software) and then using that identity to suggest that their alternative usernames "might have a point - maybe it's best not to wind them up any more", or even to suggest that their alternative trolling identity might have started investigating legal recourses. Even stopping new registrations won't necessarily stop this kind of activity as long as the e-mail addresses of long-dormant users are available to be contacted and appealed to. And there will always be one user with two or more user names who believes any kind of ban is a de facto attack and will support a long-term troll, however obviously destructive (or even illegal) they might be...

Essentially it all boils down to one problem - that you can ban user names easily, but it's far from easy to ban real-life people. There are many approaches to this kind of problem, but one thing is clear - on occasion users do need to be banned - however much we may wish it otherwise.

One approach that we've been using recently with a fair amount of success (although it breaks my first and most important rule of what constitutes a long-term successful moderation strategy) is based around finding ways of demonstrating clear links between user-names - links that indicate that they are being used by the same real-life user or group of users. We used cookies again, so it's only going to work on platforms where you're either using a web-based interface or writing the client-side software yourself, but it really has proven extremely useful.

A user who we wish to tag is marked as tagged in the user table of the database. When they next log in, a cookie is placed on the browser that they use. From that moment on, any other user-name that logs in via that machine will immediately and automatically be tagged in turn. If that latter user then moves to a different computer and logs in, that computer too will have a cookie on it that marks it as being 'used by trolling users' - and any subsequent logins on that computer by different user names will result in those user names also being tagged. At the individual level this can mean that each new user name can be directly and quickly identified as belonging to a troublesome user, but it gets even more useful when a group of users decides to share a new user name to cause trouble on a board. Every one of them will be tagged the next time they log in.

In order to make the process more useful, you can find ways of adding more information to the cookie. One particularly useful piece of information is which tagged user-name triggered the site to leave a cookie on someone's computer. This information can be particularly useful if you're unlucky enough to have attracted the attention of semi-organised groups of long-term troublemakers, since it allows you to track the course of your tag through the community and - in turn - enables you to clearly see specific relationships between individuals.

What you choose to do with this data is another matter entirely. In order to avoid many of the fairly obvious ethical issues that surround tracking user information at this kind of level, we've operated on the basis of revealing to the user that they have been banned, placing the cookie immediately on their browser and then waiting for them to try other usernames, which in turn will automatically and immediately be banned. Obviously this approach is not without its problems - for a start it makes it easier to determine what is causing the bans (particularly for the more technically literate) and may help a dedicated long-term troll find workarounds - so you might want to obscure the issue a bit by triggering a user name ban after a random number of hours or posts, so that there is a perception of human agency behind the scenes. Either way, it's probably best not to name the cookie after the banning process, as that might give the game away...
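For what it's worth, the core of the technique is only a handful of lines in any web framework. Here's a rough sketch of how it might look - the cookie name, the db object and all of its methods are invented for illustration, and this isn't the actual Barbelith code:

```python
import random

TAG_COOKIE = "prefs_v2"   # a deliberately innocuous cookie name

def on_login(user, request_cookies, response_cookies, db):
    """Spread the 'tagged' marker between accounts and browsers at login."""
    browser_tag = request_cookies.get(TAG_COOKIE)

    if db.is_tagged(user.id):
        # A tagged account marks every browser it logs in from, recording
        # which user-name originally triggered the tag.
        response_cookies[TAG_COOKIE] = db.tag_source(user.id) or user.name
    elif browser_tag is not None:
        # A clean account logging in from a tagged browser inherits the tag,
        # along with the identity of the account that started the chain.
        db.tag_user(user.id, source=browser_tag)
        response_cookies[TAG_COOKIE] = browser_tag

def maybe_ban(user, db):
    # Rather than banning a freshly tagged account instantly, wait a random
    # number of posts so the ban looks like a human decision, not automation.
    if db.is_tagged(user.id) and db.posts_since_tag(user.id) > random.randint(3, 20):
        db.ban(user.id)
```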

Tom Coates, 2003-10-28T22:28:16+00:00

On building killfiles into your communities...
http://www.everythinginmoderation.org/2003/10/on_building_killfiles_into_your_communities.shtml

One of the most commonly discussed (and employed) kinds of partly collaborative or distributed moderation is the killfile - a simple way of allowing users to choose to ignore posts by another user, which first emerged as a practice among early users of Usenet on Unix. The same technique can be employed (with caveats) in message-boards and mailing-lists, and many consider it to provide satisfactory relief from troublesome users. If you want a comprehensive guide to setting up Usenet killfiles, then the Killfile FAQ is the place to go.

But while they seem like an obvious solution to user-on-user fighting and troll-avoidance, killfiles (and other forms of 'ignore user' functionality) have considerable problems and by themselves are not particularly effective ways of helping a community self-manage. For a start, they immediately and inevitably begin fracturing the ways in which individuals see the community around them. If every user has a different killfile (or even if a substantial minority do) then each has a different view of the community around them, of who has spoken, who is silent and what the gist of the current conversation might be. The consequences may not be catastrophic, but they are irritating - people start talking at cross purposes, individuals talk over one another, repeating suggestions, misinterpreting cues. In fact the only circumstances in which killfiles work are when pretty much everyone in the community decides to killfile precisely the same people - or when the culture is strong enough that they simply won't be abused. These circumstances are ... rare ...

Fundamentally, in their devalued and abused form, killfiles are not about community at all, they're about individualism - they're about trying to find a way to minimise an individual's exposure to problems rather than (1) confronting and resolving the problem or (2) organising to minimise the community's exposure to problems. The clearest evidence of their basic redundancy as a structuring principle is what any community that has substantial killfile-usage looks like from the outside or to a new member - incoherent, fractured, troll-filled and consumed with infighting.

The killfile behavior, is simply put: "sweep-under-the-rug", "bury-head-in-sand" kind of behavior. Imagine that in a gathering where if everyone totally ignores other's voices except their own kind, then what cacophony would result? Similarly, if we ignore the problem of crime by simply using larger locks for our own doors, what consequence would result?

We are all human beings. Our surroundings are our organs and affects us dearly. In newsgroups, inevitably there will be certain individuals with foul breath at times. Killfile mechanism is a very good feature to battle such annoyances. This is not a reason for falling for the convenience of blocking your ears from dissenting voices or the nonconformists.

So is there any hope for killfiles, or have they been debased completely? In any community that allows moderation, full-kilter killfiles (or basic 'ignore this user' functionality) should fundamentally be the last resort of the individual, and should be treated with suspicion by the person running the community. If a user is abusive enough to be a threat to the community as a whole, or is an overt harasser, then community-wide measures should be taken. If they are simply annoying other individual users, then the community or the individuals concerned need to be encouraged to deal with the problem themselves. Providing spaces for this kind of quiet engagement can be useful, or it can make matters much worse. Another approach could be to institute a user rating scheme that individuals could feed into (along the lines of Slashdot's system).

If you're sure that you do want to try some form of user-on-user ignore functionality, then look specifically at what makes killfiles so damaging and try to mitigate those costs in some way. On the Barbelith Underground community that I run (currently closed to new members) we tried a cut-down approach to ignoring users that was designed to help the individual in the short term without compromising the community in the long term. Ignore functionality is readily available to all users, but you can only choose to ignore another user for seven days. During that week - from the blocking user's perspective - the troublesome presence is truncated but not removed. Instead of their posts, a message continually reminds the blocking user: "You have chosen to block this user. Click here to stop ignoring them."
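A rough sketch of that kind of time-limited ignore might look something like this - the names are invented and this isn't the actual board code, but it shows the shape of the thing:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

IGNORE_DURATION = timedelta(days=7)

# Maps (blocker_id, blocked_id) to the time at which the block lapses.
ignores = {}

@dataclass
class Post:
    author_id: int
    body: str

def ignore(blocker_id, blocked_id, now=None):
    # Blocks are always time-limited; there is no permanent ignore.
    now = now or datetime.utcnow()
    ignores[(blocker_id, blocked_id)] = now + IGNORE_DURATION

def render_post(post, viewer_id, now=None):
    now = now or datetime.utcnow()
    expiry = ignores.get((viewer_id, post.author_id))
    if expiry and now < expiry:
        # The post is truncated rather than removed, and the viewer is
        # reminded of their choice every time they see the placeholder.
        return "You have chosen to block this user. Click here to stop ignoring them."
    # Lapsed blocks simply expire; the viewer has to actively renew them.
    return post.body
```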

The aspiration for this model was that people would have a mechanism to defuse profoundly aggressive and socially dangerous situations, but that they would be continually reminded of that decision, given the opportunity to change it, and forced to renew it regularly should the aggravation persist. And the results? It's difficult to tell precisely, because of the level of information we chose to track, but the board certainly hasn't suffered from any fragmentation and the functionality still gets used each month...

Tom Coates, 2003-10-27T21:04:49+00:00

BBC DNA Moderation Guidelines...
http://www.everythinginmoderation.org/2003/10/bbc_dna_moderation_guidelines.shtml

Apologies for the long absence - hopefully over the next few days I can catch up with the backlog of interesting links I've been sent, as well as write up some more guides to moderator techniques, technical or otherwise. But to start off with, I thought I'd draw people's attention to some of the published moderation guidelines for the BBC's DNA-based sites. They are - for the most part - highly polished and thoughtful, and manage to be clear about those circumstances where moderation might have to occur. More importantly, they state precisely what will happen under those circumstances (which, for legal reasons, are hopefully rare) in which a post might be moderated. Here's one of the clearest expositions and statements of intent I've ever read:

For every single piece of new content, Moderators will do one of the following:

Pass it - When your content is passed, you won't notice a thing. A Posting that has been passed will be visible forever (unless a complaint is made about it and upheld - see below). An article that has been passed will remain visible until it is edited, or a complaint upheld about it, at which time it is flagged for moderation again, and the whole process is re-applied.

Refer it - In this case the Moderator is unsure about whether the content should be passed or failed, so they queue it, pending a decision by the Editors. Because the Editors are not available 24 hours a day, seven days a week, this act of referring has to hide the Posting or article while the decision is being made. This doesn't mean that your content has been removed; it just means that the content has been referred. Instead of the content, the site shows a message saying that it is currently referred.

Edit it - In some very specific cases the Moderators will edit content to fit the House Rules. Content will only be edited by Moderators in the following, very strict circumstances. In no way do we allow any other kind of editing, and if you think your content has been changed beyond these rules, please let us know by replying to the moderation email you receive, and we'll investigate. It's vital that Members do not feel that their content is being hacked around unnecessarily.

Swear-words will be ****'d out. The Moderators will **** out the entire word, except for the first and last letters.

Unsuitable URLs in Conversation Postings (not articles) will be removed, and replaced by [Unsuitable link removed by Moderator]. Broken links in Conversation Postings (not articles) will be removed, and replaced by [Broken link removed by Moderator]. We edit out the link rather than fail the Posting because Members cannot edit Postings and put them back up, and it's not fair to fail an entire Posting just because of an unsuitable URL or broken link. However, Postings which consist of nothing but unsuitable URLs will simply be failed. Please see the House Rules for information on what constitutes an unsuitable URL.

Personal addresses, telephone numbers and specific contact details (except for emails, instant messaging addresses and so on) will be removed, and replaced by [Personal details removed by Moderator].

Fail it - In this case the Posting or article will be hidden from view, as it breaks the House Rules sufficiently for us to remove it from the site (so it might be defamatory, plagiarised or something else serious). In the case of Postings, this means the Posting will be hidden forever and replaced by a message saying the Posting is removed (unless we later override the Moderator's decision - see below for information on contesting a Moderator's decision). Each article that fails moderation is hidden, but to fix this you can go to your Personal Space, pick out the relevant A number in the 'articles' section, click on the 'Edit' link, edit out the offending material, and reactivate it.
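Just to illustrate the editing rules quoted above, here's a small sketch of what they describe. This is purely my own reading of the stated policy - it isn't the BBC's implementation, and the word list and URL check are placeholders:

```python
import re

SWEAR_WORDS = {"example"}   # placeholder word list

def mask_swear_word(word):
    # "**** out the entire word, except for the first and last letters"
    if len(word) <= 2:
        return word
    return word[0] + "*" * (len(word) - 2) + word[-1]

def edit_posting(text, url_is_suitable):
    # Mask swear words while leaving the first and last letters visible.
    words = [mask_swear_word(w) if w.lower() in SWEAR_WORDS else w
             for w in text.split()]
    text = " ".join(words)

    # Replace unsuitable links in postings rather than failing the whole post.
    def replace_link(match):
        url = match.group(0)
        return url if url_is_suitable(url) else "[Unsuitable link removed by Moderator]"

    return re.sub(r"https?://\S+", replace_link, text)
```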

Tom Coates, 2003-10-27T18:55:09+00:00

On stealth moderation or "Blame the technology"...
http://www.everythinginmoderation.org/2003/10/on_stealth_moderation_or_blame_the_technology.shtml

One of the biggest problems with finding ways to moderate users is how to handle the reactions of the people you moderate. If a user is banned or one of their posts is deleted, then - for the most part - it's a total fantasy to imagine that they'll look back at their actions with shame, accept that the response was justified and move on to other people's sites and services, having learned their lesson and ready to operate more responsibly. For the most part, deleting posts and banning users is considered either "unfair", "excessive" or even an overt act of aggression against the user concerned - no matter what kind of appalling behaviour they've been undertaking. Some users genuinely believe that their activities online have no consequences and hence that they cannot be held responsible for them.

If users believe themselves to have been 'unfairly attacked', then they'll respond in kind - a user who feels themselves to have been wronged will often use every mechanism at their disposal to make their position clear to the rest of the community. Their aggressive actions will be stepped up, their contributions will become more confrontational and (if they've been banned) they'll try to find every possible way of regaining access, whether by re-registering with a different user name (often using a free e-mail address), using other computers or changing ISPs (to circumvent IP banning), or by harassing other members of the community who they feel have been complicit with that action 'against them'.

Given that there are so many ways in which a user can cause problems for a community, and given that it's extremely difficult to ban users outright, the question for people who run online communities has to be how to avoid creating situations in which users feel they have an axe to grind. One approach is purely social - and brings up the non-technical aspects of moderation. It's important to have a clear and explicitly stated set of rules that make it clear what is and isn't acceptable behaviour, a clear set of procedures that are undertaken when a user misbehaves, and a clear path for appeal and rehabilitation that makes punishments easily understood and non-final. Having the patience to explain this process to users is a necessity, and you're quite likely to find discussion of it becoming a staple part of the community, which can be quite wearing and can distract from the community's ostensible point, but fundamentally it will save you considerable time in the long term.

Another technique is purely technical and is based around finding ways to make users go away on their own, to leave your community without having to be banned. If it sounds duplicitous, it's because it is duplicitous, but it can work extremely well. The technique is well described by Philip Greenspun halfway through Chapter 15 of his Guide to Web Publishing:

I felt humiliated by the situation but for a variety of annoying reasons, it was taking me months to move my services to Oracle. Then it hit me: Sometimes a system that is 95 percent reliable is better than a system that is 100 percent reliable. If Martin was accustomed to seeing the system fail 5 percent of the time, he wouldn't be suspicious if it started failing all of the time. So I reprogrammed my application to look for the presence of "Martin Tai" in the name or message body fields of a posting. Then Martin, or anyone wanting to flame him, would get a program that did

The result? Martin got frustrated and went away. Since I'd never served him a "you've been shut out of this community" message, he didn't get angry with me. Presumably inured by Microsoft to a world in which computers seldom work as advertised, he just assumed that photo.net traffic had grown enough to completely tip Illustra over into continuous deadlock.

This approach works extremely well in a whole variety of circumstances. At a company I worked with, we would mark particularly troublesome users with a flag on their user record, and then whenever they tried to use the website we'd put in a random delay between their request and the page being returned. After a while the site became functionally unusable for them and they'd simply leave. On the web this kind of functionality could easily be circumvented by signing in under a different user-name - so we built it in such a way that it would leave a cookie on their browser that wasn't attached to their user name but was set when that user-name logged in. The cookie would last as long as it was able, and any user logged into the board via that browser would experience the same delays. The effects were dramatic and highly successful - bad users would leave as a result of frustration without causing a fight. The service simply wasn't particularly good as far as they were concerned.
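Here's a rough sketch of that kind of stealth delay - the names are invented, the request handler is deliberately simplified rather than tied to any particular framework, and render_page just stands in for whatever normally builds the page:

```python
import random
import time

STEALTH_COOKIE = "session_id2"   # named so it doesn't advertise its purpose

def handle_request(request, response, db):
    user = request.user
    flagged = (user is not None and db.is_flagged(user.id)) \
        or STEALTH_COOKIE in request.cookies

    if flagged:
        # Degrade the experience rather than banning: a random delay makes
        # the site feel broken or overloaded for this browser, not censored.
        time.sleep(random.uniform(2.0, 15.0))
        # Attach the marker to the browser too, so switching to a new
        # user name on the same machine doesn't reset the treatment.
        response.cookies[STEALTH_COOKIE] = "1"

    return render_page(request)   # stands in for normal page rendering
```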

One problem with this approach - of course - is that it goes against the nature of established brands and service-providers to purposefully break their service for some users. It's always possible that it might affect how their brand is perceived and generate negative word-of-mouth. But when you consider the alternatives - rogue users manipulating and posting on a board without regard for any rules and actively trying to destroy whatever community you've created - the value of stealth moderation techniques like this becomes clear...

Tom Coates, 2003-10-18T14:09:25+00:00

Moderation systems for kids?
http://www.everythinginmoderation.org/2003/10/moderation_systems_for_kids.shtml

A few weeks ago the Washington Post put up an article called Cliques, Clicks, Bullies And Blogs all about how children and teenagers are using the affordances and limitations of social software and community spaces as mechanisms to help them assert their dominance (often through bullying) in schools' social shark tanks:

"The Internet has transformed the landscape of children's social lives, moving cliques from lunchrooms and lockers to live chats and online bulletin boards, and intensifying their reach and power. When conflicts arise today, children use their expertise with interactive technologies to humiliate and bully their peers, and avoid reprimand from adults or foes. As parents plead technological ignorance with a my-Danny-hooks-everything-up sort of pride and many schools decline to discipline "off-campus" behavior, the Internet has become a free-for-all where bullying and cruelty are rampant."

These unpleasant but intriguing situations are almost directly illustrative of some of Clay Shirky's points in his article A Group is its Own Worst Enemy. In the article he talks about the importance of structuring the space for social interactions online. He gives an example of a community that became overrun with new members who didn't have any respect for the established patterns of behaviour that had evolved:

"The place that was founded on open access had too much open access, too much openness. They couldn't defend themselves against their own users. The place that was founded on free speech had too much freedom. They had no way of saying "No, that's not the kind of free speech we meant." But that was a requirement. In order to defend themselves against being overrun, that was something that they needed to have that they didn't have, and as a result, they simply shut the site down. Now you could ask whether or not the founders' inability to defend themselves from this onslaught, from being overrun, was a technical or a social problem. Did the software not allow the problem to be solved? Or was it the social configuration of the group that founded it, where they simply couldn't stomach the idea of adding censorship to protect their system. But in a way, it doesn't matter, because technical and social issues are deeply intertwined. There's no way to completely separate them."

That space - where technical and social issues are deeply intertwined - is what I consider to be the heart of the issue of moderation, which is almost as much about creating spaces where people have a purpose and an aim as it is about finding ways that those groups can effectively be managed or self-manage. Moderation systems are precisely designed to cool off people's worst excesses and to try - through continual pressure and effort (either manual or technological) - to find processes and systems that make a community able to survive significant pressures and still achieve useful things.

The most interesting analogue between the two articles is in the specific kinds of behaviour that the groups are undertaking. Clay details some interesting chunks of Bion:

The first is sex talk, what he called, in his mid-century prose, "A group met for pairing off." And what that means is, the group conceives of its purpose as the hosting of flirtatious or salacious talk or emotions passing between pairs of members.

The second basic pattern that Bion detailed: The identification and vilification of external enemies. This is a very common pattern. Anyone who was around the Open Source movement in the mid-Nineties could see this all the time. If you cared about Linux on the desktop, there was a big list of jobs to do. But you could always instead get a conversation going about Microsoft and Bill Gates. And people would start bleeding from their ears, they would get so mad.

The third pattern Bion identified: Religious veneration. The nomination and worship of a religious icon or a set of religious tenets. The religious pattern is, essentially, we have nominated something that's beyond critique. You can see this pattern on the Internet any day you like. Go onto a Tolkein newsgroup or discussion forum, and try saying "You know, The Two Towers is a little dull. I mean loooong. We didn't need that much description about the forest, because it's pretty much the same forest all the way."

These three patterns - groups ostensibly about something that are instead actually obsessed with sex, cliques and "things that an individual associates their identity with" (you could probably make some analogy with brands here without much trouble) - are familiar to all of us. But we're also more than aware that when we were teenagers we were more susceptible to that kind of behaviour than we are as adults. Which brings me to an intriguing question. Given children's uncanny ability to manipulate authority figures and to feel out the rules of any given situation and attempt to exploit them - what kind of moderation process might both reduce the incidence of this kind of cliquey, pack-like bullying and implicitly educate the children in question about why that behaviour is counter-productive? Would a distributed moderation system that put power in the hands of all the children be a useful or counter-productive approach? And if so, how should their power scale? Should a clump of twenty people together have radically more power in their community than five, or one? Answers on a postcard please...

Tom Coates, 2003-10-15T09:00:48+00:00

Welcome to Everything in Moderation...
http://www.everythinginmoderation.org/2003/10/welcome_to_everything_in_moderation.shtml

After a few days of not really being able to get anything useful done, a massive battle with the CSS evil of IE 6, and considerable pondering of older links about moderation that I never managed to post on plasticbag.org, I can now finally welcome you all to Everything in Moderation - a new weblog designed to collect links and commentary on both technical and social ways of managing online communities and user-generated content.

Online community development is one of my passions, and I have designed and/or managed social software "solutions" for organisations like UpMyStreet, EMAP and the BBC (often alongside Cal Henderson and/or Denise Wilton). Moderation systems are a particular sub-passion of mine. In the abstract, people can think they sound bland, technical or intimidating, but fundamentally moderation is really about all those parts of an online community that stop it just being a place where people stand and shout randomly at each other. They're about finding the structures and the mechanisms, the techniques and the sensitivities which will help a community form out of a seemingly random clump of individuals, which will help that community defuse unpleasant situations without its members killing each other, and which will protect that community from attack.

A rather unfair representation of 'moderation' would be to say that it's about moderators with special powers who operate as 'police'. Moderators can fulfil a whole variety of roles, and moderation systems too can be extended, experimented with and pushed in all kinds of interesting directions. If you push the systems in one direction they become gestural political systems, and the people within them become arbitrators, social workers, jurors, voters, advocates and negotiators. If you push them in another way, and build them into the very functioning of the board itself, then it's almost like you're building ways of interacting into the physics of the world you're creating (whether that world be a shared representation of a realm like Everquest or just a light shell of interactions that lies over everyday encounters and relationships between friends). Moderators can be seen as gardeners or janitors, emperors or gods, and the systems that are employed can be baroque and imposing, collaborative and pragmatic, or ambient and practically invisible.

Hopefully some of this will become clearer over the next few months.
But in the meantime, there's a whole block of resources across the internet, articles and issues for me to think and talk about. I've been collecting interesting articles for ages without finding much of a use for them until now, so you can expect there to be quite a lot of catching-up and scene-setting on the site over the next few weeks.

After that, what's the site here for? Fundamentally it's here to be a resource for people who work or play in fields like social software, user-generated content and online communities - whether the specific form be instant messaging, MMORPGs, social-network visualisers, message-boards, weblogs, user-reviews, peer-to-peer programs or wikis - anything where individuals or groups can veer out of control online and negate the experience and potential utility that other people might get from their group engagement. I'd like it to be a place where people throw in links that they think are interesting, so that they can share the information they've found with loads of other interested and engaged parties from parallel disciplines - sharing and swapping techniques, ideas and thoughts on how to improve all the communities we work with and play in... So if you've got any tips for interesting or useful sites, then let me know by e-mail from the menu on the right...

Tom Coates, 2003-10-15T00:12:20+00:00

On four types of moderation...
http://www.everythinginmoderation.org/2003/10/on_four_types_of_moderation.shtml

There are generally considered to be four major (rough) categories of post-level (rather than user-level) moderation systems operating on the net today. These categories are pre-moderation, post-moderation, reactive moderation and distributed moderation.

Pre-moderation
Because of legal anxieties, some sites and mailing lists operate on the principle that every piece of user-generated content that could go up onto a site should be checked by a moderator (or sometimes - in extreme cases - a lawyer) before it goes live. As a rule, this method of moderation is the death of an online community, but there are times when (i) it's the best way of handling user-generated content that isn't specifically community-based (for example Amazon's product reviews and IMDB's film reviews) or (ii) it is simply too dangerous to use any other kind of moderation scheme. One form of danger is concerned with liability: some message-boards - particularly those that concern themselves with topical issues or celebrities - are prone to libel and can be a source of legal anxiety for the organisation that hosts them (particularly if they're relatively large organisations with enough money to make them worth suing). Other kinds of danger are more overtly unpleasant - message-boards and mailing lists aimed at children are likely to require at least some form of pre-moderation-based management. Under these circumstances the cost of pre-moderation (which is high) can be a significant disincentive to building online communities of these kinds.

Post-moderation
The big peril of pre-moderation is that it kills online communities stone dead. The immediacy that people want when they press their submit button is fundamental to all online communities and most sites based around user-generated content. That's where post-moderation comes in. Post-moderation is again based on the assumption that - for security, legal or behavioural reasons - every piece of user-generated content needs to be checked, but rather than checking it all before it goes live, it is instead checked as soon as possible afterwards. It's not as secure an approach as pre-moderation - after all, dubious content will be live on your site - but it does give communities a space to breathe and users the instant feedback they need when they want to put something online. It's worth remembering, however, that every post still has to be read and checked - and that's still profoundly time-consuming and expensive.

Reactive-moderation
Reactive moderation is based on the assumption that if something bad is happening on a site, then the users will spot it quickly and can alert the moderators. This is becoming by far the most common form of moderation for message-boards in particular, because the cost of maintaining pre- or post-moderation is so extreme, and because the legal situation increasingly seems to be based around the responsibility of community moderators to remove dubious content, rather than to prevent it being posted in the first place. It can also be more responsive than post-moderation, because only the trouble-generating content needs to be checked and because your community can direct you straight to the problematic areas. You are - however - relying on the group of people who you least want to see abusive content to tell you when they've found it - and not all organisations are comfortable with that, particularly the highly brand-conscious.

Distributed-moderation
Distributed moderation is - for the most part - not something that companies tend to rely on as yet. Fundamentally, the principle that a community can self-moderate and collectively decide what is appropriate and inappropriate behaviour for itself can seem a worrying jump in the dark for a company to make, so for the most part distributed moderation of any kind often consists of content-rating schemes and is overlaid with aspects of the other moderation systems. Prime examples of this kind of distributed rating system are Slashdot and Kuro5hin.
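If it helps to see the four categories side by side, here's a small sketch of how each one might decide whether a freshly submitted post is visible - the names are invented, and it's obviously a simplification of how real systems combine these approaches:

```python
from enum import Enum

class Mode(Enum):
    PRE = "pre-moderation"
    POST = "post-moderation"
    REACTIVE = "reactive moderation"
    DISTRIBUTED = "distributed moderation"

def is_visible(post, mode, viewer_threshold=0):
    if mode is Mode.PRE:
        # Nothing appears until a moderator has explicitly approved it.
        return post.approved
    if mode is Mode.POST:
        # Everything appears immediately; moderators check it afterwards
        # and take down anything that fails.
        return not post.failed
    if mode is Mode.REACTIVE:
        # Content stays up unless users have complained and a moderator
        # has upheld the complaint.
        return not (post.flagged and post.complaint_upheld)
    if mode is Mode.DISTRIBUTED:
        # The community's collective ratings decide; each reader picks the
        # score threshold below which posts are hidden from their view.
        return post.score >= viewer_threshold
```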