There’s a lot of talk in web communities about “shaping the user experience,” by which they generally mean “you figuring out where to click on the page to do the stuff you want.” There are conferences devoted to user interfaces, and occasionally, when you get to the next level, you find people talking about the emotional experiences they want people to have when they come to a website – a “professional” site, a “friendly” site, and so forth.

But in social media, nobody ever seems to think about what sort of experiences the users will have with their abusers.

What they think about is, “What will make us look bad?”

So they go to their lawyers, and they ask, “What are we legally culpable for?” And they go to their publicity departments (if they have one) and they ask, “What will make us look terrible?”

But to a company’s cold bottom line, a troll calling women names all day gets more advertising hits. He is a devoted user. And so they are loath to ban anyone, because these companies make money off of large user bases, and kicking someone off risks trouble.

If you look at what actually gets most people banned, it’s generally not anything that’s dampening the “user experience” of the site – it’s stuff that’s going to get the company itself in trouble.

So they’ll ban for pictures of women breastfeeding, because God forbid the advertisers see boobs and the site gets marked as porn – but people sending abusive emails in private, well, that’s a tougher case.

Over on FetLife, if someone sends you an email telling you you’re a dumb bitch who deserves to be knifed, *you* will get banned for posting screencaps of those abusive emails with that person’s name attached. Because that opens them up to lawsuits. The stuff the abuser sent in private? Well, that’s bad, but…

…we’ll get back to you.

We don’t want to alienate anyone.

And so what happens is the inevitable flurry of flame wars and mobbing, where people a) achieve popularity on the site, b) get offended, and c) send their followers to assault someone they disagree with. Occasionally someone gets banned for something too outrageous, but the daily aggressions are seen as just a cost of doing business on the Internet. If you get popular, you’re going to get hatred.

If a site has moderators, well, moderation has never been a priority cashwise, and so they’re usually overwhelmed and only deal with the biggest cases. If they have blocking tools, those tools are usually not equipped to handle the devoted troll opening up a hundred new accounts, or a sudden influx of legit users sent over from the latest popular user’s anger-dump, or a coordinated sealioning attempt.

And I think the next generation of social websites are going to have to start thinking about “the user experience” in terms of “What emotional experience do we want the user to have while they’re here interacting with other people?”

Or, perhaps more significantly: “What culture are we fostering here?”

Look, there’s nothing inherently wrong with a 4chan-style culture where everyone posts anonymously and savagery is the word of the day. That’s one way to do a website, and it has become somewhat of a default because it’s low-maintenance. You see Reddit moving closer to that – “Hey, we want free speech, so – talk about whatever you want.”

But Facebook has free speech, and it’s often this monstrously uncomfortable place where all your relatives go to yell their crazy conspiracy theories at you.

I think companies are going to have to start prioritizing what sort of culture they want to have, and look beyond “We want a friendly blue portal” and shift towards “We want to encourage reasoned arguments without name-calling” or “No personal attacks” or “If people start yelling, we need to calm that down.”

And I think, ultimately, it’s going to come down to defining “What is abuse?”

Because as someone who posts on the Internet a lot, I see a lot of different definitions of “abuse.” For some people, “abuse” doesn’t exist until someone comes to their house personally – insults are just a part of the fog of the Internet culture. On the other end of the spectrum, some think “abuse” is being called out for saying something questionable, no matter how gently that rebuttal is made.

But in the future, I think if you’re going to be designing your website to last, you’re going to have to define abuse, and then work hard to prevent it.

Some of those preventions can be technical: forcing accounts to be linked to phone numbers would prevent a lot of abusive sockpuppet accounts. Finer-grained blocking tools, like blocking people based on the number of days it’s been since they’ve created their account, or blocking based on their number of posts, or blocking based on a percentage of your friends who have blocked them, would also be helpful.
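Those finer-grained blocking rules could be combined into a single check. This is a minimal sketch, not any real site’s API – the `Account` fields, the rule thresholds, and the function names are all invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical account record -- the field names here are illustrative,
# not drawn from any real platform's data model.
@dataclass
class Account:
    created: date                              # when the account was made
    post_count: int                            # lifetime posts
    blocked_by: set = field(default_factory=set)  # ids of users who blocked this account

def should_auto_block(account, viewer_friends, today,
                      min_account_days=7, min_posts=10,
                      friend_block_threshold=0.25):
    """Return True if any of the viewer's finer-grained block rules fire:
    the account is too new, has too few posts, or has been blocked by
    too large a fraction of the viewer's friends."""
    if (today - account.created).days < min_account_days:
        return True
    if account.post_count < min_posts:
        return True
    if viewer_friends:
        frac = len(account.blocked_by & viewer_friends) / len(viewer_friends)
        if frac >= friend_block_threshold:
            return True
    return False
```

The third rule is the interesting one: it turns your social graph into a shared immune system, so a troll blocked by a quarter of your friends never reaches you at all, even from a fresh account your friends haven’t seen yet.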

And technical analysis can probably have content filters scanning for potential hotspots and alerting moderators to them – I’m certain there are analyses of text that could find abusive threads – but ultimately, it’s going to come down to companies starting to say “Abuse cannot be defined as just what will get us in trouble, but as ‘What sorts of unpleasantries will make people leave our site?'”
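A crude version of that moderator-alerting filter might look like this. A real system would use a trained classifier rather than a phrase list; the patterns, threshold, and function names below are all assumptions made up for the sketch:

```python
import re

# Toy pattern list standing in for a real abuse classifier.
ABUSE_PATTERNS = [r"\bdumb bitch\b", r"\bdeserves? to be knifed\b", r"\bkill yourself\b"]

def abuse_score(messages):
    """Fraction of messages in a thread matching any abusive pattern."""
    if not messages:
        return 0.0
    hits = sum(1 for m in messages
               if any(re.search(p, m.lower()) for p in ABUSE_PATTERNS))
    return hits / len(messages)

def flag_for_moderators(thread, threshold=0.3):
    """Alert moderators when a thread's abuse score crosses the threshold."""
    return abuse_score(thread) >= threshold
```

The point isn’t the pattern matching – it’s that the filter surfaces *hotspots* for humans to review, instead of waiting for the company’s lawyers to notice a problem.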

That will be an uncomfortable day, because it will limit your audience. Take me, for example; whatever site I’m on, I am prone to reacting to other people’s posts. If I’m on Twitter and someone says something I consider dumb, I’ll link to it with some snarky commentary. If I’m on FetLife and someone does something unwise, I often write a reaction post to that.

And telling me that part of your site’s culture is “We don’t allow reaction posts that name other people specifically,” well, I won’t make an account at that site. I won’t post. I’d find that extreme. Likewise, if you decide “no breasts anywhere” and the pro-breastfeeders show up, well, keeping them away will limit your audience.

But here’s the trick: if you define a civil experience correctly, other people will want to have that experience.

Even if those people are Not Me. Especially if those people are Not Me. Because part of any good gathering is defining who’s not welcome, and encouraging those who stay to follow the guidelines.

And when you define abuse in a way that’s clear about how you’re defining civility, and enforce it properly, I think you’ll ultimately find greater user retention. Because yeah, maybe that heavy-use troll I referenced earlier is creating a lot of ad hits by emailing fifty people a day to call them names, and is generating revenue…

…but the bottom line is not just composed of what you have, but also your costs. That troll is costing you all the other people s/he is driving away. And whether you like it or not, that troll has to be a part of how you’re designing your website’s user experience, because the first step to containing trollish behavior is in defining “What a troll is,” and companies thus far have largely defined trolls as “People who get us, the company, into trouble.”

It may take another couple of decades, but I think eventually corporations are going to start defining trolls as “Customers who drive away the customers we actually want to keep around.” And when that happens, I think we’ll see a much better Internet.

2 Comments

This is really perceptive. I often wonder if companies think about the thousands of users they drive away by coddling trolls and allowing harassment. For example, I never visit Reddit for this reason unless it’s a “safe” section of the site devoted to cute animal pictures. I never play online games, even though I love video games. I just don’t need jerks in my life. It’s sad.

Why would anybody online prevent abuse, harassment and so on, if they do not even bother in real life?
Victim-blaming is much easier than actually going to the source of the problem.
On the internet it is even easier: fake account, et voila.
I have gotten shit from somebody for talking about a diva cup, so I must be a dirty b**** who should be hanged.
Technology is moving so fast, law and regulation can’t keep up, so anything goes.

Just my two cents, from a breastfeeding mother of a 13-month-old son :p