Thoughts on ‘Hack Harassment’ and the challenge of ending abusive online behavior

After a week of product announcements from the annual Consumer Electronics Show in Las Vegas, I find the only one that has really left an impression on me is the Hack Harassment campaign.

Yesterday, Intel, Vox Media, Re/code, and Lady Gaga’s Born This Way Foundation said they had launched Hack Harassment, an organization that will hold a series of hackathons with the goal of identifying and curbing abusive behavior on the Internet.

Upon hearing about it, I had two immediate and opposite gut reactions: Applause for calling out this massive problem; disappointment over the plan of action.

I’ll admit, it feels kind of crappy to even hint at criticism of the initiative. It’s like being critical of someone who starts the Society Against People Setting Cats On Fire. I mean, of course we’re all against it. Nobody is likely to start a pro-harassment campaign.

Certainly the group’s identification of the problem is spot on. The days when we thought the Internet might be some great democratizing tool are long since passed. It has devolved into a cesspool of hatred, further enabled by the rise of social media services that have not done nearly enough to protect users from this tsunami of abuse.

I would completely agree with the group’s statement: “Today’s tech and media leaders have a collective responsibility and capability to identify solutions that can help reduce different forms of online harassment.”

But…it’s also a very Silicon Valley mentality to believe that the answer to a problem with technology is more technology.

I’m just not sure that hackathons are the solution or that there’s a lack of awareness of the issue. I don’t need a gaggle of programmers and engineers coughing up new algorithms to tell me that Twitter (to just pick on one obvious target) has become a sewer of harassment.

Twitter, which has acknowledged the problem, has more than enough internal tools for tracking this stuff. Indeed, the company has been on a crusade of sorts over the past year to track down trolls and weed them out.

If that’s not enough, there has been growing political pressure on the U.S. Department of Justice to be more aggressive in prosecuting online bullying and harassment on Twitter.

And yet, as I write this, my use of Twitter continues to decline because, well, who needs the abuse? And whatever annoyances I suffer, they pale in comparison to those of most women I know who routinely still face blowback for offering even the most modest of opinions on Twitter.

Has Twitter done enough, then? And who gets to decide?

This leads to the challenges facing the Hack Harassment campaign:

First problem: Define harassment

“Harassment” is often, though not always, a subjective thing. There are many cases where it veers into the extreme and becomes obviously harassment. Most of us, for instance, would likely agree that the Florida professor who lost his job after writing Facebook posts to Sandy Hook parents claiming they were lying about their son being killed crossed a bright line of harassment. And much of the GamerGate abuse went far over that line, to be sure.

But many other times, it falls into a surprisingly gray area. For instance, on a friend’s Facebook thread recently, some commenters were joking about someone shooting President Obama. Hilarious, right? I reported the comments through Facebook’s official system. A couple of days later, I got an email back saying the comments didn’t violate Facebook’s terms of service or community standards.

Really? So, what does then?

Second problem: Authority

The group is notable for the credibility of the players (Intel, Re/code, The Verge, the Born This Way Foundation). And again, I don’t doubt for a moment their sincerity or intentions.

But…how much weight will anything done by this group really carry? Sure, it’s a noble effort. But do I think someone tweeting abusive comments about women or minorities is really going to stop because Intel or The Verge says what they’re doing is wrong? Probably not. What’s probably going to happen (as Re/code’s Kara Swisher says in her own post) is that calling out someone’s public abuse will amp up their rhetoric, rather than tamp it down.

And if they get knocked off one service, they’ll just pop up under another name, or on another service.

But beyond that, what happens when it comes time to start pointing fingers and making lists and putting names on those lists? Not only does this potentially raise some tricky legal issues for those involved, but it’s not clear the broader base of Internet users will grant this (or any) coalition, however well-intentioned, the moral authority to do such a thing.

Even in the case of the Florida professor who lost his job, I saw plenty of people on Facebook outraged, claiming that he had been the victim of a witch hunt by liberals and the media. Will a public campaign squelch such people, or just make them martyrs to some crazy cause?

Third problem: Now what?

Assuming that tools allow users to identify and call out companies that are failing to do enough, or make it possible to identify habitual online abusers, well…then what? Using the example of Twitter again, will a group of digital detectives seek to figure out the real identity of the bad actors and publicly shame them? And would public shaming actually curb the behavior? (I’m not convinced. This already happens on Facebook, where people often make questionable or outright offensive comments under their real identities.)

Who will curate the public list of shame? Will we create maps of Internet abusers like some states have for sex offenders?

And on the corporate side…will we see the day when the CEO of Intel stands up and publicly denounces another Silicon Valley company for failing to do enough to fight abuse?

All of this is to say that I think the solutions are likely already obvious…but also harder.

If we want to stop individuals harassing people, there need to be greater legal consequences, via stricter law enforcement and prosecution. And where necessary, legal reform to make the rules clearer. I just don’t believe that the people engaging in this behavior are susceptible to peer pressure.

On the company side, the solution is economic: Boycott.

If you are critical of a service but continue to use it, you are giving it your tacit economic support to continue with the status quo. The company’s goal then becomes not to fight trolling, but to do just enough from a PR perspective to give the impression that something is being done.

All the groups involved in the announcement this week have massive social media followings. Imagine if they had stood up on stage in Las Vegas and declared that they were pulling the plug on their accounts until things changed or immediate actions were taken.

How many fans would follow Lady Gaga if she pulled the plug on her Twitter account?

I think that would have a greater impact than putting some hackers in a room and issuing a report in a few months with proposals for next steps.

Ultimately, the only message companies will listen to is one that hits them in their wallet. And we don’t need an app or a browser plugin to do that.