Tag: abuse

Millions of American children were placed in orphanages. Some didn’t make it out alive.

Finally getting back to read the second half of this harrowing article. People can be horrid, but it really pains me that the Catholic Church could have been this violently horrible. This makes the Inquisition look like a field day.

"I am really pleased to see different sites deciding not to privilege aggressors' speech over their targets'," Phillips said. "That tends to be the default position in so many online 'free speech' debates which suggest that if you restrict aggressors' speech, you're doing a disservice to America—a position that doesn't take into account the fact that antagonistic speech infringes on the speech of those who are silenced by that kind of abuse." ❧

What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects.

This may be one of the must-read articles of the year. It describes a small microcosm of what is happening on the internet that needs to be fixed. It seems innocuous, but its long-term effects will be painful.

Sooner or later, enough people I like are going to abandon the service, and the pain-to-pleasure ratio will tip unfavorably. I don't know how Twitter will survive 2017 without making some drastic changes to its service. Maybe it's already too late.

There are potential solutions to the recent News Genius-gate incident, and simple notifications can go a long way toward helping prevent online bullying behavior.

There has been a recent brouhaha on the Internet (see related stories below) because of bad actors using News Genius (and potentially other web-based annotation tools like Hypothes.is) to comment on websites without their owners’ knowledge, consent, or permission. It’s essentially the internet version of talking behind someone’s back, but doing it while standing on their head and shouting with your fingers in their ears. Because of platform and network effects, such rude and potentially inappropriate commentary can have far greater reach than the original website could ever give it. Naturally, in polite society, such bullying behavior should be curtailed.

This type of behavior is also not far removed from subtler phenomena like subtweets, or from the broader problems platforms like Twitter face in lacking proper tools to prevent abuse and bullying online.

A creator receives no notification if someone has annotated their content. –Ella Dawson

Towards a Solution: Basic Awareness

I think a major part of addressing the issue of abuse and providing consent is building in notifications, so that website owners will at least be aware that their site is being marked up, highlighted, annotated, and commented on in other locations or by other platforms. The site owner then knows what’s happening and can be provided with information and tools to allow or disallow such interactions, particularly the ability to block individual bad actors while still supporting positive additions, thought, and communication. Ideally this blocking wouldn’t occur site-wide, which many may be tempted to do now as a knee-jerk reaction to recent events, but would be fine-grained enough to filter out only the worst offenders.

To that end, it would be great if any annotating activity triggered trackbacks, pingbacks, or the newer and better Webmention protocol of the W3C, which comes out of the IndieWeb movement. Then site owners would at least have notifications about what is happening on their site that might otherwise be invisible to them. (And for the record, how awesome would it be if social media silos like Facebook, Twitter, Instagram, Google+, Medium, Tumblr, et al. would support webmentions too!?!)
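To make the webmention idea concrete, here is a minimal sketch of the first step a notifying client performs: discovering a site’s Webmention endpoint. Per the W3C Webmention spec, a receiver advertises its endpoint via an HTTP `Link` header (or an HTML `link`/`a` element) with `rel="webmention"`; the client then POSTs `source` and `target` URLs to that endpoint. The function name and the header-only focus are my own simplifications, not part of any particular implementation.

```python
import re
from urllib.parse import urljoin


def discover_webmention_endpoint(link_header: str, page_url: str):
    """Find a Webmention endpoint advertised in an HTTP Link header.

    A Link header looks like: <https://example.com/wm>; rel="webmention"
    Relative endpoint URLs are resolved against the page's own URL.
    Returns the absolute endpoint URL, or None if none is advertised.
    (A full client would also check the page's HTML for rel="webmention".)
    """
    for link in link_header.split(","):
        match = re.search(r'<([^>]*)>\s*;\s*rel="?([^";]*)"?', link.strip())
        # rel may hold several space-separated values, e.g. rel="webmention other"
        if match and "webmention" in match.group(2).split():
            return urljoin(page_url, match.group(1))
    return None
```

Once the endpoint is found, the sender simply POSTs two form-encoded parameters, `source` (the annotating page) and `target` (the annotated page), and the receiving site can surface that as a notification to its owner.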

Perhaps there’s also a way to implement filters or tools (à la Akismet on platforms like WordPress) that allow site owners to mark material as spam, abusive, or “other,” so that it is moved from public-facing to private: the original highlighter can still see their notes, but the platform isn’t allowing the person’s own website to act as a platform giving safe harbor (or reach) to bad actors.
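The flagging idea above might be sketched as a small state change on each annotation: flags accumulate, and past a threshold the annotation is demoted from public to private, so the annotator keeps their note but the page no longer amplifies it. The class, field names, and threshold here are all hypothetical illustrations, not any real platform’s API.

```python
from dataclasses import dataclass, field

# Hypothetical: number of distinct flaggers before an annotation is auto-hidden
FLAG_THRESHOLD = 2


@dataclass
class Annotation:
    author: str
    text: str
    visibility: str = "public"          # "public" or "private"
    flags: set = field(default_factory=set)  # (flagger, reason) pairs


def flag_annotation(ann: Annotation, flagger: str, reason: str) -> Annotation:
    """Record a spam/abuse flag; once enough distinct users flag the
    annotation, demote it so only its author still sees it."""
    ann.flags.add((flagger, reason))
    if len(ann.flags) >= FLAG_THRESHOLD:
        ann.visibility = "private"
    return ann
```

Requiring multiple distinct flaggers (rather than a single report) is one simple hedge against the filter itself being weaponized against good-faith annotators.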

Further, some site owners might appreciate gradable filters (G, PG, PG-13, R, X) so that they, their users, or even the parents of younger children can control what they’re willing to show on their site (or what their users choose to see).
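A gradable filter of this kind amounts to an ordered scale and a ceiling. A minimal sketch, where the annotation structure and `rating` field are assumptions for illustration:

```python
# MPAA-style ratings ordered from most to least broadly acceptable
RATING_ORDER = ["G", "PG", "PG-13", "R", "X"]


def visible_annotations(annotations, max_rating: str):
    """Return only annotations rated at or below the chosen ceiling.

    Each annotation is assumed to carry a "rating" key drawn from
    RATING_ORDER; the site owner (or reader) picks max_rating.
    """
    ceiling = RATING_ORDER.index(max_rating)
    return [a for a in annotations if RATING_ORDER.index(a["rating"]) <= ceiling]
```

The same ceiling could be set per-site by the owner and then tightened (but not loosened) per-reader, which is how a parent could impose a stricter view than the site default.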

Consider also annotations on narrative forms that might be posted as spoilers: how can these be guarded against? What happens when even a well-meaning actor posts an annotation on page two that foreshadows that the butler did it, thereby ruining the surprise on the last page? Certainly there’s some value in such a comment from an academic/literary perspective, but that doesn’t mean future readers will appreciate the spoiler. (Some CSS and a spoiler tag might easily and unobtrusively remedy the situation here?)

Certainly such options can be built into the annotation platform itself, alongside server-side controls for personal websites attempting to deal with flagrant violators and truly hard-to-eradicate cases.

Note: You’re welcome to highlight and annotate this post using Hypothes.is (see upper right corner of page) or on News Genius.

Do you have a solution for helping to harden the Internet against bullies? Share it in the comments below.