We tested Secret’s anti-bullying system… and it failed

Anonymous messaging app Secret has been under fire almost since the day it launched, with some critics arguing that it enables cyber-bullying. Company CEO David Byttow responded earlier this week with a Medium post about how Secret is working hard to curtail such abusive behavior via a community flagging system, a team of real-life post moderators, and an algorithm that uses “keywords and patterns” to flag posts that may violate Secret’s guidelines.

So we decided to put the system to the test, posting a message on Secret that mocked a fictional person, “Sophie R.” Several of my Fortune colleagues favorited the post and contributed their own negative comments about “Sophie R.” One colleague also flagged the post for bullying, approximately one hour after it first appeared. The post immediately disappeared from her app stream, although it remained in everyone else’s stream (and online via its permanent URL).

The post remained active last night, so I emailed a Secret spokeswoman for more information on the company’s internal standards for response time to flagged posts (without disclosing our test). I also wanted to know how many human moderators the company — which recently raised $25 million in new venture capital funding — employs.

The spokeswoman replied that moderation is a “major area of investment and focus for our company… we are ramping up our moderation team continuously, and every week we roll out moderation platform enhancements that allow us to react faster and smarter to the influx of new content being posted.” As for response times, she wrote: “We don’t share our standards publicly, but we’re always working on getting faster, and of course the team works on priority flags with higher urgency, a best practice from other social sites.”

When I woke up this morning, the post was still there. It also had been commented on by a couple of non-Fortune employees who most certainly do not know “Sophie R.” Apparently they were Secret users who just wanted to pile on.

The post remained active for 24 hours. Then we informed Secret of the situation via an email to its spokeswoman, and it almost immediately disappeared. We also received the following email from CEO David Byttow:

The post you referenced was detected by our automated system, but not fully taken down fast enough. There are several proprietary factors at play here too (e.g. location of post, reputation of author, etc) that are taken into account. In fact, we prioritize high schools closely based on location and network.

Nevertheless, this post should be taken down based on content (and was) as it’s a clear violation of our guidelines.

As far as speed is concerned, we’ve experienced a huge spike in growth (10-20x) in Brazil and Israel. Our team is scaling up across the board to handle it.

I also had a brief phone conversation with Byttow, who said that he could not discuss any details of the company’s “proprietary” moderation system or current response times (although he did say that the eventual goal was to achieve near-instant responses). He added via a subsequent email that Secret had “found a moderation bug in this specific case and are chasing it down… Will be fixed today.”

Obviously, this all is extremely troubling. We posted a message that was, by all objective accounts, an example of cyber-bullying. Were Sophie a real person using Secret, chances are that she would have seen it within 24 hours — given how it circulated among her “friends.” And Byttow isn’t claiming that Secret realized we were gaming the system. Instead, he acknowledged that the system failed. Moreover, the person who flagged the post had every reason to believe it had been removed, since it disappeared from her stream.