This afternoon, Twitter suspended the account of noted troll and former pharma CEO Martin Shkreli. Over the weekend, Shkreli aggressively directed the attention of his widely followed Twitter account at Lauren Duca, a freelance journalist, after seeing her on television.

In the span of a few days Shkreli 1) direct-messaged Duca to invite her to be his date at the inauguration, 2) changed his Twitter bio to read “i have a small crush on @laurenduca (hope she doesn’t find out),” 3) created a collage of images of Duca as his Twitter header, and 4) changed his profile picture to a doctored image of Duca and her husband, with Shkreli’s face photoshopped over Duca’s husband’s. Duca, who has over 130,000 Twitter followers, posted Shkreli’s bio and images around 11 a.m. Sunday. They went viral almost instantly, and Shkreli was banned in just over two hours. “The Twitter Rules prohibit targeted harassment, and we will take action on accounts violating those policies,” a Twitter spokesperson told BuzzFeed News.

To Twitter’s credit, the company responded quickly to Duca’s plea and the subsequent tweets about Shkreli’s behavior. But Twitter’s vague, one-sentence justification for the suspension — the result of its long-stated policy not to comment on individual accounts for the privacy of its users — highlights a broader concern for the company in 2017: Twitter, despite its attempts to police its platform, appears unwilling to engage in the necessary transparency surrounding the harassment of its users.

Part of what makes online harassment such an intractable problem is that it is difficult to pin down with a tidy definition. That is precisely why more radical transparency surrounding abuse suspensions is crucial. Shkreli’s behavior appears equal parts creepy, stalkerish, and targeted. While the photos and messages are not explicitly threatening, to an outsider the harassment is implied. Certainly Duca appears to have viewed the actions that way: she responded publicly to a direct message from Shkreli inviting her to be his date to the inauguration with “I would rather eat my own organs” before reporting his behavior to Twitter.

From a Twitter abuse perspective, however, Shkreli’s tweets occupy a fraught gray area: behavior that is morally objectionable, but perhaps not always enforceable. Twitter’s rules — well known among the platform’s trolls — are reasonably specific, but still open to interpretation. In Shkreli’s case, Twitter interpreted his Duca photoshops and messages to her as targeted harassment. But it might just as easily have 86’d Shkreli for photoshopping his head onto Duca’s husband’s had it interpreted that tweet as a violation of its impersonation rule.

Or Twitter could have interpreted Shkreli’s tweets as nonthreatening altogether. Historically, Twitter has allowed photoshopped images of ISIS beheadings to stay up for days without banning users. The company was also slow to root out attempts by trolls to disenfranchise black and Latino voters with misinformation before last year’s election. Just last month, Twitter chose not to take action against Mike Cernovich after he repeatedly insinuated (with zero substantive evidence) that online comedian Vic Berger IV was a child molester. After a Twitter fight with Berger, Cernovich — a popular blogger loosely associated with the alt-right (and, early on, a frequent #Pizzagate tweeter) — implied via Twitter and Periscope that Berger was involved in an online pedophilia ring. Simply put, Twitter has allowed users to stay on its platform after far more flagrant behavior.

For Twitter — which has historically aimed to intervene as little as possible in the affairs of its users — each suspension sets a precedent. But these precedents are largely unknown to a big portion of the company’s users (indeed, 90% of respondents in a BuzzFeed News survey of 2,700 Twitter users said Twitter didn’t do anything when they reported abuse). Some form of transparency — disclosing the specific tweets a user was suspended for, for example, or how Twitter chose to interpret a particular guideline in its rules — could simultaneously deter trolls and act as a manual for users to report violations with more clarity. Most importantly, it would allow users, journalists, and anyone else to hold Twitter accountable for its seemingly inconsistent enforcement decisions.

Greater transparency is arguably in Twitter’s best interest, too. Take the example of Richard Spencer, a prominent white nationalist and leading figure in the alt-right movement, who was suspended back in November during a crackdown on alt-right accounts. Twitter’s move was criticized by some as an example of overly aggressive censorship. While Spencer might be controversial, they argued, he didn’t appear to have violated Twitter’s abuse rules. It turns out those critics were right. A month later, when Spencer's account was reinstated, Twitter revealed that he had been banned on a technicality — for violating the company’s multiple accounts rule.

Twitter’s “no comment on individual accounts” policy, no matter how well-intentioned, can sometimes make enforcement appear even more arbitrary than it already is. An alternate defense by Twitter — and other tech companies combating abuse — is that a lack of transparency makes it harder for trolls to exploit the rules. But opaque and seemingly inconsistent enforcement opens Twitter’s rules up to exploitation by bad actors anyway — indeed, an effective trolling tactic is to turn Twitter’s harassment-reporting tools against the very people who are fighting trolls or being trolled.

In late December, when Twitter CEO Jack Dorsey tweeted out an open call for suggestions for improving the platform, a number of users asked that the company be more vigilant and consistent on abuse. Dorsey tweeted, “we definitely need to be more transparent about why and how. Big priority for this year.” He noted as well that the company was “working to better explain and be transparent and real-time about our methods.”

Just a week into 2017, Dorsey and Twitter had a chance to do just that, but chose not to. Twitter’s response, like its harassment rules, is open to interpretation by its millions of users. One interpretation is that Shkreli’s suspension is a promising sign of faster, more vigilant enforcement to come. The other? That Twitter reacted to mollify the viral outrage over the harassment of a prominent journalist by a prominent troll — a quick and easy band-aid on a high-profile wound.

Charlie Warzel is a Senior Technology Writer for BuzzFeed News and is based in Missoula, Montana.