The European Union is still not satisfied with how leading tech companies are handling the removal of illegal content online. On Thursday, it released a new set of guidelines for Facebook, Twitter, YouTube, and Microsoft, giving them just months to shape up—or face unspecified future regulation.

“With the surge of illegal content online, including online terrorist propaganda and xenophobic and racist speech inciting violence and hatred, online platforms carry an increasing societal responsibility in terms of protecting users and society at large and preventing criminals from exploiting the online space,” the EU wrote in a statement on Thursday.

The statement noted that the share of flagged illegal hate speech being removed has risen from 28 percent to 59 percent, but that 28 percent of removed content was taken down only after more than a week had passed, signaling a persistent lack of urgency.

Suggestions the EU listed in the latest guidelines included creating tools that make it easy for users to report illegal content, developing automated technology to target repeat offenders and illegal content, and cooperating with authorities, among other directives.

Some tech companies are already adding some of these tools to their arsenals. Facebook, for instance, announced in June that it was using an artificial intelligence system alongside human moderators to help identify extremist content and users on the platform. YouTube has a tool that identifies potential terrorist recruits. But while tech companies may have the means to develop technology and strategies to better handle illegal content online, that doesn't mean they are eager to deploy them. Platforms like Facebook and Twitter have proven hesitant to rigorously police users, at the risk of their worst offenders crying censorship.

And Thursday’s set of guidelines isn’t the first time the EU has urged tech companies to crack down on online harassment. In December of last year, the Commission announced that YouTube, Facebook, Microsoft, and Twitter were not adequately adhering to a code of conduct they voluntarily signed in May 2016, which asked them to deal with illegal hate speech on their platforms within 24 hours. It remains to be seen whether the threat of legislation will scare tech companies into ramping up their removal of illegal content. The Commission will meet in December to evaluate the results of the proposed guidelines and decide how to proceed.