The European Union has demanded that social media companies take more active measures to spot and remove hateful and illegal content online, strongly encouraging them to adopt automated solutions.

The European Commission has released a series of “guidelines and principles” that urge tech giants to build systems that can automatically detect, remove and prevent the re-upload of certain content. In particular, the document focuses on doing away with posts promoting terrorism and extremism, child abuse, hate speech and copyrighted content. (Fake news, while considered a serious problem, is not the intended target of these guidelines.)

“The increasing availability and spreading of terrorist material and content that incites violence and hatred online is a serious threat to the security and safety of EU citizens,” the Commission explained in a press release.

“Given their increasingly important role in providing access to information, the Commission expects online platforms to take swift action over the coming months, in particular in the area of terrorism and illegal hate speech — which is already illegal under EU law, both online and offline.”

The request follows last year’s Code of Conduct—voluntarily adopted by Facebook, Twitter, YouTube and Microsoft—under which the companies agreed to remove hateful content within 24 hours. Still, the EU noted that a sizeable share of the illegal content published on those platforms (28 percent) remains online for up to a week before being taken down.

Hence the new, more stringent guidelines, which could be followed by legislative measures if the situation does not improve over the next six months.

However, the EC’s proposals on tackling illegal content online appear to extend guidance across a rather more expansive bundle of content, stating that the aim is to “mainstream good procedural practices across different forms of illegal content” — apparently seeking to roll hate speech, terrorist propaganda and child exploitation into the same “illegal” bundle as copyrighted content, which makes for a far more controversial mix.

The EU’s document revolves around three main principles: “detection and notification”, that is, tech companies should create automatic tools to detect illegal content and work closely with national authorities, making it easier to flag illegal content; “effective removal”, meaning bad content should be taken down swiftly; and “prevention of reappearance”, that is, the adoption of automatic tools to detect and take down illegal posts that have previously been removed.

Criticism has been voiced about the document’s lumping together of terrorist propaganda and copyrighted content, and about the general vagueness of the guidelines. It remains to be seen whether tech giants will be able to quickly devise and implement such automated anti-hate mechanisms.