Silicon Valley was once again on the spot in Europe last week. French President François Hollande said on Tuesday that Google and Facebook should be treated as “accomplices” of hate speech if they fail to block “extremist” content. A day later, the European Union’s counterterrorism chief said it was up to governments to flag “terrorist-related” videos on YouTube.

All this talk, as well as the disturbing proliferation of terrorist propaganda online, has raised questions about how sites like YouTube can screen what users upload.

At the moment, much of this process relies on users. “This is a very human moment, where people look at something and say, 'That is completely inappropriate for our community,'” says Karen North, director of the Annenberg Program on Online Communities at USC Annenberg. It’s the community’s responsibility to alert YouTube, she says.

Among the challenges of policing YouTube's content is the sheer volume of daily uploads — YouTube says 48 hours' worth of video is uploaded every minute. There's also the fact that YouTube is built on user-generated content. Given this, North says it’s impractical to expect Google to monitor each upload and decide whether it's appropriate.

This is not a new debate. YouTube found itself in a similar position back in 2012, when a video on the site was alleged to have sparked violence in the Middle East. There were calls then for Google to curate its content far more aggressively.

There have also been suggestions that governments should become more involved in this process. But North says such a development, especially in the U.S., would only result from “a long, complex negotiation.”