Big social media companies like Facebook, Twitter and YouTube should be regulated like the financial sector, with regular audits and more transparency about their internal processes for handling harmful content and hate speech. That's the main thrust of a report commissioned by the French government.

The report — whose conclusions are non-binding but could feed into legislation being debated in France — comes after a monthslong experiment during which 10 French officials inspected Facebook's internal processes for managing content, and as Chief Executive Mark Zuckerberg sat down with Emmanuel Macron in his office in Paris on Friday.

The Facebook boss, whose company faces a litany of investigations on both sides of the Atlantic, including the threat of a fine as high as $5 billion in the United States, welcomed the French initiative — even if his lieutenants later argued with its finer points.

“Hopefully, this regulatory framework won’t be just a national model for France, but it can be worked as a framework for the EU overall as the next European Parliament comes,” Zuckerberg told a small group of journalists in Paris after his meeting with Macron.

"Globally, the French approach is one of the most future-looking approaches we have seen" — Richard Allan, Facebook's chief lobbyist

France's proposed approach, laid out in a 32-page report authored by Benoit Loutrel, a former Google employee and telecoms regulator official, emphasizes flexibility and does not call for penalizing platforms over individual failures to police content. Instead it urges much more regular oversight by regulators, focused mainly on the processes and resources platforms use to identify and remove hate speech.

In that sense, the French idea embraces some measure of self-regulation by large platforms — even as the report maintains that a social media company could face fines of up to 4 percent of its global annual revenue for serious and repeated breaches of a yet-to-be-voted-on hate speech law.

"Today's self-regulation approach is worth looking into because it shows that the platforms can be part of the solution for the problems," the report's introduction reads.

But, the report adds: "It lacks credibility ... The public response needs to be a balance between a repressive policy, necessary to fight efficiently against authors of abuse, and the idea of more responsibility for social media platforms."

On the front line

The effort to draw up a new roadmap for policing internet companies is the latest offensive against Silicon Valley from Macron, who came to power vowing to make France a "startup nation" but has become better known for his efforts to rein in big tech firms.

He kicked off the Facebook embedding experiment, the first of its kind, in November following an initial meeting with Zuckerberg in Paris last year, when the French president chided him and other tech entrepreneurs for not doing more good for society.

In addition to legislation on fake news, France has championed a digital tax on the European stage and set it in motion nationally following the move's failure at the EU level. A member of Macron's party is also pushing a law against online hate speech that would force platforms to bring down flagged posts within 24 hours — one that is likely to be informed by Friday's report.

Facebook executives reacted to the French report with cautious optimism. During a briefing in Paris at which Zuckerberg issued a short statement, top executives lined up to praise the report while qualifying some of its sharper recommendations.

"Globally, the French approach is one of the most future-looking approaches we have seen," Richard Allan, Facebook's chief lobbyist, told a small group of journalists. "Other models are very much more focused on punishing companies for content posted by individuals, which is backwards-looking.”

One point of contention was the 4 percent fine for serious violations. Allan said it would make more sense to tie the fine to a "national framework rather than a global one" — which would shrink the revenue base from which a sanction is calculated.

On the transparency requirements tied to regulatory oversight, Allan said: "We don't object to the principle, but between the report and how it's written into law, we'd want to understand exactly what information the regulator wants."

The next step is likely to be Macron proposing some version of the recommendations at the level of the G7 group of industrialized nations, whose digital ministers are gathering in Paris.

Macron and New Zealand Prime Minister Jacinda Ardern are both behind the so-called Christchurch call to remove terrorist content from social media, in the wake of an attack that killed more than 50 Muslim worshipers and was streamed live on Facebook.

While France's initiative focuses on hate speech, it might be widened to cover other forms of harmful content, including terrorist propaganda and misinformation, according to Secretary of State for Digital Affairs Cédric O.

Tackling hate speech online is a growing challenge for tech companies and governments alike. While the European Commission has for the last five years declined to go beyond a voluntary code of conduct signed by the major tech companies, Germany has adopted a law forcing platforms to remove illegal content within 24 hours.

How the report's recommendations will interact with France's future hate speech legislation remains unclear. The draft text will be on the agenda of the National Assembly in the coming months. The prime minister's office and the legislation's rapporteur, Laetitia Avia, will assess which recommendations could be added to the text.

Facebook "hopes the conclusions will be taken into account in the debate around the Avia law," the company told Le Monde.