The report cited examples such as MPs receiving antisemitic abuse online, Facebook hosting sexualised images of children, and YouTube hosting terrorist recruitment and neo-Nazi videos.

Social media companies, the MPs said, should help fund the Metropolitan Police's counter-terrorism unit, which finds extremist content online on their behalf. That unit is currently funded by UK taxpayers and flags hateful content to Facebook, Twitter, and Google.

"Football teams are obliged to pay for policing in their stadiums and immediate surrounding areas on match days. Government should now consult on adopting similar principles online — for example, requiring social media companies to contribute to the Metropolitan Police's CTIRU [counter-terrorism internet referral unit] for the costs of enforcement activities which should rightfully be carried out by the companies themselves."

The MPs also proposed "meaningful fines" for tech giants that fail to take down illegal content promptly, and quarterly reports showing how much hate speech they had removed from their platforms.

Committee chair Yvette Cooper added:

"The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe."

At the moment, it doesn't look like the government will change the law to force tech giants to take hate speech more seriously. According to the report, MPs have pressured the trio to do more in a series of meetings. Last month, the three firms promised to develop new tools to identify terrorist propaganda online after meeting with home secretary Amber Rudd.

Facebook, Twitter, and Google did not immediately respond to a request for comment.