Why are we seeing spikes in Facebook content marked as spam, or YouTube videos being pulled down?

The Covid-19 pandemic is leading to a worldwide period of social distancing, with as many people as possible asked to work from home. This helps slow the spread of the disease, hopefully easing the pressure on health services.

That means staff from the social networks are being sent home, including content moderators. Most content moderators work in call centre-like facilities where social distancing is not feasible.

Because, in many cases, the content moderators cannot work from home, the platforms are relying on machine-learning systems to police content.

Why can't the moderators work from home?

There are very serious privacy concerns about letting people working from home have access to other people's private posts. Remember the fuss about contractors having access to people's recordings from smart speakers? Very similar concerns apply here.

As a result, the moderation systems are built on the assumption that the moderators will be coming into work. They are locked down and monitored so that staff can't share things they see in people's private accounts. It appears that these systems have been built without remote access in mind, because the privacy concerns and lack of ability to supervise moderators made it seem like too much work for too little reward.

So, why are the automated systems struggling?

This is a really hard problem to solve.

The systems have to recognise patterns of misleading content and pull them down. That requires a whole bunch of insight into how people phrase things, about which sites can be relied on, and so on. And some of those are judgement calls. That's why a combination of AI tools and humans works so well. The AI handles the easy stuff, allowing humans to deal with the rest.
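To make that division of labour concrete, here is a minimal sketch of how such a triage might work. The score function, threshold values, and function names are hypothetical illustrations, not any platform's real system: the model acts alone on high-confidence cases and routes the borderline band to human reviewers.

```python
# Hypothetical sketch of AI-plus-human content triage.
# Thresholds and names are illustrative, not a real platform's values.

def triage(posts, score, remove_above=0.95, review_above=0.60):
    """Split posts into auto-removed, human-review, and approved."""
    removed, needs_review, approved = [], [], []
    for post in posts:
        s = score(post)  # model's estimated probability of a violation
        if s >= remove_above:
            removed.append(post)        # easy case: the AI acts alone
        elif s >= review_above:
            needs_review.append(post)   # judgement call: route to a human
        else:
            approved.append(post)
    return removed, needs_review, approved
```

With the human queue staffed, only the near-certain cases are removed automatically; everything in the uncertain middle band waits for a person's judgement.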

In the absence of the human element, the idea seems to be to turn up the strength of the AI element, risking false positives in the process. As Google said about YouTube:

As we do this, users and creators may see increased video removals, including some videos that may not violate policies. We won’t issue strikes on this content except in cases where we have high confidence that it’s violative. If creators think that their content was removed in error, they can appeal the decision and our teams will take a look. However, note that our workforce precautions will also result in delayed appeal reviews.

So, even when the systems are functioning well, we should expect to see more false positives and more misinformation slipping through. The general quality of moderation will go down.
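The trade-off described above can be sketched in a few lines. In this hypothetical illustration (the threshold values are invented for the example), "turning up the strength" of the automated element means lowering the removal threshold to cover the band humans used to review, so borderline posts that a person might have cleared are now removed automatically:

```python
# Hypothetical illustration of tightening automated moderation.
# With no human queue, the auto-removal threshold drops to cover
# the borderline band that reviewers previously handled.

def auto_removed(scores, threshold):
    """Return the posts (by score) that automation removes outright."""
    return [s for s in scores if s >= threshold]

scores = [0.98, 0.75, 0.65, 0.40, 0.10]

normal = auto_removed(scores, 0.95)  # humans review the 0.60-0.95 band
crisis = auto_removed(scores, 0.60)  # automation now covers that band too
```

In the "crisis" setting the two borderline posts (0.75 and 0.65) are removed without human review, which is exactly where false positives come from.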