
Twitter's 'LasVagas' hashtag fail shows the worst part of algorithms

On Tuesday, Twitter provided us with the perfect example of just how bad algorithms are at the news: #LasVagasShooting.

A close look reveals a simple misspelling: "Vagas" instead of "Vegas." It's not on purpose. "Las Vagas" offers no deeper meaning, no Spanish translation that makes sense. It's just a mistake made by a few people that is now being repeated by thousands more thanks to Twitter's automated system.

That system ended up pushing #LasVagasShooting to be a top trending hashtag on Tuesday morning and the top trend about the Sunday night mass shooting in which a gunman killed at least 59 people and wounded more than 500 others during a country music festival.

This may seem like a typo that can be brushed off, but it's a symbol of a larger problem. Tiny mistakes can lead to big misinformation problems when tech companies' powerful algorithms are involved—and even cause the system to help spread easily disproven information.

Twitter's errant hashtag comes after Google and Facebook had their own issues with quality control around news related to the Las Vegas shooting. Google surfaced misleading information from 4chan and Facebook put propaganda in its Safety Check feature. Of course, the companies would say they didn't do that on purpose—it's just how the algorithms work.

Twitter's "Vagas" hashtag is a prime example of why this is a problem. The first instances on Twitter of #LasVagasShooting come from regular users who appear to have simply made a mistake. At some point, those mistakes were evidently common enough to trigger Twitter's algorithm to pick up on the hashtag.

A Twitter spokesperson said this sometimes happens, and that it is due to users misspelling words. They also pointed to this passage in Twitter's help center: "The number of Tweets that are related to the trends is just one of the factors the algorithm looks at when ranking and determining trends. Algorithmically, trends and hashtags are grouped together if they are related to the same topic. For instance, #MondayMotivation and #MotivationMonday may both be represented by #MondayMotivation."
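Twitter has not published how this grouping works, but the failure mode is easy to reproduce. The toy sketch below (an assumption for illustration, not Twitter's actual algorithm) clusters similar-looking hashtags and labels each cluster with its most-used spelling. The function name, similarity threshold, and tweet counts are all invented; the point is that frequency, not correctness, decides which variant becomes the label.

```python
from collections import Counter
from difflib import SequenceMatcher

def group_and_label(hashtags, similarity=0.85):
    """Cluster near-identical hashtags, then label each cluster with
    whichever spelling was used most often. A toy illustration of why
    a popular misspelling can become the displayed trend."""
    counts = Counter(tag.lower() for tag in hashtags)
    clusters = []  # each cluster maps variant -> usage count
    for tag, n in counts.most_common():
        for cluster in clusters:
            representative = next(iter(cluster))
            if SequenceMatcher(None, tag, representative).ratio() >= similarity:
                cluster[tag] = n  # close enough: same topic
                break
        else:
            clusters.append({tag: n})
    # the display label is whichever variant was tweeted most
    return [max(cluster, key=cluster.get) for cluster in clusters]

# Hypothetical counts: the typo narrowly outnumbers the correct spelling.
tags = (["#lasvagasshooting"] * 5000
        + ["#lasvegasshooting"] * 3000
        + ["#prayforvegas"] * 1200)
print(group_and_label(tags))  # → ['#lasvagasshooting', '#prayforvegas']
```

Because "#lasvagasshooting" and "#lasvegasshooting" differ by one character, they fall into the same cluster, and the more-tweeted typo wins the label. No step in the pipeline ever asks which spelling is correct.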

Twitter does have some human oversight of its trends, but those people have little power to make changes, even to correct a misspelling, according to a former Twitter employee familiar with the process. This is meant to prevent human bias from influencing the trends, an accusation Twitter still routinely faces.

So conceivably, Twitter's system looked at the various Las Vegas shooting-related hashtags and chose the misspelling for whatever reason. And the people involved couldn't do anything about it.


This is bad enough on its own. Algorithms alone are not yet good enough to determine whether something is a mistake or intentional. Twitter's system couldn't tell that "Vagas" was an error rather than a pun or a deliberate reference. That's not really the algorithm's job.

Its job is to read the signals of how people are using and interacting with a hashtag, and to promote it accordingly. Enough people used "Vagas" without noticing the mistake that the system defaulted to the misspelling.

The hashtag even tripped up journalists. One TV reporter, who asked that their name not be included, tweeted the hashtag and said Twitter's autofill had suggested the misspelling before they noticed. (That reporter's tweet has since been deleted and does not appear in this article.)

Twitter's "Vagas" example is innocuous. It's not a piece of malicious Russian propaganda or the work of 4chan trolls. Nobody, or at least very few people, now thinks Las Vegas is spelled "Las Vagas." It's not the kind of thing that will launch a discussion about how algorithms influence the news.

But it should. This is how easy it is for simple mistakes to end up disseminated to millions of people. It is a simple and egregious example of why major platforms that rely on automation are so susceptible to making mistakes around news events. These systems are not yet capable of understanding human error, let alone more advanced efforts to purposefully game their systems. Failure is inevitable.

The answer is simple: more human editors. Tech companies have long resisted this (adding headcount for repetitive tasks runs against how these organizations operate), but they are slowly warming up to it. Facebook has made two separate announcements about adding people to vet content, though not necessarily for news. Other companies, including Snapchat and Apple, have embraced human editors and seen their news offerings benefit.

Serious change, however, is lacking. And considering the major roles that Google, Facebook, and Twitter now play in the lives of millions of people, their already irresponsible reliance on error-prone algorithms borders on being socially reprehensible. Companies that make billions of dollars in profit every quarter have to do better.
