Category: Fake News

A coalition of internet giants has decided to meet to discuss cybersecurity and misinformation during November's US mid-term elections, but the government didn't make the invite list.

It isn't often the world's tech giants all get along, but this seems to be one area on which they can all agree. Something needs to be done to prevent a repeat of the controversy which has constantly stalked Donald Trump's Presidential win, and apparently it isn't even worth listening to the opinions of the government.

According to Buzzfeed, Nathaniel Gleicher, Facebook's Head of Cybersecurity Policy, called the meeting, inviting twelve other organizations, but the government was not on the list. The snub seems to follow a similar meeting in May, where each of the invitees left feeling somewhat disappointed with the government's contribution. We can only imagine Department of Homeland Security Under Secretary Chris Krebs and Mike Burham from the FBI's Foreign Influence Task Force simply sat in the corner, one holding a map and the other pointing to Russia shouting 'we found it, we found it, look, they don't even do water sports properly'.

“As I’ve mentioned to several of you over the last few weeks, we have been looking to schedule a follow-on discussion to our industry conversation about information operations, election protection, and the work we are all doing to tackle these challenges,” Gleicher wrote in an email.

The meeting will take place in three stages, featuring the likes of Google, Twitter, Snap and Microsoft. Firstly, each company will discuss the efforts it has been making to prevent abuse of its platform. Second will be an open discussion on new ideas. And finally, the thirteen organizations will discuss whether the meeting should become a regular occurrence.

While interference from foreign actors has proved to be a stick with which to poke the internet giants in the US, criticism of the platforms and their lack of action in tackling misinformation have been a global phenomenon. European nations have been trying to hold the internet players accountable for hate speech and fake news for years, but Trump's Presidential win is perhaps the most notable impact misinformation has had on the global stage.

With the mid-term elections a perfect opportunity for nefarious characters to cause chaos, the internet players will have to demonstrate they can protect their platforms from abuse. Should abuse surface again, not only would this be a victory for the dark web and the bottom dwellers of digital society, but it would also give losing politicians an opportunity to shift the blame for not winning. While this meeting is an example of industry collaboration, each company has also been launching its own initiatives to tackle the threat.

Facebook most recently revealed it scored users from one to ten on the likelihood they would abuse the content flagging system, and has been systematically taking down suspect accounts. Twitter has algorithms in place to detect potential dodgy accounts and limits the dissemination of posts. Microsoft recently bought several web domains registered by Russian military intelligence for phishing operations, then shut them down. Google has also been hoovering up content and fake accounts on its YouTube platform.

Whether the internet giants can actually do anything to prevent abuse of their platforms and the spread of misinformation remains to be seen. That said, keeping the bumbling, boresome bureaucrats out of the meeting is surely a sensible idea. Aside from the fact most government workers are as useful as a bicycle pump in a washing machine, Trump-infused, politically-motivated individuals are some of the most notable sources of fake news in the first place.

Attack is sometimes the best form of defence, and with Facebook's credibility being heavily questioned, the social media giant has decided to start tracking the trustworthiness of users.

Some might find the concept of being evaluated by Facebook somewhat uncomfortable, especially considering recent events which have made CEO Mark Zuckerberg and his cronies as trustworthy as a child-snatcher in a playground, but it is a necessary step to clean up the platform. In a sense, Facebook is building the foundations to crowdsource its fight against fake profiles and misinformation.

While Facebook does now employ a team of reviewers to judge whether posts fall outside the platform's rules, the battle against misinformation and hate speech starts with users flagging content they deem inappropriate. Of course, people's standards vary, which is the main difficulty in judging what should be appropriate for the world and what shouldn't, but the credibility score seeks to identify those who are trying to abuse the system.

According to the Washington Post, users will be scored between one and ten depending on the reliability of their feedback when flagging content as inappropriate. Details on how this is done are thin on the ground right now, intentionally so, but the aim is to find those who are deliberately flagging content as inappropriate when it isn't. Political opponents, for example, or perhaps those who would benefit financially from market confusion.

There are of course those who just find enjoyment in trolling others, and ideological warriors who simply don't want to accept certain truths, or who promote lies. After introducing the flagging feature in 2015, Facebook noticed certain people abusing the system, flagging content they simply didn't agree with. Disagreeing with an opinion is fine, that is the user's choice, but that user's opinion should not impact the credibility of a post when the judgement is not based on hard fact. By identifying those who flag content as inappropriate when it is not, Facebook's fact-checking team can become much more efficient.
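Facebook has not published how the score is calculated, so as a purely hypothetical sketch, one simple approach would be to treat a user's credibility as the precision of their past flags (how often reviewers upheld them), smoothed so that new users start mid-scale, and mapped onto the one-to-ten range the reports describe. The function name, the prior and the weighting below are all assumptions for illustration, not Facebook's actual algorithm.

```python
def credibility_score(upheld_flags: int, total_flags: int,
                      prior: float = 0.5, prior_weight: int = 10) -> float:
    """Hypothetical 1-10 reliability score for a user who flags content.

    Uses a smoothed precision (a Laplace-style prior) so that a user with
    no flagging history starts near the middle of the scale rather than
    at either extreme; the prior and weight are illustrative assumptions.
    """
    # Smoothed fraction of this user's flags that reviewers upheld
    precision = (upheld_flags + prior * prior_weight) / (total_flags + prior_weight)
    # Map the 0-1 precision onto the reported 1-10 scale
    return round(1 + 9 * precision, 1)
```

Under this sketch, a brand-new user lands at 5.5, a user whose flags are almost always upheld drifts towards ten, and a serial false-flagger sinks towards one, which is the behaviour the reports suggest the real system aims for.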

Unfortunately for Facebook, the task is much more complicated, as some of those who promote or flag content incorrectly do not fall into the standard fake news profile. Take eco-warriors who are trying to save the planet by attacking the reputation of oil companies. They might promote content which is inappropriate, or flag something simply because the company does not sit well with their principles. While they might be doing it for what they consider good reasons, it is still misinformation, and in the same category as more nefarious efforts. Fake news is fake news; there is no such thing as justification.

Such a strategy from Facebook just shows how complicated it has become to battle misinformation and maintain credibility. The algorithm will aim to identify these individuals and assess the risk associated with their activities. Twitter already does this to a degree; the assessed risk of a profile factors into how widely its posts are spread across the platform. It seems Facebook's algorithm will be used to aid its reviewers' assessments of flagged content, but also to contain the risk of nefarious actors.

As mentioned before, how the algorithm actually works is hazy right now. While this might make people uncomfortable, not knowing how they are being judged, it is completely necessary. If Facebook publicises the rules and how it comes to such conclusions, the same nefarious actors will find a way to beat the system, rendering it completely redundant.

Although the idea of having human fact-checkers will make Joe and Jane Bloggs feel safer on the platform, it is completely impractical. As the tsunami of misinformation continues to grow, artificial intelligence increasingly looks like the only option to keep such platforms honest and trustworthy.

In years gone by, governments used the idea of defending national security to justify invasions of foreign lands; nowadays it's turning into a free pass for rule-makers to do whatever they want.

George W. Bush used the idea of defending national security to hunt terrorists abroad, President Trump has somehow managed to use it to impose tariffs on steel and maple syrup imports, and now the Indian government is seeking to use the concept of defending its citizens to limit the use of social media.

Under Section 69A of the 2000 IT Act, the Indian government is investigating how it can block social media sites in the country, specifically targeting Facebook, WhatsApp, Instagram and Telegram. According to The Economic Times, the Department of Telecommunications (DoT) does have valid ambitions in mind, tackling fake news and child pornography, though limiting the means by which citizens can express themselves is a questionable way to go about it.

The letter to Indian telcos, sent on July 18, "requested to explore various possible options and confirm how the Instagram/Facebook/WhatsApp/Telegram and other such mobile apps can be blocked on internet."

As you can imagine, there has been resistance to the idea. The Associated Chambers of Commerce and Industry of India has said the "proposed measure to evolve mechanisms to block applications as a whole at the telecom operator level is excessive, unnecessary, and would greatly harm India's reputation as growing hub of innovation in technology," while the Supreme Court of India is also against the idea of social media monitoring. Last month, Chief Justice Dipak Misra and Justices AM Khanwilkar and DY Chandrachud warned a government plan to set up hubs for monitoring online data risked the creation of a 'surveillance state' and 'sheer intrusion into privacy'.

It is worth noting that blocking such apps would only happen in the most extreme circumstances. There have been several instances where fake news has been the cause of mob lynchings in the country, while the government is also concerned it could influence elections due to take place next year. These are all incredibly valid reasons, though accountability and justification are two words which need to play a significant role here.

Recently the Indian government released a paper to address the inadequacies in data protection and privacy legislation. The desire to update the regulatory and legislative environment in light of societal changes is commendable, though it has not been presented in the best manner. While there will be protections for the consumer in terms of data collection and processing, these rules can be lifted for purposes relating to the government.

We understand there are extreme circumstances where extreme actions need to be taken, though the lack of clarity surrounding accountability and justification leaves the process open to abuse. Considering the definition of defending national security varies greatly depending on your personality, experience and context, as many grey areas as possible need to be abolished. This is in defence of a citizen's right to privacy, and the same should be said for freedom of speech.

While many look down on social media, it is a platform for expression. Many are buoyed and empowered by such platforms, therefore the government needs to tread carefully to ensure the processes for suspending privacy and freedom of speech are fully justified.

As mentioned before, extreme circumstances often require extreme actions, though there needs to be a process to ensure there is no alternative course of action which could maintain these rights. Blocking should be the last possible option. With today's haphazard use of national security to justify any action, we worry this is being forgotten.