Really, the shifting media landscape makes perfect sense – we were once reliant on mainstream media outlets to tell us what was happening in the world, but in the modern, connected age, we're all able to share news and updates with each other just as fast, and we instinctively place more trust in information shared by those we know. This inevitably means that some beliefs and movements are gaining momentum because they're able to generate widespread reach, while the increased emphasis on digital news content has put more pressure on traditional outlets to produce sensationalized, divisive content to fuel clicks.

That, in turn, further solidifies and justifies such movements. So yes, Facebook can, and does, empower politicized groups, no question. Now to work out what we do to stop it.

This is one of several key questions Facebook's looking to examine in a new series they're calling 'Hard Questions'.

"As more and more of our lives extend online, and digital technologies transform how we live, we all face challenging new questions – everything from how best to safeguard personal privacy online to the meaning of free expression to the future of journalism worldwide. We want to broaden that conversation. So today, we're starting a new effort to talk more openly about some complex subjects."

Among the topics Facebook's looking to address with this new series are:

How should platforms approach keeping terrorists from spreading propaganda online?

After a person dies, what should happen to their online identity?

How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what's controversial, especially in a global community with a multitude of cultural norms?

How can we use data for everyone's benefit, without undermining people's trust?

How should young internet users be introduced to new ways to express themselves in a safe environment?

These are definitely some serious considerations, and it'll be interesting to see just how much Facebook is willing to probe each, particularly given that some focus on the methods which directly contribute to how the platform generates revenue – most notably, the questions around data collection and usage.

In the first instalment, Facebook has outlined some of the key elements of how they tackle terrorism and extremist content on their platform, including their latest advances in artificial intelligence and machine learning, which have been designed to detect and weed out questionable content.

Facebook's summary is surprisingly open, providing overviews of the strengths, and limitations, of their systems for detecting such behavior.

"We've been cautious, in part because we don't want to suggest there is any easy technical fix. It is an enormous challenge to keep people safe on a platform used by nearly 2 billion every month, posting and commenting in more than 80 languages in every corner of the globe. And there is much more for us to do."

Facebook notes that they have a team of more than 150 people solely focused on detecting and removing terrorism- and extremism-related content, alongside their advancing machine learning efforts, which are constantly evolving. Through this, they're hoping to make Facebook "a hostile place for terrorists" and eliminate misuse. As the platform expands, so too do the challenges, but it's an interesting insight into Facebook's perspective on this key area.

Twitter, too, has outlined how it's working to combat bots and platform manipulation:

"We're working hard to detect spammy behaviors at source, such as the mass distribution of Tweets or attempts to manipulate trending topics. We also reduce the visibility of potentially spammy Tweets or accounts while we investigate whether a policy violation has occurred. When we do detect duplicative, or suspicious activity, we suspend accounts. We also frequently take action against applications that abuse the public API to automate activity on Twitter, stopping potentially manipulative bots at the source."
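The "duplicative activity" signal Twitter describes can be illustrated with a toy sketch – note this is purely hypothetical (function names and thresholds are invented here, not Twitter's actual systems): identical tweet text posted verbatim by many distinct accounts is a simple marker of mass distribution.

```python
from collections import defaultdict

# Toy illustration only (hypothetical threshold, not Twitter's real system):
# flag tweet texts posted verbatim by many distinct accounts, a simple
# signal of the "mass distribution of Tweets" described above.
def flag_mass_distribution(tweets, min_accounts=3):
    """tweets: list of (account_id, text) pairs."""
    accounts_by_text = defaultdict(set)
    for account, text in tweets:
        # Normalize so trivial case/whitespace changes don't evade detection
        accounts_by_text[text.strip().lower()].add(account)
    # Flag any text pushed by min_accounts or more distinct accounts
    return {text for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}

tweets = [
    ("a1", "Buy followers now!"),
    ("a2", "Buy followers now!"),
    ("a3", "Buy followers now!"),
    ("a4", "Just had a great coffee"),
]
print(flag_mass_distribution(tweets))  # → {'buy followers now!'}
```

Real systems would of course weigh many more signals (account age, posting cadence, API usage), but the core idea of grouping duplicative content across accounts is the same.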

Such efforts could extend beyond just those bots used for political manipulation, with many Twitter users still buying followers, Likes and retweets. There are also apps like Thunderclap, which has gained momentum of late. Thunderclap enables users to sign up to share a specific tweet or post at an assigned time of day, which helps boost promotion and could, potentially, manipulate Twitter's Trending Topics: a heap of people tweeting about the same thing all at once indicates a trend, which gets onto the 'Trending' list, boosting promotion further.
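The coordinated-burst mechanic described above – many accounts posting the same hashtag at one assigned time – can be sketched as a simple spike detector. Everything here (names, thresholds, timestamps) is a hypothetical illustration, not any platform's actual trending algorithm.

```python
# Toy sketch of the trend-manipulation signal described above (all
# thresholds hypothetical): a hashtag posted by an unusually large number
# of distinct accounts inside one short window looks like a coordinated burst.
def burst_hashtags(posts, window_start, window_end, min_accounts=100):
    """posts: list of (timestamp, account_id, hashtag) tuples."""
    accounts = {}
    for ts, account, tag in posts:
        if window_start <= ts < window_end:
            accounts.setdefault(tag, set()).add(account)
    # Flag hashtags pushed by min_accounts or more distinct accounts
    return sorted(tag for tag, accs in accounts.items()
                  if len(accs) >= min_accounts)

# 150 distinct accounts all posting #promo within the same minute,
# versus one organic post about coffee
posts = [(1000 + i % 60, f"acct{i}", "#promo") for i in range(150)]
posts += [(1005, "acct_x", "#coffee")]
print(burst_hashtags(posts, 1000, 1060, min_accounts=100))  # → ['#promo']
```

Distinguishing a coordinated burst from a genuine breaking-news spike is the hard part, which is presumably why Twitter pairs signals like this with account-level and API-abuse checks.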

It's difficult to know just how far Twitter's efforts might extend, but all such uses of their systems could come under increased scrutiny – worth considering for those who are employing such tactics.

While the impacts of both Facebook and Twitter's efforts won't be clear for some time, it is interesting to note how the major networks are looking to address such issues, and to consider their flow-on effects and how they can counter misuse. Facebook initially played down their influence over public opinion, but mounting pressure has forced them to act. Hopefully, through this, we'll see new measures which enable all platforms to weed out questionable behaviors and support free expression without also fueling anti-social and destructive elements.

But really, the right balance is virtually impossible to strike. Every effort on this front should be supported and encouraged, but it's difficult to have a platform that facilitates global, real-time expression within any set of defined parameters around what that means.

The discussions, however, are important, and are worth putting forward.