Should Social Media Platforms Be Regulated?

Yes, they should be. The more difficult question is how to do so effectively while differentiating between their various forms. Concerns about the public regulation of social media platforms emerged after the 2016 U.S. presidential election and the U.K. Brexit referendum, and have since been articulated by public critics such as Roger McNamee, Renée DiResta, Dipayan Ghosh, U.S. presidential candidate Elizabeth Warren, and even comedian Sacha Baron Cohen. I will synthesize a number of their arguments and add my own perspective.

Platforms are business models that create marketplaces to match different parties with complementary interests, relying on what economists call ‘indirect network effects.’ Dating sites, eBay, Facebook, YouTube, and operating systems such as Android and iOS are all different types of platforms. Social media platforms connect consumers with digital content creators and typically monetize their interactions through advertising revenue. Since platforms do not generally create their content, they contend that they are not responsible for what users produce and are thus exempt from the libel, defamation, and other laws and regulations that govern traditional media like newspapers and television. In other words, they are platforms for free speech and assume no responsibility for what their users communicate.

This claim is correct to the extent that they create little of their own content (the degree varies). It is incorrect, however, to claim that they exercise no editorial control over that content. Traditional television and newspapers are broadcasters: they provide the same content to a broad, general audience. Social media platforms, by contrast, are ‘narrowcasters.’ Because they can pinpoint who you are, their algorithms select content tailored to what they think you want to see and hear, making frequent, personalized editorial decisions based on your browsing behavior on their platforms and on other websites (e.g., if you use Facebook or Google to log in), as well as geolocation data from your cell phone.

Social media platforms are also what economists call ‘natural monopolies,’ though not in the sense of utility companies providing universal services. Rather, all parties benefit from the aggregation of supply and demand, greater liquidity, and lower search costs when activity is concentrated on a few large platforms. For example, if you want to sell a piece of rare memorabilia, you are better off if all potential buyers are on a single platform. A similar logic applies if you are buying, posting, or sharing content. Consequently, when platform businesses such as Facebook, Twitter, or eBay start out, fast growth is imperative in a winner-take-all competition. Explicit rules about what is or is not allowed on these platforms are implemented only when necessary, because such rules can constrain the platform’s expansion and are expensive to enforce. Remember the early days of YouTube, when users could post any type of music, TV show, or film? Only after significant legal threats from the media industry did the video-streaming platform begin to restrict copyrighted material.

A complicating factor is that platforms choose this content to maximize user time (i.e., ‘stickiness’) on their sites. There is an adage in the media world: ‘If it bleeds, it leads.’ Sensationalist, violent, or otherwise scandalous content provokes stronger emotions and simply sells more newspapers or advertising. Hence, there is an acknowledged tendency for social media to surface emotionally explosive content that speaks to convictions about politics, religion, or other prickly topics. Users then share this content within their networks in exchange for likes and further shares, a currency of status and self-affirmation. Some have termed the hyper-personalization bias of the platforms’ algorithms ‘filter bubbles’ or ‘echo chambers,’ and the fact that users are more likely to like and share the most polarizing content has been called the ‘amplification effect.’

While it might be unfair to hold Facebook or Twitter fully responsible for recent electoral results in the U.K. and the U.S., their effects on the rise of populism and fringe movements, as well as on the divisive, tribalistic behavior we often see online, are a serious concern for sociologists and political scientists. Diversity of opinion is certainly positive and should be celebrated. But when platforms like Facebook are not held responsible for the accuracy of the content they present, they have no incentive not to show you the most outrageous or fake material. Excessive social polarization is undesirable because it erodes the democratic institutions that protect free speech and other basic rights. Without some basic consensus on the common objectives of social welfare, democracies weaken and become dysfunctional or corrupt. Just as a chemical company must abide by environmental regulations, the social costs associated with social media platforms should be controlled to mitigate their worst effects.

The final concern about social media platforms is that, by collecting so much demographic and behavioral data from our online activities, they can build a very precise digital model of who we are, with significant predictive accuracy. They then sell these profiles, our digital twins or avatars, to advertisers both inside and outside their platforms, with little explicit knowledge or consent from their users. Moreover, users have no rights over their metadata. It is a completely asymmetric relationship, a Faustian bargain in which, in exchange for search, networking, and geolocation services, we as users allow these platforms into the most intimate corners of our lives with little understanding of how, or which of, our secrets they sell. This paradox will only deepen as the adoption of wearable technologies grows and the boundaries between the regulated healthcare sector and the unregulated consumer sports sector blur. Here, the need for public conversations about additional regulation will almost certainly increase.

Much of the public regulation we know in the West emerged during the Great Depression, when poverty and social malaise spread like a virus and, in response, new concepts of basic public health and social welfare took hold. Universal access to sanitation, clean water, electricity, telecommunications, education, and culture lifts everyone in society to a more prosperous life; if my neighbor is healthier, more sanitary, better educated, and economically secure, in all likelihood, so am I. The consequences of social media platforms operating at quasi-monopolistic scale are only now beginning to be understood. They offer us many outstanding services that we could not imagine living without today. But, as in many industries, there are undesirable consequences that work against the greater social welfare. Serious conversations about how social media platforms should be regulated to minimize their social costs are critically needed.