Internet intermediaries typically frame their opposition to legislation as protecting freedom of expression, but no right to free speech is absolute. This debate is a battle for control over information: What data should be collected and sold, what content should be permitted online, and who should decide?

Last week the U.S. Senate Intelligence Committee grilled senior legal counsel for Facebook, Google and Twitter about their roles in Russian efforts to influence the U.S. election.

From left: Facebook’s general counsel Colin Stretch, Twitter’s acting general counsel Sean Edgett and Google’s senior vice-president and general counsel Kent Walker are sworn in for a Senate Intelligence Committee hearing on Russian election activity and technology on Capitol Hill in Washington, D.C., on Nov. 1. (AP Photo/Jacquelyn Martin)

The executives’ testimony starkly reveals that the companies have few controls on the advertisements and so-called “fake news” that they accept on their platforms.

These candid admissions from the social media giants focus much-needed attention on the serious problems inherent with big intermediaries’ data-intensive business models.

Few safeguards, financial interest in status quo

As the U.S. election scandal shows, big social media platforms not only have few safeguards to prevent the deliberate manipulation of information, but they also have financial interests in maintaining the status quo. Unfettered flows of information and unconstrained advertising revenue are key to their business models. And this model is tremendously profitable.

Given the seemingly intractable challenges of regulating social media platforms, what can be done? In the United States, three members of Congress have proposed a bipartisan response, the Honest Ads Act, that would require platforms to publish information about their advertisers and maintain a public archive of political advertisements.

U.S. Democratic Sen. Richard Blumenthal discusses an online ad that falsely depicted actor and comedian Aziz Ansari urging people to vote in the 2016 election by posting to Twitter, at a Senate judiciary subcommittee hearing on crime and terrorism on Capitol Hill in Washington, D.C., on Oct. 31. (AP Photo/Andrew Harnik)

This is a step in the right direction, and the Canadian government should consider similar measures in advance of the 2019 federal election. Facebook has already announced a program, the Canadian Election Integrity Initiative, to counter the spread of misinformation that focuses on media literacy and training.

Deliberate misinformation

While these projects appear useful, they will likely do little to address the underlying problem: the bad-faith spread of online misinformation. That’s because the fundamental problem lies with Facebook’s business model. Efforts to constrain the flow of information, especially information that generates advertising revenue, run contrary to that model.

While the big platforms can afford to take a financial hit to restore their reputations and work to get rid of the worst offenders, they can’t fully solve the problem without fundamentally changing how — and with whom — they do business.

Facebook ads linked to a Russian effort to stir tensions on divisive social issues during the U.S. election campaign. (AP Photo/Jon Elswick)

In response to what has become a fundamental challenge to the survival of liberal democracy, Facebook, Twitter and Google have all committed to voluntarily implementing measures to address the spread of misinformation and to target accounts that troll other users with often bigoted, racist content.

Google, for example, is creating a public database of election advertising content that appears on its services. These companies prefer self-regulation to legislation, and they’ve lobbied the U.S. Federal Election Commission in the past to have online political advertising exempted from disclosure. It’s only the political pressure from the Senate inquiry that is forcing these platforms into action.

Internet companies, algorithms are black boxes

However, while Google, Facebook and Twitter are all creating algorithms to, in the words of Facebook CEO Mark Zuckerberg, “detect bad content and bad actors,” these algorithms operate as so-called “black boxes.” This means that the criteria the algorithms use to make decisions are off-limits to public scrutiny.

Facebook and Instagram ads tied to a Russian bid to influence the 2016 U.S. election, released by the U.S. House Intelligence Committee in Washington, D.C., Nov. 1. (AP Photo/Jon Elswick)

Is “trust us” a good enough response, given the problem? With so much at stake, it may be time for a fundamental rethink of how these indispensable 21st century companies are regulated and what they’re allowed to do.

At the very minimum, governments and citizens should reconsider whether the lack of oversight of how these companies shape our speech rights is really in the public interest.

Social media platforms “are an enabler of democracy,” says Margrethe Vestager, the European Union’s Commissioner for Competition, but we’re seeing that “they can also be used against our very basic beliefs in democracy.”