After barring Logan Paul earlier today from serving ads on his video channel, YouTube has now announced a more formal and wider set of sanctions it’s prepared to levy against any creator who posts videos that are harmful to viewers, others in the YouTube community, or advertisers.

As it has done with Paul (on two occasions now), the site said it will remove monetization options on the videos, specifically access to advertising programs. But on top of that, it’s added in a twist that will be particularly impactful given that a lot of a video’s popularity rests on it being discoverable:

“We may remove a channel’s eligibility to be recommended on YouTube, such as appearing on our home page, trending tab or watch next,” Ariel Bardin, Vice President of Product Management at YouTube, writes in a blog post.

The full list of steps, as outlined by YouTube:

1. Premium Monetization Programs, Promotion and Content Development Partnerships. We may remove a channel from Google Preferred and also suspend, cancel or remove a creator’s YouTube Original.

2. Monetization and Creator Support Privileges. We may suspend a channel’s ability to serve ads, ability to earn revenue and potentially remove a channel from the YouTube Partner Program, including creator support and access to our YouTube Spaces.

3. Video Recommendations. We may remove a channel’s eligibility to be recommended on YouTube, such as appearing on our home page, trending tab or watch next.

The changes are significant not just because they could really hit creators where it hurts, but because they also point to a real shift for the platform. YouTube has long been known as a home for edgy videos filled with pranks and potentially offensive content, made in the name of comedy or freedom of expression.

Now, the site is turning over a new leaf, using a large team of human curators alongside AI to track what’s being posted; videos that violate YouTube’s advertising guidelines, or pose a threat to its wider community, now stand a much bigger chance of being flagged and penalized.

“When one creator does something particularly blatant—like conducts a heinous prank where people are traumatized, promotes violence or hate toward a group, demonstrates cruelty, or sensationalizes the pain of others in an attempt to gain views or subscribers—it can cause lasting damage to the community, including viewers, creators and the outside world,” writes Bardin. “That damage can have real-world consequences not only to users, but also to other creators, leading to missed creative opportunities, lost revenue and serious harm to your livelihoods. That’s why it’s critical to ensure that the actions of a few don’t impact the 99.9 percent of you who use your channels to connect with your fans or build thriving businesses.”

The moves come at a time when the site is making a much more concerted effort to raise the overall quality of what is posted and shared and viewed by millions of people every day, after repeated accusations that it has facilitated a range of bad actors, from people peddling propaganda to influence elections, to those who are posting harmful content aimed at children, to simply allowing cruel, tasteless and unusual videos to get posted in the name of comedy.

The issue seemed to reach a head with Paul, who posted a video in Japan in January that featured a suicide victim, and has since followed up with more questionable content presented as innocuous fun.

As I pointed out earlier today, even though he makes hundreds of thousands of dollars from ads (the exact amount is unknown and has only been estimated by various analytics companies), removing ads was only a partial sanction, since Paul monetizes in other ways, including merchandising. So it’s interesting to see YouTube adding further ways of sanctioning creators that strike at their very virality.

As in the case of Paul, YouTube makes a point of noting that the majority of people who post content on its platform will not be impacted by today’s announcement, because their content is not on the wrong side of acceptable. These sorts of sanctions, it said, will be applied as a last resort and will often not be permanent, lasting only until the creator removes or alters the offending content. It will be worth watching how and if this impacts video content overall on the platform.

In an effort to regain advertisers’ trust, Google is announcing what it says are “tough but necessary” changes to YouTube monetization.

For one thing, it’s setting a higher bar for the YouTube Partner Program, which is what allows publishers to make money through advertising. Previously, they needed 10,000 total views to join the program. Starting today, channels also need to have 1,000 subscribers and 4,000 hours of view time in the past year. (For now, those are just requirements to join the program, but Google says it will also start applying them to current partners on February 20.)
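A minimal sketch of that bar, reading the “also” above as the new thresholds stacking on top of the old 10,000-view requirement (the function name is hypothetical; the real evaluation is Google-internal):

```python
# Hypothetical helper modeling the reported Partner Program thresholds.
def meets_ypp_bar(total_views: int, subscribers: int, watch_hours_past_year: float) -> bool:
    """True if a channel clears the reported joining requirements."""
    return (
        total_views >= 10_000            # previous requirement
        and subscribers >= 1_000         # new: subscriber floor
        and watch_hours_past_year >= 4_000  # new: watch-time floor
    )
```

Under this reading, a channel with plenty of views but only a few hundred subscribers would no longer qualify.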

This might assure marketers that their ads are less likely to run on random, fly-by-night channels, but as Google’s Paul Muret writes, “Of course, size alone is not enough to determine whether a channel is suitable for advertising.”

So in addition, he said:

We will closely monitor signals like community strikes, spam, and other abuse flags to ensure they comply with our policies. Both new and existing YPP channels will be automatically evaluated under this strict criteria and if we find a channel repeatedly or egregiously violates our community guidelines, we will remove that channel from YPP. As always, if the account has been issued three community guidelines strikes, we will remove that user’s accounts and channels from YouTube.

Muret also described changes planned for the more exclusive Google Preferred program, which is supposed to be limited to the best and most popular content. Vlogger Logan Paul was part of Google Preferred until the controversy over his “suicide forest” video got him kicked out last week — a story that suggests some of the limitations to Google’s approach.

Moving forward, Muret said the program will offer “not only … the most popular content on YouTube, but also the most vetted.” That means everything in Google Preferred should be manually curated, with ads only running “on videos that have been verified to meet our ad-friendly guidelines.” (Looks like all those new content moderators will be busy.)

Lastly, Muret said YouTube will be introducing a new “three-tier suitability system” in the next few months, aimed at giving marketers more control over the tradeoff between running ads in safer environments versus reaching more viewers.

The basic idea is simple, and sound: someone searching for “help quitting pain pills” or something like that should be connected with the appropriate resources, and ostensibly that’s what help lines like those investigated by the Sunday Times’ undercover crew do.

But profit-oriented private clinics, which charge tens of thousands of dollars for their services, have begun offering huge referral rewards for sending patients their direction. And the referrers, in order to snag these people in need before the competition, have begun paying more and more for prime Google placement.

The report shows that referrers were paying as much as £200, around $270, for a single clickthrough. But that’s just a drop in the bucket if they successfully refer someone to a clinic, earning ten or twenty grand. Plus it bought them consultation with Google representatives who reportedly helped keep them at the top of the results.

It may be that the people looking for help did eventually find it. But naturally, they were not informed of any of these financial arrangements.

Might be nice to know that the ostensibly objective help line you’re calling is earning huge commissions from the places it refers you to, right? That’s why “patient brokering,” as it’s sometimes called, is banned in much of the US. And why Google doesn’t allow these kinds of ads here; it banned the whole category in September.

In a statement, Google said that it had today decided to make that ban apply to the UK as well.

Substance abuse is a growing crisis and has led to deceptive practices by intermediaries that we need to better understand. In the US, we restricted ads entirely in this category and we have decided to extend this to the UK as we consult with local experts to update our policy and find a better way to connect those that need help with the treatment they need.

One needn’t be too much of a cynic to find a few things worth asking. If it’s a question of medical ethics, why were the ads allowed in the UK at all? Why not extend the ban globally? Why did it take an investigative report to cause Google to “decide” to change its policy when presumably it had the tools to identify these problems itself?

There are, of course, major differences in how these clinics are regulated and allowed to operate between the US and UK, with (as you might expect) less regulation in the former. So a one-size-fits-all ban would be premature and possibly even harmful to those looking for help. Consulting with experts is a good start.

Yet one would hope that, having found pervasive slimy tactics in a business in one major market, Google would have been more proactive about looking into the presence of those tactics elsewhere. After all, it may be a niche but this wasn’t chump change: we’re talking about millions of dollars here.

This appears to be just another entry in the log of internet companies making money from both good actors and bad, only cutting off the bad when someone else points it out. They’re happy to apologize and change the policy afterwards, but seem to have remarkably little foresight when it comes to finding such things on their own.

It’s no secret that ad blockers are putting a dent in advertising-based business models on the web. This has produced a range of reactions, from relatively polite whitelisting asks (TechCrunch does this) to dynamic redeployment of ads to avoid blocking. A new study finds that nearly a third of the top 10,000 sites on the web are taking ad blocking countermeasures, many silent and highly sophisticated.

Seeing the uptick in anti-ad-blocking tech, University of Iowa and UC Riverside researchers decided to scrutinize major sites more closely (PDF) than had previously been done. Earlier estimates, based largely on visible or obvious anti-ad-blocking measures such as pop-ups or broken content, suggested that somewhere between 1 and 5 percent of popular sites were doing this — but the real number seems to be an order of magnitude higher.

The researchers visited thousands of sites multiple times, with and without ad-blocking software added to the browser. By comparing the final rendered code of the page for blocking browsers versus non-blocking browsers, they could see when pages changed content or noted the presence of a blocker, even if they didn’t notify the user.
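The diffing step can be sketched like this (illustrative only — the paper’s actual pipeline instruments a real browser; the names here are invented). Treat each rendering as the set of element identifiers that survived, and flag pages that gain content when a blocker is active, such as a “please disable your ad blocker” notice:

```python
# Illustrative sketch of comparing a page rendered with and without an
# ad blocker. Removed elements are usually just the blocked ads; *added*
# elements suggest the page reacted to the blocker's presence.
def reacted_to_blocker(rendered_plain: set[str], rendered_blocking: set[str]) -> bool:
    appeared = rendered_blocking - rendered_plain  # content added in response
    return bool(appeared)
```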

As you can see above, 30.5 percent of the top 10,000 sites on the web as measured by Alexa are using some sort of ad-blocker detection, and 38.2 percent of the top 1,000. (Again, TechCrunch is among these, but to my knowledge we just ask visitors to whitelist the site.)

Our results show that anti-adblockers are much more pervasive than previously reported…our hypothesis is that a much larger fraction of websites than previously reported are “worried” about adblockers but many are not employing retaliatory actions against adblocking users yet.

It turns out that many ad providers are offering anti-blocking tech in the form of scripts that produce a variety of “bait” content that’s ad-like — for instance, images or elements named and tagged in such a way that they will trigger ad blockers, tipping the site off. The pattern of blocking, for instance not loading any divs marked “banner_ad” but loading images with “banner” in the description, further illuminates the type and depth of ad blocking being enforced by the browser.

Sites can simply record this for their own purposes (perhaps to gauge the necessity of responding) or redeploy ads in such a way that the detected ad blocker won’t catch them.
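The bait technique reduces to a simple check on the client side. A sketch, with invented element names: the page injects ad-like decoys and then tests whether they actually rendered.

```python
# Decoy elements named to trigger ad-blocking filter lists (names invented).
BAIT = ["banner_ad", "ad_sidebar", "sponsored_box"]

def blocker_present(rendered: list[str]) -> bool:
    """If any decoy failed to render, an ad blocker likely stripped it."""
    return any(b not in rendered for b in BAIT)
```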

In addition to detecting these new and increasingly common measures being taken by advertisers, the researchers suggest some ways that current ad blockers may be able to continue functioning as intended.

One way involves dynamically rewriting the JavaScript code that checks for a blocker, forcing it to think there is no blocker. However, this could break some sites that render as if there is no blocker when there actually is.

A second method identifies the “bait” content and fails to block it, making the site think there’s no blocker in the browser and therefore render ads as normal — except the real ads will be blocked.
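That second countermeasure amounts to a whitelist of known bait layered over the normal blocking rules. A sketch, with invented signatures (a real list would come from analysis like the researchers’):

```python
# Let known decoys through so the detector sees its bait render,
# while real ads stay blocked. Signatures here are illustrative.
KNOWN_BAIT = {"bait_banner_ad"}
AD_PATTERNS = ("banner_ad", "sponsored")

def should_block(element: str) -> bool:
    if element in KNOWN_BAIT:
        return False  # render the decoy so the site sees "no blocker"
    return any(p in element for p in AD_PATTERNS)
```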

That will, of course, provoke new and even more sophisticated measures by the advertisers, and so on. As the paper concludes:

To keep up the pressure on publishers and advertisers in the long term, we believe it is crucial that adblockers keep pace with anti-adblockers in the rapidly escalating technological arms race. Our work represents an important step in this direction.

The study has been submitted for consideration at the Network and Distributed Systems Security Symposium in February of 2018.

For decades technology companies have enjoyed a near-unbroken run of great publicity. Products and services are lauded as shiny and covetable. Adoption is couched as inevitable. Direction goes unquestioned. Engineering genius is assumed. And a generous margin is indefinitely applied to gloss over day-to-day errors (‘oh, just a few bugs!’) — allowing problematic functioning to be normalized and sanctioned in all but a handful of outlier instances.

The worst label these companies have generally had to worry about is being called ‘boring’. Or, at a push, overly addictive.

Tech giants have been given space to trumpet their products as revolutionary! Breakthrough! Cutting-edge agents of mass behavioral change! To, on the one hand, tell us their tools are actively restructuring our societies. Yet also fade into the background of the conversation the moment any negative impact gets raised.

When forced, they might put out a blog post — claiming their tools are impartial, their platforms neutral, their role mere ‘blameless intermediary’.

The not so subtle subtext is: The responsibility for any problems caused by our products is all yours, dear users.

Or at least that was the playbook, until very recently.

What’s changed is that lately the weight of problems being demonstrably attached to heavily used tech services has acquired such a gravitational and political pull that it’s becoming harder and harder for these businesses to sidestep the concept of wider societal and civic responsibilities.

Libertarians are unlikely to object to any of this, of course, but it really is long overdue that the rose-tinted glasses came off the liberal view of tech companies.

The warning signs have been there for some years now. Few apparently cottoned on.

The honeymoon is over

Silicon Valley’s creativity may have been seeded in the 1960s by hippy counterculture but the technological powerhouse its community constructed has graduated from hanging around in communes to churning out some of the most fervent capitalists in human history.

Growth is its icon, now. Power the preferred trip. And free love became voyeuristic data capture.

You may champion capitalism and believe, of all available systems, it alone delivers the best and widest societal benefits — albeit trickle down economics is a desiccating theory still in dire need of a flood… (And that’s before you even start to factor in advancing automation destroying lower skilled jobs).

But the messages tech giants have used to sell their services have hardly amounted to an honest summary of their product propositions. That would require their marketing to confess to something more like this: ‘Hi! We’re here to asset strip your personal data and/or public infrastructure to maximize our revenues and profits any way we can — but at least you’re getting to use our convenient service!’

Instead they’ve stood behind grand statements about making the world more open and connected. Organizing information and making it universally accessible. Living like a local. Having a global mission. And so on and on.

They’ve continued to channel hippyish, feel good vibes. Silicon Valley still stuck on claims of utopianism.

This of course is the slippery lie called marketing. But tech’s disingenuous messages have generally been allowed to pass with far less critical scrutiny than gets applied to companies in all sorts of other industries and sectors.

As a consequence of what? On some level it seems to be the result of an uncritical awe of gadgetry and ‘techno-newness’ — coupled with a fetishization of the future that’s greased by ‘association attachment’ to sci-fi themes that are in turn psychologically plugged into childhood nostalgia (and/or fueled by big Hollywood marketing budgets).

On the other hand it may well also be a measure of the quantity of VC funding that has been pumped into digital businesses — and made available for polishing marketing messages and accelerating uptake of products through cost subsidization.

Uber rides, for example, are unsustainably cheap because Uber has raised and is burning through billions of VC dollars.

You don’t see — say — big pharma being put on the kind of pedestal that tech giants have enjoyed. And there the products are often literally saving lives.

Meanwhile technologists of the modern era have enjoyed an extended honeymoon in publicity and public perception terms.

Perhaps, though, that’s finally coming to an end.

And if it is, that will be a good thing. Because you can’t have mature, informed debate about the pros and cons of software powered societal change if critical commentary gets shouted down by a bunch of rabid fanboys the moment anyone raises a concern.

Money for monopolizing attention

The long legacy of near zero critical debate around the de-formative societal pressures of tech platforms — whose core priority remains continued growth and market(s) dominance, delivered at a speed and scale that outstrips even the huge upheavals of the industrial revolution — has helped entrench a small group of tech companies as some of the most powerful and wealthiest businesses the world has ever known.

Indeed, the race is on between tech’s big hitters to see who can become the first trillion dollar company. Apple almost managed it earlier this month, after the launch of its latest iPhone. But Alphabet, Facebook, Amazon and Microsoft are all considered contenders in this insane valuation game.

At the same time, these companies have been disrupting all sorts of other structures and enterprises — as a consequence of their dominance and power.

Like the free Internet. Now people who spend time online spend the majority of their time in a series of corporate walled gardens that are ceaselessly sucking up their input signals in order to continuously micro-target content and advertising.

Social media behemoth Facebook also owns Instagram, also owns WhatsApp. It doesn’t own your phone’s OS but Facebook probably pwns your phone’s battery usage because of how much time you’re spending inside its apps.

The commercially owned social web is a far cry from the vision of academically minded knowledge exchange envisaged by the World Wide Web’s inventor. (Tim Berners-Lee’s view now is that the system is failing. “People are being distorted by very finely trained AIs that figure out how to distract them,” he told The Guardian earlier this month.)

It’s also a seismic shift in media terms. Mass media used to mean everyone in the society watching the same television programs. Or reading news in the same handful of national or local newspapers.

Those days are long gone. And media consumption is increasingly shifting online because a few tech platforms have got so expert at dominating the attention economy.

More importantly, media content is increasingly being encountered via algorithmically driven tech platforms — whose AIs apparently can’t distinguish between deliberately skewed disinformation and genuine reportage. Because it’s just not in their business interests to do so.

Engagement is their overriding intent. And the tool they use to keep eyeballs hooked is micro-targeted content at the individual level. So, given our human tendency to be triggered by provocative and sensationalist content, it’s provocative and sensationalist content the algorithms prefer to serve. Even if it’s fake. Even if it’s out-and-out malicious. Even if it’s hateful.

An alternative less sensationalist interpretation or a boring truth just doesn’t get as much airplay. And easily gets buried under all the other more clickable stuff.

These algorithms don’t have an editorial or a civic agenda. Their mission is to optimize revenue. They are unburdened by considerations of morality — because they’re not human.

Meanwhile their human masters have spent years shrugging off editorial and civic responsibilities which they see as a risk to their business models — by claiming the platform is just a pipe. No matter if the pipe is pumping sewage and people are consuming it.

Traditional media has its own problems with skewed agendas and bias, of course. But the growing role and power of tech platforms as media distributors suggests the communal consensus represented by the notion of ‘mass media’ is dissolving precisely because algorithmic priorities are so singleminded in their pursuit of engaged eyeballs.

Tech giants have perfected automated, big data fueled content customization and personalization engines that are able to pander to each individual and their peculiar tastes — regardless of the type of content that means they end up pushing.

None of us know what stuff another person eyeing one of these tech platforms is seeing in any given moment. We’re all in the dark as to what’s going on beyond our own feeds.

Many less techie people won’t even realize that what they see isn’t the same as what everyone else sees. Or isn’t just the sum of all the content their friends are sharing, in the case of Facebook’s platform.

But now some politicians are talking openly about regulating the Internet — apparently emboldened by growing public disquiet. That’s how bad it’s got.

After the love is gone

If we allow social consensus to be edited out by a tiny number of massively dominant content distribution platforms that are algorithmically bent on accelerating a kind of totalitarian individualism, the existential question raised is how we can hope to maintain social cohesion.

The risk seems to be that social media’s game of micro-targeted fragmentation ends up ripping us apart along our myriad fault lines — by playing to our prejudices and filtering out differences of opinion. Russian agents are just taking what’s there and running with it — via the medium of Facebook ads or Twitter bots.

Were they able to swing a vote or two? Even worse: Were they so successful at amplifying prejudice they’ve been able to drive an uptick in hate crime?

Even if you set aside directly hostile foreign agents using tech tools with the malicious intent of sowing political division and undermining democratic processes, the commercial lure of online disinformation is a potent distorting force in its own right.

This pull spun up a cottage industry of viral content generating teens in Macedonia — thousands of miles away from the US presidential election — financially encouraging them to pen provocative yet fake political news stories designed to catch the attention of Facebook’s algorithm, go viral and rack up revenue thanks to Google’s undiscriminating ad network.

The incentives on these platforms are the same: It’s about capturing attention — at any cost.

Another example where algorithmic incentives can be seen warping content is the truly awful stuff that’s made for (and uploaded at scale to) YouTube — with the sole and cynical intention of ad display monetization via children’s non-discerning eyeballs. No matter the harm it might cause. The incentives of the medium form content into whatever is necessary to generate the click.

In the past decade we even coined a new word for this phenomenon: ‘Clickbait’. Bait meaning something that looks tasty when glimpsed, yet once you grab it you’re suddenly the thing that’s being consumed.

Where algorithmic platforms have been allowed to dominate media distribution what’s happened is the grand shared narratives that traditionally bring people together in societies have come under concealed yet sustained attack.

Both as a consequence of algorithmic micro-targeting priorities; and, in many cases, by intentional trolling (be that hostile foreign agents, hateful groups or just destructive lolzseekers) — those agents and groups who have got so good at understanding and gamifying tech platforms’ algorithms they’ve been able to “weaponize information” as the UK Prime Minister put it earlier this month — when she publicly accused Russia of using the Internet to try to disrupt Western democracies.

And tech platforms gaining so much power over media distribution seems to have resulted in a splintering of public debate into smaller and angrier factions, with groups swelling in polarized opposition over the dividing lines of multiple divisive issues.

Some of the heated debate has been fake, clearly (seeded on the platforms by Kremlin trolls). But the point is fake opinions can help form real ones. And again it’s the tech pipes channeling and fueling these divisive views which work to fracture social consensus and undo compromise.

Really the result looks to be the opposite of those feel-good social media marketing claims about ‘bringing people closer together’.

Cashing out

A few massively powerful tech platforms controlling so much public debate is not just terrible news for social cohesion and media pluralism, given their algorithms have no interest in sifting fake from real news (au contraire). Nor even in airing alternative minority perspectives (unless they’re divisively clickable).

It’s also bad news if you’re an entrepreneur hoping to build something disruptive of your own.

Unseating a Google or a Facebook is hardly conceived of as a possibility in the startup space these days. Instead many startups are being founded and funded to build a specific feature or technology in the explicit hope of selling it to Google or Facebook or Amazon or Apple as a quick feature bolt-on for their platforms. Or else to dangle relevant talent in front of them and encourage an acquihire.

These startups are effectively already working as unpaid outsourcers within tech giants’ product dev departments, bootstrapping or raising a little early funding for their IP and feature idea in the hopes of cashing out with a swift exit and a quick win.

But the real winners are still the tech giants. Their platforms are the rule and the rulers now.

If Facebook had not been allowed to acquire additional social networks it might be a different story. Instead it’s been able to pay to maintain and extend its category dominance.

Just last month it acquired a social startup, tbh, which had got a little bit popular with teens. And because it already owns or can buy any potentially popular rival network, network effects work to seal its category dominance in place. The exception is China — which has its own massively dominant homegrown social giants as a consequence of actively walling out Western tech giants.

In the West, the only shade darkening the platform giants’ victory parade is the specter of regulators and regulation. Google, for example, was fined a record-breaking $2.73BN this June by the EU for antitrust violations around how it displays price comparison information in search results. The Commission judged it had both demoted rival search comparison services in organic search results and prominently placed its own.

In Europe, where Google has a circa 90 per cent share of the Internet search market, it has been named a dominant company in that category — putting it under special obligation not to abuse its power to try to harm existing competitors or block new entrants.

This obligation applies both in a market where a company is judged to be dominant and in any other markets it may be seeking to enter — which perhaps raises wider competition questions over, for example, Alphabet/Google’s new push, via its DeepMind division, into the digital health sector.

You could even argue that the overturning of net neutrality in the US could have the potential to challenge tech platform power. Except that’s far more likely to end up penalizing smaller players who don’t have the resources to pay for their services to be prioritized by ISPs — while tech giants have deep pockets and can just cough up to continue their ability to dominate the online conversation.

Even the European Commission’s record-breaking antitrust fine against Google Shopping shrinks beside a company whose dominance of online advertising has brought it staggering wealth: Its parent entity, Alphabet, posted annual revenues of more than $90BN in 2016.

That said, the Commission has other antitrust irons in the fire where Google is concerned — including a formal investigation looking at how other Google services are bundled with its dominant Android mobile OS. And it has suggested more fines are on the way.

European regulators’ willingness to question and even attempt to check tech platform power may be inspiring others to take action — earlier this month, for example, the state of Missouri launched an investigation into whether Google has broken its consumer protection and antitrust laws.

Meanwhile Silicon Valley darling, Uber, got a big shock this September when the local transport regulator in its most important European market — London — said it would not be renewing its license to operate in the city, citing concerns about its corporate behavior and its attitude to passenger safety. (A decision that’s since been validated by the news which broke this month that Uber had concealed a massive data breach affecting 57M of its users and drivers for a full year.)

Next year incoming European data protection regulation will bring in a requirement for companies to disclose data breaches within 72 hours — or face large fines of up to 4% of their annual global turnover (or €20M, whichever is greater).
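As arithmetic, that penalty cap is simply the greater of the two figures (a sketch for illustration only; the regulation’s tiering and discretion are far more involved):

```python
# The maximum GDPR penalty: the greater of 4% of annual global
# turnover or EUR 20M.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(0.04 * annual_turnover_eur, 20_000_000.0)
```

For a company turning over €90BN, the cap would be on the order of €3.6BN — which helps explain why lawmakers expect it to change behavior.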

These much stiffer penalties for data mishandling are intended, by European lawmakers, to rework how digital businesses view information generated by and around their users.

The change they’re seeking is for digital data to no longer be seen as a limitless resource to be siphoned off, stored and ceaselessly data-mined; but as a potential liability to be collected sparingly, applied carefully and deleted the moment it’s no longer needed — because the financial risk of holding on to it has suddenly, drastically inflated.

The General Data Protection Regulation (GDPR) can really be seen as a reaction to the kinds of behaviors that have enabled and entrenched a few tech giants at the top of the pile.

Another example: Google’s AI division, DeepMind, has had its fingers burnt in the UK this summer after a data-sharing partnership it inked with a London National Health Service Trust was judged to have violated data protection law.

The rules around sharing medical data are already subject to multiple layers of regulation, ethics and information governance. Nonetheless DeepMind was handed access to the non-anonymized medical records of 1.6M patients without their knowledge or consent, and under loosely defined contract terms that failed to firmly lock down what the company might be able to do with highly sensitive medical data.

Was the NHS Trust suffering from an overly shiny perception of technologists’ intentions when it agreed to hand millions of people’s medical records to an ad-targeting giant in exchange for a little free app development assistance and a few years’ unpaid access to the resulting service? You really do have to wonder.

It had been DeepMind’s intention to apply AI to the same data-set, before it presumably grasped the extent of the legal minefield it was careening into and apparently backed away from feeding the medical records to its AI models (it has claimed the AI research it had intended to conduct on this data-set “has not taken place”).

In the wake of this controversy, the data-sharing contracts DeepMind has been able to ink with other NHS Trusts (most recently Yeovil) have been considerably tighter than the terms applied to its initial data grab. And none of these data-sharing arrangements enable DeepMind to use its data access to develop AI. (A research partnership with Moorfields Eye Hospital is the sole exception.)

The long and short of all that is that — at least in Europe — regulators are responding to tech giants’ moves into sensitive spaces rather faster than they might have done in years past. (Even if still a little after the fact.) And as the sheen comes off tech’s marketing, it’s arguably getting easier for politicians to challenge and stand up to big tech’s unchecked claims.

More and more politicians are also wondering aloud how society can place checks and balances on tech platform power. And as the next sets of rules get forged, they are being written with eyes opened to the disruptive power of software.

Think different

It may be that we’ve reached the end of the road for tech businesses being able to get such an easy and uncritical ride — and that means tech businesses of all stripes, large or small, established or just starting up.

If so, it’ll be because a few tech giants got so good at selling their one-sided stories they were able to crush competition and entrench themselves as the default. For search. For social. For shopping. For media. For capturing and monopolizing attention.

And because no one thought to question what would happen if — for example — a dominant communication platform allows anyone with a few dollars to spend to micro-target any kind of advertising they choose. Anyone — or any government or hostile actor with a divisive agenda to push.

Facebook has defended its failure to anticipate misuse of its tools by saying, in effect, ‘we just didn’t think of that’ — which suggests the company’s management was either drunk on its own Kool-Aid or deliberately chose not to be responsible for anything outside the narrow scope of business growth.

But if they’re not taking responsibility for negative civic and societal impacts, regulators are going to have to make them act with more care.

Nor did anyone apparently think to question what would happen to the quality of information being surfaced and made mainstream if a dominant tech platform’s information-sorting algorithms were chained directly to its revenue-generating mechanism.

Pulling up the most clickable stuff might be great for business growth but it’s really not so edifying for the eyeballs and minds encountering the base stuff your tech ends up pushing.

A lot of people have spent a lot of time warning about what’s lost when you allow hateful speech to dominate and drown out constructive opinions.

Meanwhile, people in popular tourist destinations have increasingly been asking what happens to local communities and neighborhoods if you throw open the door to short-term rental opportunism without considering the impact on long-term residents and rents.

You still won’t read anything about those downsides in Airbnb’s marketing, though.

And what about the people who fought for better background checks for the self-employed contractors being placed in positions of trust on ride-hailing platforms — only to be rebuffed, because skipping fingerprint checks lowered the barrier to faster business expansion?

Whose fault is it that software has been given such a free ride to eat the world no matter the wider human and societal costs?

It’s especially egregious when you consider that just a tiny bit of critical thought could have helped head off so many of these bad outcomes.

A little more critical thinking, for example, and it would have been obvious that requiring tech platforms to disclose who’s paying for digital political ads is a good idea. TV and print ads already do it. Why on earth should digital ads be any different?

Had that rule been in place last year, Russian agents would at least have had to be a bit more creative about their election disruption. As it was, they were able to pay in rubles — and under their known ‘troll farm’ name.

It beggars belief how the power of tech platforms has passed under the political radar for so long. And likely speaks to how tech illiterate so many politicians still are.

Critical analysis of technologies to consider their wider impacts — on politics, on relationships, on emotional and mental wellbeing, on local and global environments — is what’s been sorely lacking in the tech narrative for years.

So really it’s hardly surprising that civic and societal considerations have been systematically de-prioritized by tech algorithms and omitted from the one-sided conversations technologists love to have.

But maybe the backlash is finally coming.

And instead of mindless cheerleading accompanying every startup pitch we’ll get a lot more Bodega fury wake-up calls.

Certainly we will if tech entrepreneurs keep making ‘disruption’ a clarion call for destruction and exploitation.

So founders, ask yourselves this: What might break because of what I’m trying to make?