What centralization? The web isn't centralized. Consumer culture is centralized, and consumer culture on the web is what's failing. Facebook and Google and similar services are not "The Web" and they are not "The Internet," and if the people who can't exist outside of those bubbles burn down with them, so what? The internet will survive it; the web will survive it. These things happen in nature all the time, and it's interesting to see them happen online as well, in a non-natural, man-built ecosystem.

We have "too-big-to-fail" companies running all of the services that most sites and users depend on, which they can shut down at any time with little-to-no repercussions (Google in particular is notorious for this). We've got governments erecting their own firewalls to shut down opposing views. We've got ads, trackers, and biased algorithms that help ensure you stumble upon their sponsored content.

Of course this sort of stuff arises "naturally" from human activity, but we can do better.

There's no such thing as "Too Big To Fail" on the internet. There was a period during the earlier days when infrastructure was centralized, but from what I've read those days are behind us, and if one major backbone provider were to fail, the system could adapt.

SaaS providers like Google and Facebook don't matter at all. Their importance has been exaggerated to epic proportions. If Google/Alphabet were to fail tomorrow the Internet would barely notice outside of social conversation. It'd be a curiosity, but the system would adapt. "Some People" might be without E-mail for a few hours, and a lot of E-mail providers would see surges in sales.

We can help encourage that survivability by teaching new users about alternatives for search, blogging, following friends, and so on.

A large part of the internet economy operates on the assumptions that services like AWS and Google and Facebook remain available. Businesses are built on top of these services, and it's not clear that other SaaS providers could fill the gaps, were the big guns to fail.

Tens of thousands of businesses would lose their primary source of income (google ads).
Tens of thousands of businesses would lose their workplace services (G Suite).
Analytics systems that are running 24/7 for multibillion dollar corporations would shut down. Who has the capability to fill that void?
Millions of creators would lose their platform (YouTube).

You can definitely claim that "Too Big To Fail" is not a true concept, but the significance of the failure is not to be taken lightly.

Advertising is dead? Come on. "<x> is dead" gets said all the time about a wide range of subjects and is almost universally bullshit. If advertising is dead, why are companies still shelling out millions for 30-second TV spots during the Super Bowl? How are some kids paying off their student loans by making YouTube videos? I agree that a revenue model based on advertising needs to be thought about long and hard, but 'dead' -- nah. I doubt that advertising will ever truly die; unfortunately, advertisers have figured out how to hack human brains and emotions with it.

"will somehow contact the millions of new users that come online each year and persuade them to"

That's literally how the internet exploded, so why can't the same process work for improving people's ability to use it in their own best interests? The process is helped along by public organizations like the EFF, or public figures like R. Stallman, encouraging people to move away from corporate control.

The greatest feature of the internet is decentralization. Facebook and Google aren't going to change that, even though every corporation would like to, as long as we users don't fall asleep at the wheel.

Don't take personal offense, but the idea that you need some corporation to handle your E-mail is a misconception, one that's reinforced by those very corporations' marketing departments.

I've been operating my own E-mail servers for two different domains for pennies per day for years now. It's cheap, it's (relatively) easy, and it's _better_ when you control it yourself. If Gmail were to fail, it would be one of the best things to happen to the world economy.

If net neutrality dies, the web absolutely becomes centralized. Facebook and Google aren't the web, but they will be the web if they're the only ones that can pay to have their data passed over the wire.

Net neutrality would be nice to have, but the way it's been overhyped is silly. The internet developed without it. We've never really had it -- people like to cite a few years before the recent FCC decision as a period of net neutrality, but it had so few effects that the average consumer did not notice it being implemented and did not notice it going away. It certainly didn't prevent anything Google or Facebook wanted to do.

If you're worried about centralization, net neutrality won't help much. Facebook and Google are large enough to crowd out other players with or without net neutrality.

We did have it: de facto net neutrality, enforced by a competitive ISP market. It worked. And once we started to lose that, consumers absolutely did feel the impact. Remember when Comcast throttled BitTorrent?

>If you're worried about centralization, net neutrality won't help much. Facebook and Google are large enough to crowd out other players with or without net neutrality.

Sure, they can still dominate under net neutrality. It's just significantly easier without it.

>...a few years before the recent FCC decision as a period of net neutrality, but it had so few effects that the average consumer did not notice it being implemented.

The whole point of net neutrality is that you shouldn't notice it, because you should just be able to use your network connection for whatever you want.

In 2005, a Republican-appointed FCC commissioner under a Republican president ordered an ISP to stop blocking VoIP services on its network. This is a real thing that happened, and I think plenty of people have noticed how useful VoIP and related services like Skype, FaceTime, Hangouts, and Discord are, even if they aren't aware that the ability to use them has only been enabled by regulatory intervention.

One of the original reasons that Bittorrent clients started sending encrypted data was to get around ISP packet inspection that tried to throttle torrents. That worked for a while, until ISPs started throttling based on torrent-like transfer patterns.

That wasn't a single regional ISP; it was a pretty common behavior. It's also something that would be fixed by either having a range of different ISPs available in a region, or enforcing net neutrality (in the sense of treating data as an opaque payload to be delivered).

> Facebook and Google are large enough to crowd out other players with or without net neutrality.

"crowd out" is a vague term. Maybe they make the most money, but money isn't the only thing that matters. The forums I frequent and the DuckDuckGo are doing just fine, but I'm not sure they would be if net neutrality falls apart completely.

I'm in full support of net neutrality (as it seems everyone but the telecom companies is, yet here we are, but I digress), but I like to think that if this "doomsday" scenario were to happen, the outcry would finally be swift and effective in tearing it down for good. People won't accept it, because they have already experienced what the open web can be.

The internet is communication technology, and communication technology is network-effect-heavy. The value of "The Web" and "The Internet" is more than proportional to the number of people using it. Email is useful only when the person you are messaging has email.
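One common (and contested) formalization of that "more than proportional" claim is Metcalfe's law: a network's value tracks the number of possible pairwise connections, which grows quadratically in the user count. A toy sketch, purely for illustration:

```python
# Toy illustration of why network value is "more than proportional"
# to user count: the number of possible pairwise connections grows
# quadratically. (Metcalfe's law; a contested but common model.)

def pairwise_links(n: int) -> int:
    """Distinct communicating pairs among n users: n choose 2."""
    return n * (n - 1) // 2

# Doubling users from 100 to 200 roughly quadruples the link count.
print(pairwise_links(100))   # 4950
print(pairwise_links(200))   # 19900
```

Which is exactly why email is useless until the other party has it, and why each additional user makes the whole network more valuable.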

So... if much of the web (by usage) exists on Facebook, much of its value exists on Facebook. That value is under Facebook's control, and it scales along with Facebook's share of users.

This depends on how you define value. If you're talking about advertising revenue, then yes (and bizarrely other services let you authenticate through them, which I still don't understand). But if you're talking about the exchange of practical information, then Facebook's pretty meager and sites with technical documentation or Q&A are much better (for software developers, anyway).

I don't believe network effect is one-size-fits-all. Facebook is fine for keeping up with casual acquaintances. But if you want to find a local group with shared interests, then you may have better luck on Meetup. Or if you want to share Q&A with other developers, Stack Overflow is probably better. Each one has its own domain within which it's king.

Yep, that's great too. There is a challenge there, which is finding that information on personal blogs in the first place. That's been solved with search engines so far. I'm curious if a more distributed, network-oriented approach might work. Basically the Bacon number applied to information.

If people have a means to extract their Facebook posts and publish their content on their own blogs, then that value isn't really under Facebook control. It's a symbiotic relationship, and it's a safe bet that if Facebook were to make the wrong moves, the users would simply leave. Indeed that's been happening in recent months over privacy concerns and other things.

If we think in terms of regular, non-tech-savvy consumers, then it is true: they are pretty much synonymous with the internet. Google is especially ubiquitous in this regard and reminds me of Microsoft in the pre-Firefox age.

If you think about it, the amount of lock-in to Google is scary: the browser, search, mail, navigation, advertising, all your photos and documents, and now, with AMP, to a certain degree the web content itself.

If you ask them to describe the world in their own vocabulary, then yes, those things are "the internet". But if you take away the rest of the internet, these users will absolutely notice, even if they might struggle to articulate exactly what happened.

I was going to say something like this, except without the assumption that what we have today would exist. For all the concerns about walled gardens, I can still link to a Facebook post in a Slack message and vice versa. The hypothetical non-web world would probably consist of CompuServes and MSNs, with very limited notions of links.

Has there been any effort to objectively measure the supposed centralization of the web over time? Although I've no doubt it is happening, the article doesn't seem to provide any hard evidence for the claim. It's interesting to think about how you'd do it: e.g., you could measure clustering in the global PageRank graph over time, but that wouldn't capture what happens inside Facebook etc.
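One off-the-shelf way to put a number on it, assuming you had per-site traffic shares (the figures below are invented for illustration), is a concentration index like the Herfindahl-Hirschman Index borrowed from antitrust economics:

```python
# Sketch: quantifying centralization with the Herfindahl-Hirschman
# Index (HHI) over each site's share of total traffic. The traffic
# figures here are made up for illustration; a real measurement
# would need an actual traffic dataset (or the PageRank-clustering
# approach suggested above).

def hhi(traffic):
    """Sum of squared traffic shares; approaches 1.0 under monopoly."""
    total = sum(traffic)
    return sum((t / total) ** 2 for t in traffic)

dispersed = [1] * 100               # 100 equally popular sites
concentrated = [60, 20] + [1] * 20  # two giants plus a long tail

print(round(hhi(dispersed), 4))     # 0.01  -> decentralized
print(round(hhi(concentrated), 4))  # 0.402 -> centralized
```

Tracking a metric like this year over year would at least make the centralization claim falsifiable, though it still misses activity locked inside a single platform.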

For the most part, I think the conclusion that "a regulator is needed" is usually reached by people who have very little experience with one.

Regulators are more flexible than legislatures, but are still not very flexible in practice. The difference is mostly that they can enforce rules more effectively, but the rulemaking itself is not generally very good.

Causation runs at least partially the other way, but think of the current industries that are heavily regulated: banking, medicine, tobacco...

First, they regulate relative to very legible rulesets. Even if they start with broad mandates, once enforcement gets going, a specific ruleset emerges. A common goal is "better inform the consumer": pharmaceutical side effects, banking fees, how we use cookies. This nearly always becomes a joke. The company does not want to highlight the information, but it has to. Regulators chase, fine, and sanction. Eventually, we get popups, small print, and quickfire lists of "disclosure". All that stuff becomes essentially mandatory.

Second, "compliance" becomes a power word, within companies. The lawyers and bureaucrats get much more powerful.

Also, regulated industries are almost always big-firm and incumbent friendly. Calling these businesses hard to enter is an understatement. You basically can't start a tobacco company or a bank unless you already are one, or are well connected and wealthy.

Idk what the solution is, but I doubt it is this. We'll see how GDPR plays out, but I'm not holding my breath.

The problem (imo) is that the internet is a communication technology. These have network effects, and centralisation tendencies.

In a perfect world, the solutions to the problem (I agree with tbl about the problem) are protocols, browsers and the like.

If email weren't a protocol/standard, it'd have been invented as a proprietary service. But since the standard existed and was popular... it lives. Building onto email in an open way has generally failed, though, so email lists gave way to proprietary services.

There are still parts of the internet that are open like this. Podcasting is an interesting one; it gained a lot of ground relatively recently, in the mature-web era.

On the business end, podcasts seem to be much better for the content makers than YouTube or other distribution methods. Interesting. They seem like an echo from an older internet.

A lot of things could have been standards: app stores, social networks... A lot of things could have been solved by the browser, including half the stuff "regulators" are trying to fix now. Ultimately, online tracking works because browsers and standards let it happen.

It's just hard, and the incentive to grab a little internet fief is too strong. In some ways, it's surprising the web is as open as it is.

This reads like an unbearable collection of news headlines, combined with political slogans that mean absolutely nothing.

One example:

I remain committed to making sure the web is a free, open, creative space – for everyone.

That vision is only possible if we get everyone online, and make sure the web works for people.

The web is not free - I have to pay an ISP to access it. Who is 'everyone' in this instance? I don't know what 'open' means exactly, and creative how? Because I can post pictures of cats on reddit? What is creative about 'the web' specifically?

What 'vision'? Why is it only possible if 'we', who is we? get everyone online? Why is that a prerequisite? 'Make sure the web works for people'? What does that even mean?

Among the things I care about are clock skew issues in distributed systems.

Over and over again I've had to explain to people that they don't need their hand-rolled solution, because a better one has been in HTTP since 1.0, for purposes of cache invalidation. It's about as accurate as NTP without opening up an attack vector (hijacking NTP or an NTP DoS).

Any time you contact a server your User Agent tells it what time it is. Any time the server responds it does the same. Both parties can calculate offsets just fine with just the HTTP headers. The only consensus that matters is between the hub and one spoke.

When the server says "It's 12 noon and this response document was generated at 11:58:50" you know it was 70 seconds ago. Nevermind that your browser thinks it's currently 4:30 pm. The age of the file is 'now' - 70 seconds. If you PUT a file at 4:35 the server will say it was created at 12:05.
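The offset-and-age arithmetic above can be sketched directly from the `Date` and `Last-Modified` headers; here is a minimal Python version (the header values are illustrative, not from a real exchange):

```python
# Sketch of the clock-offset and age arithmetic described above,
# using only the HTTP Date and Last-Modified headers. Header values
# below are illustrative.
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def clock_offset(date_header: str, local_now: datetime) -> float:
    """Seconds the server's clock is ahead of ours (negative if behind)."""
    return (parsedate_to_datetime(date_header) - local_now).total_seconds()

def response_age(date_header: str, last_modified: str) -> float:
    """Seconds between generation and the server's 'now', by its own clock."""
    return (parsedate_to_datetime(date_header)
            - parsedate_to_datetime(last_modified)).total_seconds()

# Server says it's noon and the document was generated at 11:58:50:
print(response_age("Mon, 01 Jan 2018 12:00:00 GMT",
                   "Mon, 01 Jan 2018 11:58:50 GMT"))       # 70.0

# Our clock reads 4:30 pm while the server says noon:
local = datetime(2018, 1, 1, 16, 30, 0, tzinfo=timezone.utc)
print(clock_offset("Mon, 01 Jan 2018 12:00:00 GMT", local) / 3600)  # -4.5
```

Apply the offset to any local timestamp before comparing it against server-reported times and the skew disappears, no NTP required.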

Anyway, this is my favorite thing about the HTTP protocol. Being human-readable in traffic dumps is #2.