My web app died from performance bankruptcy

There’s a widely-used piece of DOM API called addEventListener. Almost every web site or web app that does anything dynamic with JS probably depends on this method in some way.

Up until 2016 the convention was that you just pass an event type, a callback and an optional “useCapture” boolean flag:

target.addEventListener(type, listener[, useCapture]);

Then Google came along and decided that this API is not extensible enough (which is true). What if one wanted more options? Surely, there must be a map of options, not just a single positional boolean argument. To which, again, I can’t agree more. So they added a second form:

target.addEventListener(type, listener[, options]);

Which means you can’t practically use the new form without feature detection. At all. Never ever. Old browsers can’t be made to understand the options form: they coerce the options object to a truthy useCapture flag and silently register your listener with the wrong semantics. Period.

But that’s fine. That’s all right. That’s why we have feature detection.

DOM APIs aren’t meant to be used

Ok, so there must be some sort of feature detection API accompanying this change, right? Well, if you thought so, you clearly have never worked with web APIs. Even though web developers are supposed to always use feature detection, they’re also expected to rely on complex, brittle, accidental side effects to perform it.

Basically, you’re constructing a special object with a side-effect-producing getter and hope for the browser to access it when you install a fake event listener. Surely, what could go wrong?
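The hack looks roughly like this (a sketch; the function name and probe event name are mine, but the getter trick itself is the one in question):

```javascript
// Getter-based detection: build an options object whose `passive` property
// has a getter with a side effect. If the engine understands the options
// form, it reads `passive` while registering the listener, firing the getter.
function supportsPassive(target) {
  let supported = false;
  try {
    const opts = Object.defineProperty({}, 'passive', {
      get() {
        supported = true; // the engine looked at options.passive
        return false;
      }
    });
    target.addEventListener('passive-probe', () => {}, opts);
    target.removeEventListener('passive-probe', () => {}, opts);
  } catch (e) {
    // very old engines may throw here; treat that as "unsupported"
  }
  return supported;
}
```

An old browser never touches `.passive` (it just coerces the third argument to a boolean), so the getter never fires and the flag stays false.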

To be fair, there’s an open discussion about adding better feature detection around this. But the timing is as messy as the API itself. If feature detection is ever implemented, we’ll have three classes of browsers:

- the ones that don’t support options at all,
- the ones that do support it but don’t support feature detection for it (so you’ll have to resort to the getter + fake event hack anyway),
- and the ones that support both feature detection and the API.

Think about it: a feature detection API that itself needs to be detected ¯\_(ツ)_/¯.

Making Chrome fast

But that’s not the end of the story. The Chrome team proposed this API change, adding the passive option, because it allowed them to speed up scrolling on mobile websites.

The gist of it: if you mark a scroll/touch event listener as passive, mobile Chrome can scroll your page faster (let’s not go into the details, but that’s how things are). Old websites continue to work (slowly, as before), and new websites can opt in to being faster at the cost of an additional feature check and one more option. It’s a win-win, right?
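In code, the opt-in the Chrome team had in mind is roughly this (a sketch; the function and names are illustrative):

```javascript
// Attach a scroll handler marked as passive. The flag is a promise that the
// handler will never call preventDefault(), so the browser can start
// scrolling immediately instead of waiting for the handler to return.
function attachPassiveScroll(target, handler) {
  target.addEventListener('scroll', handler, { passive: true });
}
```

The whole deal is one extra property in the options object; the speedup comes from the browser no longer having to block scrolling on your JS.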

As it turned out, Google wasn’t concerned about your websites at all. It was more concerned with the performance of its own product, Google Chrome for mobile. That’s why on February 1, 2017, they made all top-level touch event listeners passive by default. They call it “an intervention”.

Now, this is a terrible thing to do. It’s very, very, very bad. Basically, Chrome broke half of its users’ websites, the ones that relied on touch/scroll events being cancelable, for the benefit of winning some performance on websites that were not yet aware of this optional optimization.

This was not a backward-compatible change by any means. Every website and web app with any sort of draggable UI (sliders, maps, reorderable lists, even slide-in panels) was affected and essentially broken by this change.
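The fix such sites were forced to ship looks roughly like this (a sketch with illustrative names): explicitly opt out of the passive default so preventDefault() keeps working.

```javascript
// A typical drag handler calls preventDefault() to stop the page from
// scrolling under the user's finger. Once the listener is treated as
// passive by default, that call is silently ignored, so the opt-out
// must now be explicit.
function attachDragHandler(target, onDrag) {
  target.addEventListener('touchmove', (event) => {
    event.preventDefault(); // silently a no-op in a passive listener
    onDrag(event);
  }, { passive: false });   // explicit opt-out restores cancelability
}
```

And of course, on pre-options browsers the `{ passive: false }` object itself is coerced to a truthy useCapture, which is exactly why the feature-detection mess described earlier can’t be avoided.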

Yet, if things become faster, people can always praise Mobile Chrome for the improvement. And if something breaks, they’ll probably blame the website anyway. RByers (a Google engineer who advocated for the intervention) commented on Jun 16:

Our data suggests we made the right trade-off for the web platform as a whole and for Chrome as a product. I understand that your perspective is the opposite and I’m sorry about that - I really wish there was a way to make everyone happy, that’s just not reality.

Also, notice how harsh the timeline on this update was. The passive option shipped on June 1, 2016 (Chrome 51). Passive-by-default shipped on February 1, 2017 (Chrome 56). That’s just 8 months! They couldn’t even agree on a feature detection API in that time! Before June 2016 you didn’t even have an API for marking listeners passive, and just 8 months later your app was already silently broken, punished for not using a new API that other browsers had barely started to roll out!

We really don’t have more than anecdote (and our metrics) on the “support” side, and no precise way to quantify the breakage. I’d love to have a more quantifiable way to make these sorts of trade offs.

But in Chrome we’re fundamentally unwilling to allow the mobile web to continue to die from performance bankruptcy. Other browsers are less aggressive, and people who prefer to be more conservative (preferring maximal compatibility over being part of moving the web forward aggressively) should prefer to use a more conservative browser.

As a user, I certainly do not care about “being part of moving the web forward aggressively”. Why should I? I like my stuff working, not broken. Nobody ever wants it the other way around.