In defense of the modern web

I expect I'll annoy everyone with this post: the anti-JavaScript crusaders, justly aghast at the amount of stuff we slather onto modern websites; the people arguing the web is a broken platform for interactive applications anyway and we should start over; React users; the old guard with their artisanal JS and hand-authored HTML; and Tom MacWright, someone I've admired from afar since I first became aware of his work on Mapbox many years ago. But I guess that's the price of having opinions.

Tom recently posted Second-guessing the modern web, and it took the front end world by storm. You should read it, or at the very least the CliffsNotes. There's a lot of stuff I agree with to varying degrees:

There is a sweet spot of React: in moderately interactive interfaces ... But there’s a lot on either side of that sweet spot.

It's absolutely the case that running React in the client for a largely static site is overkill. It's also true that you have to avoid React if your app is very heavily interactive — it's widely understood that if you want 60fps animation, you will likely have to bypass the React update cycle and do things in a more imperative fashion (indeed, this is what libraries like react-spring do). But while all this is true of React, it's much less true of component frameworks in general.
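As a sketch of that imperative escape hatch — assuming `node` is a DOM element, and with the `raf` and `now` parameters made injectable purely for illustration — a 60fps animation driven outside any component re-render might look like:

```javascript
// Drive an animation by mutating the DOM directly in a
// requestAnimationFrame loop, instead of re-rendering a component
// tree on every frame. `raf` and `now` are injectable for testing.
function animate(node, durationMs, raf = requestAnimationFrame, now = () => performance.now()) {
  const start = now();
  function frame() {
    const t = Math.min((now() - start) / durationMs, 1);
    node.style.transform = `translateX(${t * 100}px)`; // mutate, don't setState
    if (t < 1) raf(frame); // schedule the next frame
  }
  raf(frame);
}
```

Libraries like react-spring end up doing something in this spirit under the hood, precisely because going through the full update cycle on every frame is too expensive.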

User sessions are surprisingly long: someone might have your website open in a tab for weeks at a time. I’ve seen it happen. So if they open the ‘about page’, keep the tab open for a week, and then request the ‘home page’, then the home page that they request is dictated by the index bundle that they downloaded last week. This is a deeply weird and under-discussed situation.

It's an excellent point that isn't really being addressed, though (as Tom acknowledges) it's really just exacerbating a problem that was always there. I think there are solutions to it — we can iterate on the 'index bundle' approach, we could include the site version in a cookie and use that to show actionable feedback if there's a mismatch — but we do need to spend time on it.
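One way the cookie idea could look — all names here are illustrative, not an API from any framework. The server sets the current deploy version in a cookie on every response, and the long-lived bundle compares it with the version baked in at build time:

```javascript
// Detect a stale client bundle: the server sets `app_version` in a
// cookie on every response; the bundle knows which version it was
// built from. A mismatch means the tab has outlived a deploy.
const BUNDLE_VERSION = '42'; // injected at build time

function bundleIsStale(cookieString) {
  const match = /(?:^|;\s*)app_version=([^;]+)/.exec(cookieString || '');
  return match !== null && match[1] !== BUNDLE_VERSION;
}
```

If `bundleIsStale(document.cookie)` returns true on navigation, the app can show a "new version available — reload?" prompt instead of serving week-old logic.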

It’s your startup’s homepage, and it has a “Sign up” button, but until the JavaScript loads, that button doesn’t do anything. So you need to compensate.

This is indeed very annoying, though it's easy enough to handle this sort of thing gracefully — we just need to care enough.
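A sketch of what caring enough can look like: the markup is a real link that navigates without JavaScript, and the script merely upgrades it (`openModal` here is a hypothetical function, not a real API):

```javascript
// Markup: <a id="signup" href="/signup">Sign up</a>
// The link works before the bundle arrives; once JS loads,
// we intercept the click and show a modal instead.
function enhanceSignup(link, openModal) {
  link.addEventListener('click', (event) => {
    event.preventDefault(); // JS is here — skip the full navigation
    openModal();
  });
}
```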

But I'm not sure what this has to do with React-style frameworks — this issue exists whatever form your front end takes, unless you make it work without JS (which you should!).

Your formerly-lightweight application server is now doing quite a bit of labor, running React & making API requests in order to do this pre-rendering.

Again, this is true, but it's more React-specific than anything. React's approach to server-side rendering — constructing a component tree, then serialising it — involves overhead that isn't shared by frameworks that, for example, compile your components (hi!) to functions that just concatenate strings for SSR, which is dramatically faster. And those API requests were going to have to be made anyway, so it makes sense to make them as early as possible, especially if your app server and API server are close to each other (or even the same thing).
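To make the contrast concrete, here's roughly what a compiled-to-strings component amounts to (a hand-written sketch of the idea, not actual compiler output): there is no tree to construct and then serialise, just function calls concatenating strings.

```javascript
// SSR as plain string concatenation — no virtual DOM, no
// post-hoc serialisation step.
function Greeting({ name }) {
  return `<h1>Hello ${name}!</h1>`;
}

function App({ user }) {
  return `<main>${Greeting({ name: user })}</main>`;
}

App({ user: 'world' }); // → '<main><h1>Hello world!</h1></main>'
```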

The dream of APIs is that you have generic, flexible endpoints upon which you can build any web application. That idea breaks down pretty fast.

Minor quibbles aside, Tom identifies some real problems with the state of the art in web development. But I think the article reaches a dangerous conclusion.

Let's start by dissecting this statement:

I can, for example, guarantee that this blog is faster than any Gatsby blog (and much love to the Gatsby team) because there is nothing that a React static site can do that will make it faster than a non-React static site.

With all due respect to those involved, I don't think Gatsby is a particularly relevant benchmark. The gatsby new my-site starter app executes 266kB of minified JavaScript for a completely static page in production mode; for gatsbyjs.org it's 808kB. Honestly, these are not impressive numbers.

Leaving that aside, I disagree with the premise. When I tap on a link on Tom's JS-free website, the browser first waits to confirm that it was a tap and not a brush/swipe, then makes a request, and then we have to wait for the response. With a framework-authored site with client-side routing, we can start to do more interesting things. We can make informed guesses based on analytics about which things the user is likely to interact with and preload the logic and data for them. We can kick off requests as soon as the user first touches (or hovers) the link instead of waiting for confirmation of a tap — worst case scenario, we've loaded some stuff that will be useful later if they do tap on it. We can provide better visual feedback that loading is taking place and a transition is about to occur. And we don't need to load the entire contents of the page — often, we can make do with a small bit of JSON because we already have the JavaScript for the page. This stuff gets fiendishly difficult to do by hand.
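The preloading part of that, at least, is simple to sketch — assuming a convention (purely illustrative) where each route has a sibling `.json` data endpoint:

```javascript
// Kick off a route's data fetch on the first sign of intent
// (mouseenter/touchstart), at most once per route.
const requested = new Set();

function preload(href, fetcher) {
  if (requested.has(href)) return false; // already in flight or done
  requested.add(href);
  fetcher(`${href}.json`); // assumed data endpoint
  return true;
}

// In the browser, the wiring might look like:
// for (const a of document.querySelectorAll('a')) {
//   a.addEventListener('mouseenter', () => preload(a.href, fetch));
//   a.addEventListener('touchstart', () => preload(a.href, fetch));
// }
```

Worst case, the fetch was wasted; best case, the response is already in the cache by the time the click lands.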

Beyond that, vanilla static sites are not an ambitious enough goal. Take transitions for example. Web developers are currently trapped in a mindset of discrete pages with jarring transitions — click a link, see the entire page get replaced whether through client-side routing or a page reload — while native app developers are thinking on another level:

This is what I've had in mind for the web with React Router. We say these kinds of animations are "good for phones but not desktop".

My iPad pro is as big as my laptop and these apps are shopping/content (most of the web).

These transitions are such a great UX.

4:06 PM - 22 Oct 2019

It will take more than technological advancement to get the web there; it will take a cultural shift as well. But we certainly can't get there if we abandon our current trajectory. Which is exactly what Tom seems to be suggesting.

I'm not aware of any other platform where you're expected to write the logic for your initial render using a different set of technologies than the logic for subsequent interactions. The very idea sounds daft. But on the web, with its unique history, that was the norm for many years — we'd generate some HTML with PHP or Rails or whatever, and then 'sprinkle some jQuery' on it.

With the advent of Node, that changed. The fact that we can do server-side rendering and communicate with databases and what-have-you using a language native to the web is a wonderful development.

There are problems with this model. Tom identifies some of them. Another major issue he doesn't discuss is that the server-rendered SPA model typically 'hydrates' the entire initial page in a way that requires you to duplicate a ton of data — once in the HTML, once in the JSON blob that's passed to the client version of the app to produce the exact same result — and can block the main thread just as the user is starting to interact with the app.
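Here's the duplication in miniature (a deliberately naive sketch): the same data travels over the wire twice, once rendered into the markup and once as a blob for the client to hydrate from.

```javascript
// Server-rendered page: the data appears in the HTML *and* in a
// JSON blob, so the client app can reproduce the exact same result.
function renderPage(data) {
  return [
    `<h1>${data.title}</h1>`, // the data, rendered
    `<script>window.__DATA__ = ${JSON.stringify(data)}</script>`, // the data, again
  ].join('\n');
}
```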

But we can fix those problems. Next is doing amazing work around (for example) mixing static and dynamic pages within a single app, so you get the benefits of the purely static model without finding yourself constrained by it. Marko does intelligent component-level hydration, something I expect other frameworks to adopt. Sapper, the companion framework to Svelte, has a stated goal of eventually not sending any JS other than the (tiny) router itself for pages that don't require it.

The future I want — the future I see — is one with tooling that's accessible to the greatest number of people (including designers), that can intelligently move work between server and client as appropriate, that lets us build experiences that compete with native on UX (yes, even for blogs!), and where upgrading part of a site to 'interactive' or from 'static' to 'dynamic' doesn't involve communication across disparate teams using different technologies. We can only get there by committing to the paradigm Tom critiques — the JavaScript-ish component framework server-rendered SPA. (Better names welcomed.)

The modern web has flaws, and we should talk about them. But let's not give up on it.

Honestly, my biggest issue with the current state of the web is how complex the tools and build process have become.

I miss the days when you wrote some HTML, added a bit of JavaScript to the page, and you were done. You didn't have to worry about setting up a complete build and bundling process to get everything working. You spent your time on actually developing your app, which in my opinion was better for the users. There were fewer bugs. We actually spent more time making sure the application worked rock solid.

Nowadays, to get anything cool ready for production you have to spend needless time on configuration. Did you do this? Did you configure it to do that? Oh, you can't do that unless you eject from that build tool and use this build tool. That's where the current state of the web is failing.

Sure we have taken steps forward in some areas. But we have to be honest with ourselves. We took major steps back in other areas of the web. Sometimes I wonder if we took too many steps back.

I don't get it. Nobody's stopping you from firing up an HTML doc in Notepad and FTPing the result to a web server, if that's all you need. But other people have more complex needs (in the "good ole days" there were no smart phones, to mention just one thing) and therefore we need more complex tools. Why do you want to force me to party like it's 1999?

My issue is with all the moving parts of a modern web application. Fighting over whether I can use certain language features. Oh, I can't — now I have to bother with setting up Babel or something to do it properly, or even loading polyfills (cool, just added another dependency the browser has to load before the application can be used).

That still hasn't covered bundling everything. What's the correct way to bundle this? Well, crap, now I have to go and correctly configure webpack. Unless you're one of the very few experts on that, it will take time to figure out how to get it right. OK, so I think it bundles right — now how should I lazy load this code for the user? What should be lazy loaded? What should I use to set it up?

That's just the beginning. I could go on and on about the other complex moving parts with a modern web application.

I am not asking anyone to make applications like it is 1999. All I'm asking for is a more modern and simpler process that is a STANDARD.

Maybe it is because I come from a compiled language background, where I have to worry about that one single binary. I only have to worry whether the compiler supports the language version I'm using, and I can just pass a single flag to the compiler for the optimization level I want.

Yes I will be the first to say that in some areas of modern web application development we have taken steps forward. I just wonder if we have taken steps back in some areas to take those steps forward.

All I'm asking for is a more modern and simpler process that is a STANDARD ...
Maybe it is because I come from a compiled language background ...

Well, I am sorry, but web development is a completely different kettle of fish. We download all the parts that make up an application asynchronously, on a wide variety of devices with different specs and rendering capabilities — all things which the app, once downloaded, needs to adapt to. We have progressive rendering and respond to all sorts of sensors. All while ensuring boot-up time for the app is in the milliseconds range, and security both for you, who download the code, and for the server.

Your expectations are simply unrealistic, sorry.

Also, we always had polyfills, even "back in the day" — except that back then everyone had to bake their own. In fact, we had browser wars and appalling browsers (IE5 for the Mac, anyone??!?) and it was a real pain in the neck.

Great frameworks hide complexity and enable developers to create new things quickly. For example, look at the impact that Ruby on Rails had when it was introduced. What took hours to set up and code could be boiled down into a few commands.
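From memory, the canonical demo went something like this (the exact commands and flags varied between Rails versions, so treat these as illustrative):

```shell
# Create a new app, scaffold a blog post resource, migrate, serve.
rails new blog
cd blog
rails generate scaffold Post title:string body:text
rake db:migrate   # `rails db:migrate` in later versions
rails server
```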

The problem with "modern" web development is that the frameworks introduce lots of complexity and make developing web applications a time intensive endeavor. I'll equate this to the pre-Rails world.

If I could have the framework of my dreams, it would allow me to focus on writing excellent HTML and CSS, and the framework would fade into the background. It would exist, but hide its complexity. It would allow me to focus on HTML with a sprinkle of JavaScript. Hooks would be available to tie into the framework to allow for complex functionality, but often-repeated functionality would be baked in.

This is what Rails did when it was introduced to the world. It hid the complexity of databases, queries, MVC frameworks, etc. We need a similar approach in client-side frameworks.

No, we don't... if the One True Omniscient God Framework approach that Rails developers are constantly pining for were really the best one, then Rails would still be ruling. It isn't; it peaked in 2012 or thereabouts, knocked off its perch by Node, among other technologies. It's a single point of failure, and it cannot adapt quickly enough to the bewilderingly fast-changing environment in which we operate.

Alternatively, you can pick one of the two frameworks that follow the model you advocate: Ember (actually inspired by Rails) or Angular (more of a C# flavour), both of which strive to be a nanny that removes, as much as possible, the need to (god forbid!) make your own decisions or (the horror!) learn new tools.

If I could have the framework of my dreams ... It would allow me to focus on HTML with a sprinkle of JavaScript.

You may want to consider stimulusjs.org/, which comes from the Rails universe (it was created by none other than David Heinemeier Hansson, the Rails superstar) and does exactly what you want.

Can confirm that the Stimulus framework works great for that exact use case. But if you're used to the more common frameworks, you might quickly miss the declarative "give me data, I give you DOM" approach they provide.

I haven't tried it myself yet, but Alpine.js gained some steam lately and might be a good middle ground between Stimulus and the "top dogs".

The One True Omniscient God Framework approach that Rails developers are constantly pining for was really the best one, then Rails would be ruling. It isn't; it peaked in 2012 or thereabouts, knocked off its perch by Node

Node is a runtime; Rails is a framework. Rails was knocked off its perch when the world decided that web applications had to operate like mobile applications and "JavaScript everywhere" became a dream.

In my opinion, JavaScript/Node hasn't lived up to the "one easy language for everything web" promise. Frameworks such as React come with tooling that makes compiling from "modern JavaScript" to something the browser can use transparent to the developer, but writing an Express API requires "old JavaScript" unless you want to go through the hassle of setting up Babel. Maybe some people like to spend half the day setting up their environment, or writing half their project in "new JavaScript" and half in "old JavaScript"; I do not.

In reality, we've increased the development time, cost and complexity of web applications exponentially in trying to eliminate the page refresh. What could be accomplished in a few lines of code now takes hours. As a person who has launched products within a startup organization, I know speed to market is incredibly important in order to start making money and hold off competition. I've never met a React or Angular project that hasn't gone over budget or missed its deadline.

The point is that these frameworks add a complexity that isn't worth the cost in many cases.

writing an Express API requires "old javascript" unless you want to go through the hassle of setting Babel up

Not sure what you mean here. You can use very recent JS features in Node. We have async/await, async iteration, ES module imports, and more now. I've never felt the need to set up Babel in an Express project.
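For instance, this runs in a plain `node` process with no Babel in sight (version numbers are approximate):

```javascript
// Modern syntax natively in Node: async/await, optional chaining,
// nullish coalescing — no transpilation step required.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function getName(user) {
  await delay(1); // async/await: Node 7.6+
  return user?.profile?.name ?? 'anonymous'; // ?. and ??: Node 14+
}

getName({ profile: { name: 'Ada' } }).then(console.log); // prints "Ada"
```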

Bingo — the complexity today is a must, at least because of growing smartphone usage. The simple reason why "Native Apps didn't kill the web even with all their superior capabilities" (Ryan) is simply UX. As internet usage often starts with searching for something one has in mind, that all-app thinking interrupts exactly this flow, leaving at least 30% of all traffic untouched. For what? For 5-Minute Crafts skyrocketing their YouTube traffic? Although not everything needs server-side complexity — and APIs and webhooks are easily integrated using browser capabilities that didn't exist in the good old days — the architecture underneath is not the real issue.

I'm pretty solidly on the opposite side of this debate, but I don't want to trash SPAs outright either. Every approach comes with tradeoffs, and which ones are worth it or not will vary greatly between projects, teams, and the backgrounds of various people.

I do want to say one thing though. The fact that the web is, at a fundamental level, nothing more than a collection of hypertext documents is in my opinion not a bug but a feature. And a really, really, really important feature at that. The decades-long comparison between web and native apps often glosses over this distinction, even making a de-facto claim that native UX is always better than "web document"-based UX.

I want to challenge that assertion. I think hypertext documents are actually superior to any native app for a good number of use cases and contexts. Take this exact website within which I am typing a comment. There is absolutely no way I am ever going to go to an App Store, download a Dev.to app, run that, navigate to this article, click a button, and type a comment. Nothing about a "slick" native UX would make that desirable. Instead someone posted a link in a chat somewhere, I clicked it, instantly saw an article, read it, clicked a button to write a comment, and here we are. Everything about this process is perfectly fine, and virtually none of it actually requires anything particularly sophisticated. A giant wad of Javascript, or slick animations, or analytics-based determination of other articles to read, etc, etc. is simply not required at all. None of that would make click link -> read article -> write comment any better. In fact, in some ways it would make that worse.

Now obviously we're talking about a blog post here. But extrapolate that to an event listing, or a product to buy, or a video to watch, or an educational library. For a great many content-driven contexts and purposes, the web long has and still does have a major leg up on native apps for very unique and hard-to-dismiss reasons. And if that weren't the case…well, we'd all be reading native iPad app-based newspapers right now. :)

I don't think it's necessarily true that only document navigation can provide a fast and install-less experience. I think from the beginning, distributing binaries and running them in browsers was not believed to be secure and there was rightfully no trust that different platforms would compile/interpret these complicated instructions the same way, so distributing documents was an easier path and it gained momentum.

Now, however, you can expect browsers to support WASM, which is faster, lighter, and more securely packaged; really, I think the only reason applications are still distributed as documents with scripts attached is the massive difference in momentum. I think we will slowly see the abandonment of the web as we know it as more applications are distributed as binaries (not necessarily complete ones sent at once — they can be requested incrementally, and can render requested content, text and images, too) and the boundary between browser and OS blurs. I can't wait :)

Just because you "can" send essentially a <body></body> shell along with a WASM app doesn't necessarily mean you should. For things like games or other interactive multimedia tools, sure, it's nifty. Those probably wouldn't even use HTML but write directly to pixel buffers. But that's not a "website" in any meaningful sense of the word, so I'm not sure what your conclusion means relative to the original article…?

Well, a server theoretically could send different WASM depending on the request path and request more content as "links" are pressed. I'm not sure what you can call a website, but the benefits of a website can be reaped without actually making a document now. I can imagine that soon, if it doesn't already exist, server-side rendering of HTML could instead output WASM + WebGL and speed up clients further. The point being that these "native" apps don't necessarily need installing and can still have the benefits of web apps and native apps combined :)

Some interesting thoughts, in the original article and your response. Certainly good for generating discussion :)

It will take more than technological advancement to get the web there; it will take a cultural shift as well. But we certainly can't get there if we abandon our current trajectory.

The problem as I see it is that the web -- in terms of client-side executed markup and code (HTML/CSS/JavaScript) -- just wasn't designed with these kinds of interactions in mind (the interactions that Ryan Florence highlighted). It's the wrong foundation for it.

I don't think that "getting there" should be through bending original web technologies, but rather with something else entirely -- perhaps WebAssembly, or a return to actual native applications much like we've come to depend on on our tablets and phones. We already can do nice animations like that iPad demo with native desktop applications. There are many advantages to websites over native desktop apps, the standout one being that they don't require someone to install an application on their machine. Maybe WebAssembly can give us a compromise -- desktop apps built with technology designed for that job, but available in a way that doesn't require any local installation. Then we can use tools that were purpose-built for those kinds of tasks, instead of trying to wrangle the foundations of the web to fit something they were never built for.

When I think back to how we used to build websites back in the day, and how we build sites with technologies in tools like React now, I'm honestly not sure that we've gained anything of great enough value relative to the simplicity we've lost. In short, I'd be happy to abandon our current trajectory and put our hope in alternative solutions for those situations where what we really need is a native application. Leave the web for what it's good at.

That's interesting. I mean, I was there too, and the extent of what I can build with a modern library far exceeds what I could reasonably do before. And the reason for that is JavaScript. When I was building sites in the late 90s, so many things weren't possible, and you had no choice but to pay these costs for interactivity. Sure, I view-sourced a few cool tricks — image maps, etc. — but with my table-based layouts and a little JavaScript I could do very little. Part of it was the available DOM APIs, part of it my lack of understanding of the language, but I think we look at the past through rose-colored glasses.

It's more interesting to me that native platforms have borrowed concepts — declarative views, etc. — from libraries like React. I'm not saying React invented this stuff, but the trend would suggest the contrary. These trends could be mistakes, but popularity has as much stake in the course of events as innate value.

I think this comes down to the hope of a Silver Bullet. It doesn't exist. Instead we have the momentum of a ubiquitous platform that is constantly aiming to improve. It's not only that the other approaches have already lost; we're getting to a point where any new approach will have to at least acknowledge the current web. At this point a competitor isn't going to change the tide — only something from within will. It's more likely that the web addresses those fundamental shortcomings than that people move to a different platform. Native Apps didn't kill the web even with all their superior capabilities. And on a similar note, it is going to take monumental change for this to fundamentally shift. Even with React's reach, it wasn't alone in a movement that started years before. This is bigger than any particular library or framework.

There is absolutely no doubt that you can do incredible things with modern web, and the power unlocked by having a programming language in the browser has been amazing. However, when I think about a great deal of the websites I interact with, and ones I build, there would not be much lost if they significantly simplified the tech stack they work with to be closer to the core technologies of the web (including a splash of JavaScript).

What I'm trying to convey is that I think our move to make everything a web app has been a mistake, for multiple reasons. You are absolutely right to say that this tide is going to be very hard (perhaps impossible) to turn, and I absolutely agree there is no silver bullet (I've looked), but that doesn't mean we haven't taken a worse path and shouldn't lament for what could have been.

The advantages of web apps are compelling for developers -- they are easy to distribute, update, control, make cross platform, when compared to native apps. It is not hard to understand why and how we have ended up where we are. I just wish we were somewhere else -- but it would take a lot of work to create that 'silver bullet'.

That whole app environment was basically triggered by one need, and one need only: bandwidth. No idea if anyone remembers when they launched the WAP protocol, making the internet available on phones, which failed for two reasons. First, the providers made it easy to access only their own portals — the AOL strategy from the beginning of the web — leaving the rest of the market untouched; secondly, the site performance was ridiculous. Apple stepped in, put a bunch of intelligence into the app, dramatically reducing load time, did not convert so much traffic to its own apps, and let developers do what they do best. By 2008 more than 50% of all mobile internet traffic was iPhone traffic.

Bandwidth is no longer the breaking point.

Google has for years campaigned about the untapped goldmine of local traffic — experience flows that often start with a search for getting something done in time, monetising micro-moments — as more than half of all mobile searches have local intent. I posted an example of how getting something done cuts across applications from the perspective of a user. With an app-only scheme this becomes impossible. Sure, it is understandable that nobody pays so much attention. Most search engines are driving a change and have heavily invested in AI to optimise traffic at the client's front end. But even this is not enough when it comes to intents other than something very individualised. I wouldn't care so much about the user, but more about what the user wants to achieve.

The problem as I see it is that the web -- in terms of client-side executed markup and code (HTML/CSS/JavaScript) just wasn't designed with these kinds of interactions in mind (the interactions that Ryan Florence highlighted). It's the wrong foundation for it.

CSS has media queries and all sorts of crazy and wonderful stuff. HTML has tags for responsive design. JS has (in the browser) access to APIs like mutation observers, orientation, and speech recognition. It's TOTALLY built for that. Enough of the "get off my lawn" negativity.

There's a lot to agree with in both of these takes. However, one criticism of this one (and I realise we're getting into hugely subjective territory) is the assumption that the goal should be to 'build experiences that compete with native on UX'. It strikes me that the web never was and never will be anything like any other platform for building applications. Whether or not someone wants to turn it into that is their prerogative and any success they have probably will help others build better things. But it ultimately depends on what you want to build and why, and emulating a native application might not always be the best option. I think the key is to allow ourselves to question the assumptions we all hold which seems to be exactly what both of you are doing with different approaches, reaching different conclusions. So I would say; keep doing that and keep doing your thing, but just don't expect everyone to agree and do things the same way. The world is a better place when we allow and encourage people not to homogenise in their thinking.

It strikes me that the web never was and never will be anything like any other platform for building applications. Whether or not someone wants to turn it into that is their prerogative and any success they have probably will help others build better things. But it ultimately depends on what you want to build and why, and emulating a native application might not always be the best option.

Looking at the criticism made about some recent redesigns (reddit, facebook, etc) it seems that the main complaint (from a user point of view, at least) is how slow it feels compared to the previous version or to a "simple" site with HTML, CSS, and a little bit of javascript.

I haven't checked it recently, but when the new reddit UI was released, scrolling was terrible on my $2500 laptop. It felt slow. Everything was sluggish. It also downloaded a few MB of JavaScript on pages that essentially had a thumbnail, a title, and some comments (not everyone lives in a big city or has a super fast internet connection!). When the page had loaded, I saw even more scrolling lag because they lazy-load some content. I mean, it's hard to like this "modern web".

As a user, I don't care about how things work behind the scenes. I don't mind some javascript on a page, after all we don't need a page reload just to preview a comment or "like" something. You can even use javascript for everything, I don't mind. But it needs to be fast and it needs to work well... and right now some "modern" sites provide a worse experience than before.

Good response — opinions are always worth having, as long as you are prepared to discuss/defend/change them! I had a few thoughts on this topic last year, in response to your good self and web components, that I think are worth referring back to: dev.to/phlash909/comment/cghl

In essence, the DOM and page model of HTML is a constraint around the outside of applications that might be better inverted: I'd like to see browsers that support HTML/DOM content if required, but do not constrain devs to always go through it. Let's stop thinking of browsers as HTML rendering engines, and start thinking of them as pre-installed runtimes with excellent web-based application management!

We'll want to retain the benefits of dynamic software running on the client (native UX, easy architecture changes, no user-install step, etc.), while leveraging the effort that goes into making that a portable & safe thing to do (common APIs, sandboxes, browsers as the standard runtime for client-side applications). We are nearly there with PWAs, perhaps? WASM then expands the available languages for the runtime, allowing common client/server languages and development processes to ease developer adoption. As and when a document needs rendering, HTML/DOM/CSS is there to perform its proper function; however, many apps may be better off with a UX library (eg: SDL) or widget set (eg: wxWidgets) atop the runtime bindings.

Tom identified the main weakness of modern web development correctly: it's based on JavaScript. He points out the sluggishness and accessibility problems that stem from that attitude, and he's absolutely right to do so. Static pages (as a paradigm, not as Gatsby in particular) are maybe not the cure for that, but they are definitely a step in the right direction.
I just rewrote my website with Hugo, specifically without a major JS framework, and settled for a vanilla approach rooted in a progressive enhancement mindset. I didn't even run into the problems that are inherent to React, Vue and Angular.
Sure, you can serve HTML with SSR, but you still have to wait for the bundle in order to have a functional site, plus a truckload of SSR-related complexity. And that is utter rubbish. JS should never be a requirement; it needs to be put in its place as the cherry on top of the cake.
Progressive enhancement needs not only to be a major approach to web development again, it needs to be our default mode of operation.
This is also the core of Tom's article. And I agree with him.

That said, nothing stops you from adding features like nice transitions. But please be sure not to break standard web features while doing so.
Transitions need to add to hyperlinks, not replace them.
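A minimal sketch of that last principle in vanilla JS (all names here are made up; this is the general idea, not any particular library): intercept only the link clicks you can safely handle, and let everything else fall through to normal browser navigation, so the transition enhances the hyperlink instead of replacing it.

```javascript
// Pure helper: decide whether a click on an anchor is safe to intercept.
// Modified clicks (new tab, download), non-left clicks, targeted links and
// cross-origin links are all left to the browser's default behavior.
function shouldIntercept(event, link, origin) {
  if (event.defaultPrevented) return false;
  if (event.button !== 0) return false; // left clicks only
  if (event.metaKey || event.ctrlKey || event.shiftKey || event.altKey) return false;
  if (link.target && link.target !== '_self') return false;
  return new URL(link.href, origin).origin === origin; // same-origin only
}

// Browser wiring: fade out, then perform a normal navigation. If this
// script never loads, the <a href> keeps working exactly as before.
if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const link = event.target.closest && event.target.closest('a[href]');
    if (!link || !shouldIntercept(event, link, location.origin)) return;
    event.preventDefault();
    document.body.style.transition = 'opacity 150ms';
    document.body.style.opacity = '0';
    setTimeout(() => { location.href = link.href; }, 150);
  });
}
```

Because the enhancement only ever calls `preventDefault()` on clicks it knows it can handle, middle-click, Ctrl-click and external links keep their standard behavior.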

I feel like you didn't quite read the article closely enough ;) Of course progressive enhancement needs to be the default — that's why there's so much focus on SSR in all major meta-frameworks.

Sure, you can serve HTML with SSR, but you still have to wait for the bundle in order to have a functional site

This isn't true! Take sapper.svelte.dev, a site built with Sapper and baked out as static pages. It works just fine without JavaScript; you just don't get client-side routing. It's a small site, but the same thing applies to larger ones like Gatsby's homepage — no JS? No problem.

It certainly is true for standard implementations of React-, Vue- and Angular-based sites. There may be a focus on SSR, but it still isn't close enough to deserve to be the default. It needs to come without extra complexity. Only then will we be ready to fully embrace those solutions.

And this is my point. You need to take one step after the other. It was a mistake to build everything with JavaScript first and only check afterwards whether it still works without it. The damage to the web has been done.

I haven't taken a look at Sapper/Svelte. Though, in comparison, those are still a niche product.

JS certainly caused a lot of damage on many websites. But now, finally, we're getting to the point of having frameworks which can, by default, offer client-side routing with SSR and hydration without shipping large bundles of JS. Up to this point, frontend devs should have been more conservative and thought twice about whether to really go all-in on an SPA. But now's the time when it's really starting to be a viable option. Next, Nuxt, Marko and Sapper are finally technologies highly worth pursuing, even for content-based websites.

The future I want — the future I see — is one with tooling that's accessible to the greatest number of people (including designers), that can intelligently move work between server and client as appropriate, that lets us build experiences that compete with native on UX (yes, even for blogs!), and where upgrading part of a site to 'interactive' or from 'static' to 'dynamic' doesn't involve communication across disparate teams using different technologies. We can only get there by committing to the paradigm Tom critiques — the JavaScript-ish component framework server-rendered SPA. (Better names welcomed.)

I like this takeaway because it's heavy on acknowledging the process behind the technology, which is often disparate and confusing. Even the most well-intentioned and organized dev team is going to get out of whack if succeeding with the tooling is experts-only. We need to be able to achieve great performance, UX and accessibility even under conditions where designers do some work, devs pop in hotfixes here and there, old devs leave with some of the knowledge, priorities change, etc.

The HashiCorp approach is to focus on the end goal and workflow, rather than the underlying technologies. Software and hardware will evolve and improve, and it is our goal to make adoption of new tooling simple, while still providing the most streamlined user experience possible. Product design starts with an envisioned workflow to achieve a set goal. We then identify existing tools that simplify the workflow. If a sufficient tool does not exist, we step in to build it. This leads to a fundamentally technology-agnostic view — we will use the best technology available to solve the problem. As technologies evolve and better tooling emerges, the ideal workflow is just updated to leverage those technologies. Technologies change, end goals stay the same.

We can learn this lesson from biology: the best way to avoid excessive optimization is ultimate flexibility, keeping the option on the table to throw out the code (or well-abstracted parts of it) and rewrite that code or product from scratch.

It's a little-known "dark" pattern of software design, from the shadows of Agile, called "Sacrificial Architecture."

it's widely understood that if you want 60fps animation, you will likely have to bypass the React update cycle and do things in a more imperative fashion (indeed, this is what libraries like react-spring do)

This is probably an aside from the article, which I largely agree with, but I think it's an odd point to try to make. It's true of any component library that has overhead above imperative JS. But there's nothing stopping you from using CSS in React, which is all Svelte does anyway.
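For what it's worth, the CSS point generalizes beyond React. A sketch of the idea (the class names and stylesheet here are made up): do a single DOM write and let the browser run the transition on the compositor, instead of driving style values through the component update cycle at 60fps.

```javascript
// Assumed stylesheet (not part of this file):
//   .box { transition: transform 300ms ease; }
//   .box.open { transform: translateX(200px); }

// One DOM write toggles the class; the browser animates the transform
// itself, with no per-frame work in any framework's update cycle.
function toggleOpen(el) {
  el.classList.toggle('open');
  return el.classList.contains('open');
}

// In React the same idea applies: set the class once (via state or a ref)
// rather than calling setState on every animation frame.
```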

I'm not so sure about that... IMO, so many single-page apps get this wrong. The native browser loading indicator is still a lot better than many SPAs, and unfortunately it's something that can't be triggered from JS (apart from ugly hacks like infinitely-loading iframes)

With the advent of Node, that changed. The fact that we can do server-side rendering and communicate with databases and what-have-you using a language native to the web is a wonderful development.

Server-side JS was a thing looooong before Node. I remember using Microsoft's version of JavaScript ("JScript") server-side via Classic ASP in the early 2000s. The issue back then was that JavaScript just wasn't quite that popular yet. ES5 wasn't around yet (it was all ES3), and there was a lack of good third-party libraries. But still, it was absolutely in use long before Node even existed.

What Node did do was add cross-platform support, introduce a module system (CommonJS) as standard, and make it easy to obtain third-party packages via npm, taking ideas from similar systems like Perl's CPAN.

So one thing I wondered after reading Tom's article, and now yours, is how you feel about modern UI being so tied to Node or JavaScript runtimes on the backend specifically. For instance, if you were running something in Go or Phoenix, you could dynamically render HTML on the backend and serve it up quite a bit faster than the current SSR environments based on Next, Nuxt or Sapper. Essentially somewhat the way Stimulus JS works: you send over static HTML, rendered anywhere by any type of server, and the frontend just hydrates it and builds components from it.

Universal JavaScript is a need only for juniors out of bootcamp. And the cost is great. Just learn about parsers and you'll never look at a framework or template library again. Good on you for jumping on how terrible Gatsby is.

The primary thing that React brought to the table over the technologies before it was declarative UI: I get to think about what I want the HTML to be, not how to make it so. Almost everything else mentioned in the article has to do with the overhead of the implementation (of React, or of the web itself).

I think that the ideas behind Svelte hold promise for a more minimal implementation of declarative UI. But in either case, our tooling for web dev is still a gigantic spaghetti mess.

It's funny, because you never hear developers in other industries complain that their tools do too much, or that they aim too high. You will probably never convince a game developer to go back to hand-writing the engine for every one of their projects. Some developers think "we shouldn't compete with native apps", but the reality is that they don't get to decide; stakeholders do. Users want native-like performance, features and interactions, and given bundle sizes and the need to support older browsers, the modern web is doing amazingly well (simple native apps need 100x the space on your device compared to bundles, even Angular's). Yeah, it's hard. I like it that way. I can't stand huge scripts reinventing the wheel for every single project that end up never being properly maintained and soon broken. The web is the most democratic platform there is and ever has been, and surrendering it so we can all ship apps for Windows and Android sucks balls.

This article is perfectly reasonable …except for that first paragraph, which is honestly one of the most disjointed, off-kilter, and oddly phrased things I've read. The tone of it is in complete contrast to what follows, like it was written by a different person.

It's like someone is about to serve you a nice meal but not before saying "You'll probably hate this, what with you being the kind of person you are."

But, like I said, if I can purge that antagonistic first paragraph from my mind, I can take on board the points in the article. I just wish I didn't have to do that kind of retroactive adjustment.

FYI, for those like me with motion sickness, those "one page apps" tend to be less desirable than old-school "new page per content". Usually one full-page transition is less stressful than all the constant transitions single-page apps like to do.

it's widely understood that if you want 60fps animation, you will likely have to bypass the React update cycle and do things in a more imperative fashion (indeed, this is what libraries like react-spring do)

I don't see why bypassing the update cycle is bad, though. It might not be idiomatic, but for library authors I think it's OK. And it also depends on where we expect those 60fps. If your daily routine as a React developer is animating 10,000-polygon balls, then yeah, you might be in trouble.

I think it's more about using the right tool for the job. Some websites can get away with being mostly static, with just a little bit of JS. Let's say you have a blog: some statically rendered HTML pages should be fine, with some JS for the dynamic parts, and then add in some instant.page functionality to preload a page. Maybe you could even add some transitions if it's preloaded, I have no idea.

Then for a business site maybe something like intercoolerjs could work.

Then when you want something that is fairly static but has some dynamic parts then you could use something like Svelte or some other lightweight framework.
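The instant.page-style preloading mentioned above fits in a few lines of vanilla JS (a sketch of the general idea, not instant.page's actual implementation): on hover over a same-origin link, inject a `<link rel="prefetch">` so the next document is likely in the HTTP cache before the click lands.

```javascript
const prefetched = new Set();

// Pure helper: should this href be prefetched from the given origin?
function shouldPrefetch(href, origin, seen) {
  let url;
  try { url = new URL(href, origin); } catch (e) { return false; }
  if (url.origin !== origin) return false; // same-origin pages only
  if (seen.has(url.href)) return false;    // never prefetch the same URL twice
  return true;
}

// Browser wiring: listen for hover and drop in the prefetch hint.
if (typeof document !== 'undefined') {
  document.addEventListener('mouseover', (event) => {
    const link = event.target.closest && event.target.closest('a[href]');
    if (!link || !shouldPrefetch(link.href, location.origin, prefetched)) return;
    prefetched.add(new URL(link.href, location.origin).href);
    const hint = document.createElement('link');
    hint.rel = 'prefetch';
    hint.href = link.href;
    document.head.appendChild(hint);
  });
}
```

There's usually a couple of hundred milliseconds between hovering a link and clicking it, which is often enough for the prefetch to finish and make the next navigation feel instant.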

I'm more of a backend dev, but I like to follow the frontend world and experiment on the side. I'm building an offline-first website and using Stage0 for the more dynamic side of things. I'm using mostly vanilla JS and small libs for everything else, and it really hasn't been that bad of an experience.

The one thing that has been annoying with the way I'm architecting my site is that my templates are separate from my data. One of my goals was to keep the build steps minimal, which has led me this way. But with something like Sinuous/Svelte/Sapper I could keep a small runtime and get a better developer experience. So for something more oriented toward a team development environment, I would probably lean towards those. But for my personal project it's fun learning all the native APIs and how to keep things lean on my own.

During the Second World War, the Germans pioneered jet engines, but those aircraft lagged behind the established propeller-driven aircraft because jet engines were at the bottom of their performance curve, while the established propeller designs were climbing towards their plateau. Now we live in a world dominated by jet-powered aircraft.

We're seeing something similar here, and in time hand-built HTML websites and such should shift into the domain of training and hobbyist pursuits, while JavaScript frameworks become the established state of things.

The Lighthouse result for Gatsby is unfortunate and a bit unfair. Even Lighthouse points out at the bottom of the image that values may vary. I think the performance tab of Chrome shows a better picture of what happens during page load.

Running server-side code is expensive compared to just serving files, but I am sure Amazon and Google love to sell you the CPU cycles.

Modern browsers auto-update and are fairly compliant with HTML5. I don't write extra code to support the wonky Apple Safari or Opera, so I don't use polyfills. Basically, if your IE6 doesn't work, that is not my problem.

Don't resort to npm if you need to know if a variable is an array. In other words, learn to code or at least learn to search for answers to your specific code problem.
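That point is easy to demonstrate: the platform already covers most of the micro-utility packages people install, left-pad being the infamous example.

```javascript
// Built-ins that replace common micro-dependencies:
const isArr  = Array.isArray([1, 2, 3]);        // instead of an "is array" package → true
const padded = '7'.padStart(3, '0');            // instead of left-pad → '007'
const unique = [...new Set([1, 1, 2, 3, 3])];   // instead of a dedupe helper → [1, 2, 3]
const flat   = [[1, 2], [3]].flat();            // instead of a flatten helper → [1, 2, 3]
```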

Desktop computers and phones are very capable machines, able to generate and render their own HTML. For dynamic interaction, nothing is faster than a lean, purpose-built piece of code running on the client.

Stop using bullshit fantasy languages such as CoffeeScript or TypeScript. Loose typing is a feature of JavaScript and works really well, provided you write your own code and don't rely on the thousands of files pulled from npm.

The above will get rid of packages, most build-script tasks and other black-box bloat that breaks the moment some script kiddie gets bored and deletes his one-liner idiot script.

Now that you're down from several hundred dependencies to just ten files or fewer, you'll find deferred loading of the JavaScript perfectly manageable.

Client side UI libraries should run client side, generate their own HTML and bring along their own default css without dependencies. They should not force you into using any npm shit, complex build processes or use of modules.

In the old days of win32 API coding the UI classes were provided by the OS. Meticulously crafted to provide consistent visual feedback, keyboard support, indication for shortcut keys and so on.

I am working on just such a UI library, about 1000 bytes in size, capable of producing dialogs, pop-up menus, combo selectors, and drop-down and slide-in menus that look as modern as Google's, without the need for any additional resources.

I have tried many other so-called libraries, but there is always something missing: no arrow-key support, or the pop-up won't avoid window boundaries, or it's absolute bloat, like Google's Material thingy contraption.

The current state of the web is littered with opinionated framework developers competing with each other for market and mind share. The end result is a fragmented industry. Instead of converging on standards, we're diverging. This is my number one complaint.

Using opinionated frameworks does nothing for Developer Experience (DX) if it means having to learn a DSL and adopt a completely new paradigm in order to use it. This is one of the biggest misconceptions about the modern web right now. Frameworks !== DX.

I've built traditional MPAs, hybrid MPA/SPAs, and now I'm working on a full SPA. The reason I chose to implement my latest project as an SPA was driven purely by requirements: it needed real-time, integrated video conferencing while allowing the user to navigate and use the rest of the app. That's it. I didn't choose to build an SPA because it was the hot and sexy thing to do. I went with pure vanilla JS and CSS on the front end because of the ongoing framework wars that don't seem to have an end. In hindsight, I made the right decision.