Rendering and interaction have become a lot more consistent across browsers in recent years. They’re still not perfectly uniform, however. Add to the remaining quirks the variables of different screen sizes, language preferences, and plain human error, and there are plenty of small things that can trip up a developer.

Images are critical. Whether it is marketing banners, product images or logos, it is impossible to imagine a website without images. Sadly, though, images are often heavy files, making them the single biggest contributor to page bloat. According to the HTTP Archive’s State of Images report, the median page size on desktop is 1,511 KB, and images account for nearly 45% (650 KB) of that total.

That said, it’s not like we can simply do away with images. They’re too important to the overall user experience. Instead, we need to make our web pages load really fast with them. In this guide, we will cover all of the ins and outs of

Scalability is a hot topic in tech, and every programming language or framework provides its own way of handling high loads of traffic.

Today, we’re going to see a simple, straightforward example of Node.js clustering, a programming technique that helps you parallelize your code across CPU cores and speed up performance.

“A single instance of Node.js runs in a single thread. To take advantage of multi-core systems, the user will sometimes want to launch a cluster of Node.js processes to handle the load.” (Node.js Documentation)

We’re gonna create a simple web server using Koa, which is really similar to

As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know how inline elements like <span> differ from block-level ones like <div>. This will save you a huge amount of headache when fiddling with your CSS code.

Our latest release includes an important performance bugfix for React.lazy. Although there are no API changes, we’re releasing it as a minor instead of a patch.

Why Is This Bugfix a Minor Instead of a Patch?

React follows semantic versioning. Typically, this means that we use patch versions for bugfixes, and minors for new (non-breaking) features. However, we reserve the option to release minor versions even if they do not include new features. The motivation is to reserve patches for changes that have a very low chance of breaking. Patches are the most important type of release because they sometimes contain critical bugfixes. That means patch releases have a higher bar for reliability. It’s unacceptable for a patch to introduce additional bugs, because if people come to distrust patches, it compromises our ability to fix critical bugs when they arise — for example, to fix a security vulnerability.

A few years ago, I wrote about how we can use CSS to style broken images. The technique leveraged the fact that any styling applied to the ::before or ::after pseudo-elements of an <img> element will only be applied if the image doesn’t load. So, we could style those elements and they would only display if the image was broken.

Here’s an example of how I’ve styled broken images on this site:

There are pros and cons to handling broken images this way. One limitation is browser support, as this technique doesn’t work in some major browsers like Safari.

Having recently done a lot of work with service workers, it occurred to me that we could use the service worker to handle broken images in a different way. Since the service worker can tell when an image file can’t be fetched, we can handle that condition by, for example, serving a different image to the browser.
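A minimal sketch of that idea, assuming a fallback image at a hypothetical /images/fallback.svg path that has been precached during the worker’s install step:

```javascript
// sw.js: serve a fallback for images that fail to fetch.
// The fallback path is hypothetical and must be precached at install time.
const FALLBACK_IMAGE = '/images/fallback.svg';

// Decide whether a response counts as "broken" (network failure or error status).
function needsFallback(response) {
  return !response || !response.ok;
}

// Guarded so this file is also safe to load outside a worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', (event) => {
    if (event.request.destination !== 'image') return;
    event.respondWith(
      fetch(event.request)
        .then((response) =>
          needsFallback(response) ? caches.match(FALLBACK_IMAGE) : response
        )
        .catch(() => caches.match(FALLBACK_IMAGE))
    );
  });
}
```

Treating non-OK statuses (404s, 500s) as broken, not just network failures, is what catches most missing images in practice.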

This is a feeling I’ve noticed from engineers and designers I’ve worked with in the past, and it’s a sentiment that has become much more visible across the broader web community. You can hear it in Medium posts and on indie blogs, whether in conversations about CSS, web performance, or design tools.

The sentiment is that front-end development is a problem to be solved: “if we just have the right tools and frameworks, then we might never have to write another line of HTML or CSS ever again!” And oh boy what a dream that would be, right?

Well, no, actually. I certainly don’t think that front-end development is a problem at all.

In this post, we are going to be talking about the linked list data structure in the language of "thank u, next" by Ariana Grande. If you haven't watched the piece of art that is the music video for the song, please pause and do so before we begin.

Linked lists are linear collections of data that consist of nodes with data and pointers. We're going to be focusing on singly linked lists, which contain nodes that store the value of the node and a pointer to the next node. There are also other types of linked lists, like doubly linked lists and cyclical linked lists, but we'll focus on singly linked ones for now.
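Before getting into the lyrics, a bare-bones singly linked list in JavaScript might look like this (a sketch; the class and method names are my own):

```javascript
// Each node stores a value and a pointer to the next node (null at the tail).
class ListNode {
  constructor(value) {
    this.value = value;
    this.next = null;
  }
}

class LinkedList {
  constructor() {
    this.head = null;
  }

  // Append by walking to the tail: O(n) without a stored tail pointer.
  append(value) {
    const node = new ListNode(value);
    if (!this.head) {
      this.head = node;
      return;
    }
    let current = this.head;
    while (current.next) current = current.next;
    current.next = node;
  }

  // Walk the chain of next pointers, collecting each value in order.
  toArray() {
    const values = [];
    for (let node = this.head; node; node = node.next) values.push(node.value);
    return values;
  }
}

const exes = new LinkedList();
['Sean', 'Ricky', 'Pete', 'Malcolm'].forEach((name) => exes.append(name));
console.log(exes.toArray()); // → [ 'Sean', 'Ricky', 'Pete', 'Malcolm' ]
```

Each name here is a node, and each node only knows about the next one — which, fittingly, is the whole premise of the song.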

If you run a WordPress site, you have probably contemplated at some point whether or not you should implement the hot new Google AMP for mobile. We had the same dilemma here at Kinsta and ended up testing it for a while. In the end, we didn’t see good results; it ended up hurting our conversion rate on mobile devices. So today we are going to dive into how to disable Google AMP on your blog, and how to do it safely without 404 errors or harming your SEO. Simply deactivating the AMP plugin alone could end up really harming your site, so be careful. The good news is that neither of the methods mentioned below requires a WordPress developer, and both can be done in a few minutes!

Several proposals expand the existing JavaScript class syntax with new functionality. This article explains the new public class fields syntax in V8 v7.2 and Chrome 72, as well as the upcoming private class fields syntax.

Here’s a code example that creates an instance of a class named IncreasingCounter:
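The snippet itself isn’t reproduced above, but based on the article’s description, a class using the public class fields syntax might look like this (a sketch; the field name is an assumption):

```javascript
class IncreasingCounter {
  // Public class field: declared directly in the class body,
  // so no constructor is needed to initialize it.
  _count = 0;

  get value() {
    return this._count;
  }

  increment() {
    this._count++;
  }
}

const counter = new IncreasingCounter();
counter.increment();
console.log(counter.value); // → 1
```

The field declaration runs for every instance, exactly as if it had been assigned in the constructor.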

A little more info:
- a blog post from @hardmaru on how the model works: https://magenta.tensorflow.org/sketch-rnn-demo
- the Quick, Draw! dataset: https://quickdraw.withgoogle.com
- I basically changed one of the OG SketchRNN demos to consider only 1 input stroke at a time, and let you replay model predictions

If you work with JavaScript at all, you probably saw a ton of noise yesterday about a vulnerability in the event-stream npm package. Unfortunately, the actual forensic analysis of the issue is buried under 600+ comments on the GitHub issue, most of which are just people flaming about the state of npm, open source, etc. I thought that was a shame, because the vulnerability was actually exceptionally clever and technically interesting, and teaches some important lessons about maintaining security in JavaScript applications. So I decided to write an explainer detailing what happened, how the attack worked, and how the JavaScript community can better defend against similar attacks in the future.

Remote code is fetched and cached on first execution, and never updated until the code is run with the --reload flag. (So, this will still work on an airplane. See ~/.deno/src for details on the cache.)

JavaScript is now the main cause of slow websites. Ten years ago the bottleneck was the network, but the growth of JavaScript has outpaced the network and CPU improvements of today's devices. In the chart below, based on an analysis from the HTTP Archive, we see that the number of requests has increased for both first- and third-party JavaScript since 2011.

The following chart shows the growth in the total size of JavaScript from 2011.

Certainly the amount of JavaScript has increased. Total number of requests went from 9 to 18, and total size increased from 85 KB to 364 KB. (These are medians from the world's top ~1.3 million sites tested on desktop.) It's interesting to see that the size of scripts is growing more rapidly than the number of scripts - we're packing twice as much code into each script compared to 7 years ago.

The state of commercial web conferences is utterly broken. What lurks behind the scenes of such events is a widely spread, toxic culture despite the hefty ticket price. And more often than not, speakers bear the burden of all of their conference-related expenses, flights, and accommodation from their own pockets. This isn’t right, and it shouldn’t be acceptable in our industry.

All browsers ship with a set of default styles that are applied to every web page in what is called the “user agent stylesheet”. Most of these stylesheets are open source so you can have a look through them:

A lot of styles are consistent across all user agent stylesheets. For example, I used to think that the <head> element was not visible due to some special feature, but it is actually hidden like any other element on a page, with display: none! You can see this same style in WebKit,

When we select an element by its .class or #id, we’re using predefined and unchanging attributes that are baked into the DOM. With pseudo-classes, we can select elements using information that isn’t already in the DOM and can change based on how a user interacts with the page.

:hover, :focus, and :active are pseudo-classes that are determined by a user’s actions. They each correspond to a very specific point in how a user will interact with an element on a page such as a link or a button or an input field.

With the range of input devices we have today, these pseudo-classes also behave slightly differently depending on whether the user is interacting with the element with a mouse

When we hear the word “visibility”, we tend to only think of vision. But the visibility: hidden rule actually affects different kinds of visibility - visual, spatial, assistive technology, and interaction - in different ways....

A couple of weeks ago, I wrote an article on what exactly the DOM is. To recap, the Document Object Model is a representation of an HTML document. It is used by browsers to determine what to render on the page, and by JavaScript programs to modify the content, structure,...

This is an analysis of the event-stream incident of which many of you became aware earlier this week. npm acts immediately to address operational concerns and issues that affect the safety of our community, but we typically perform more thorough analysis before discussing incidents—we know you’ve been waiting.

On the morning of November 26th, npm’s security team was notified of a malicious package that had made its way into event-stream, a popular npm package. After triaging the malware, npm Security responded by removing flatmap-stream and event-stream@3.3.6 from the Registry and taking ownership of the

Firefox 64 is available today! Our new browser has a wealth of exciting developer additions both in terms of interface features and web platform features, and we can’t wait to tell you about them. You can find out all the news in the sections below — please check them out, have a play around, and let us know your feedback in the comment section below.

New Firefox interface features

We’re excited to introduce multiple tab selection, which makes it easier to manage windows with many open tabs. Simply hold Control (Windows, Linux) or Command (macOS) and click on tabs to select them.

Once selected, click and drag to move the tabs as a group — either within a given window, or out into a new window.

When we first started working on this project, our ambitions were pretty modest — take all of the tips and tricks we’ve shared on Twitter, bundle them up into one resource, and put it out into the world.

But the more time we put into planning it, the more we realized that we had an opportunity to create something better than that. Something that wasn’t just a book, but more like a complete survival kit for designing for the web.