No matter your role, if you’ve ever been involved in a digital design project, chances are you’re familiar with wireframes. After all, they’re among the most popular and widely used tools when designing websites, apps, dashboards, and other digital user interfaces.

But they do have their problems, and wireframes are so integrated into the accepted way of working that many don’t consider those drawbacks. That’s a shame, because the tool’s downsides can seriously undermine user-centricity. Ever lose yourself in aesthetic details when you should have been talking about content and functionality? We have!

Reference materials of any sort are also often printed. For many people, making notes on paper copies is how they learn best. Again, it means the information is accessible in an offline format. It’s easy for us to wonder why people want to print web pages; however, our job is to make content accessible — in the best format for our visitors. If that best format is printed paper, then who are we to argue?

Why Would This Page Be Printed?

A good question to ask when deciding on the content to include or hide in the print stylesheet is, “Why is the user printing this page?” Maybe there’s a recipe they’d like to follow while cooking in the kitchen, or to take along when shopping for ingredients. Or they’d like to print a confirmation page after purchasing a ticket, as proof of booking. Or perhaps they’d like a receipt or invoice printed (or printed to PDF) in order to store it in the accounts, either on paper or electronically.

Most web pages do not fit on a single screen, so all users take the ability to scroll for granted. For front-end developers and UX designers, though, implementing scrolling experiences that work well across browsers, fit nicely into a design, and still perform well can be a challenge. With web standards evolving faster than ever, coding practices often lag behind. Read on to revisit some common corner cases for scrolling and check whether the solution you are using has been replaced with something more elegant.
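One example of that evolution, offered as a hedged sketch rather than a prescription: scroll handlers used to be a notorious source of jank, and the newer passive-listener option lets the browser keep scrolling smooth because the handler promises never to call preventDefault(). The selector and class name below are illustrative:

```js
// A minimal sketch: a scroll listener marked as passive, so the browser
// never has to wait for JS before it can continue scrolling.
window.addEventListener(
  'scroll',
  () => {
    // Illustrative use case: reveal a "back to top" button past the first viewport.
    const pastFold = window.scrollY > window.innerHeight;
    document.body.classList.toggle('show-back-to-top', pastFold);
  },
  { passive: true } // promise: this handler never calls preventDefault()
);
```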

Over the past thirty years, the scrollbar’s appearance has kept changing in response to design trends. Colors, shadows, arrow shapes, border radii—interface designers experimented with everything. Here’s how the journey looked on Windows.

Kick-Off

Useful tip: I always kick off a project by talking to the stakeholders. For smaller projects with one or two stakeholders, you can blend the kick-off and the interview into one. Just make sure it’s no longer than an hour.

Stakeholder Interviews

Our two stakeholders are both domain experts. They have a brick-and-mortar store in the center of Bangalore that attracts a lot of people. Once inside, people are delighted by the way the designs look and feel. Our clients wanted a website that conveys the same feeling online and that would make its visitors want to go to the store.

Examples of content-oriented websites include Wikipedia, Smashing Magazine, your local municipality’s website, newspapers, and web shops. Web apps are often found in the utility area; think of web-based email clients and online maps. While web apps also present content, their focus is often more on interacting with content than presenting it. There’s a huge grey area between the two, but this contrast will help us decide when Conditioner might be effective and when we should steer clear.

As stated earlier, Conditioner is all about websites, and it’s specifically built to deal with that third act.

In this article, I’ll introduce the early implementation of a few tools which, based on machine-learning techniques, allow us to perform data-driven chunk clustering and pre-fetching for single-page applications. The purpose is to provide a zero-configuration mechanism that uses Google Analytics data on user behavior to produce the most optimal build. We’re also going to introduce a webpack plugin that works with Angular CLI and Create React App.

Such a tool can improve user-perceived page load performance by making the build process of our applications data-driven!
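To make the idea concrete, here is a hedged sketch of the runtime half of such a mechanism. The route/probability map is hypothetical stand-in data; the real tooling would derive it from Google Analytics:

```js
// A sketch of data-driven pre-fetching. The predictions object is made up
// for illustration; in the real tool it would come from analytics data.
const predictions = {
  '/pricing': 0.62,
  '/docs': 0.27,
  '/about': 0.11,
};

const THRESHOLD = 0.5; // only prefetch routes the user is likely to visit next

for (const [route, probability] of Object.entries(predictions)) {
  if (probability >= THRESHOLD) {
    const link = document.createElement('link');
    link.rel = 'prefetch'; // low-priority fetch during browser idle time
    link.href = route;
    document.head.appendChild(link);
  }
}
```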

A couple of months ago, I was traveling outside of the U.S. and wanted to show a friend a link on my personal (static) site. I tried navigating to my website, but it took much longer than I anticipated. There’s absolutely nothing dynamic about it — it has animations and some responsive design, but the content always stays the same. I was pretty appalled at the results: ~4s to DOMContentLoaded and 6.8s for a full page load. There were 20 requests for a static site, with 1 MB of total data transferred. I was accustomed to my 1 Gb/s, low-latency internet in Los Angeles connecting to my server in San Francisco, which made this monstrosity seem lightning fast. In Italy, at 8 Mb/s, it was a different picture entirely.

You've optimized all of your code, but your site still loads too slowly. Who's the culprit?

Often, performance problems slowing pages down are due to third-party scripts: ads, analytics, trackers, social-media buttons, and so on.

Third-party scripts provide a wide range of useful functionality, making the web more dynamic, interactive, and interconnected. These scripts may be crucial to your website's functionality or revenue stream. But third-party scripts also come with many risks that should be taken into consideration to minimize their impact while still providing value.

Some folks called for browsers to ‘fix’ it. Some folks dug a bit deeper, saw that it only affected sites built with React-like frameworks, and pointed the finger at React. But the real problem is thinking that third-party content is ‘safe’.

Third-Party Images

If I include the above, I'm trusting example.com. They may betray that trust by deleting the resource, giving me a 404, making my site look broken. Or, they might replace the kitten data with something a lot less pleasant.

However, the impact of an image is limited to the content box of the element itself. I can try and explain to users: “Here’s some content from example.com.”

NOTE: the stylesheets are extracted from the first page only. The tool does not re-extract the styles from your other pages.

Currently the tool only goes 100 pages deep; we’re still testing it and will increase the limit in several days. Half of the work is done on our server’s side (we can’t do everything client-side because of CORS limitations), so please don’t abuse it. Results are cached for 24 hours, so please be patient. If you see a flickering iframe at the bottom of the page, it’s OK: your pages are being (somewhat) loaded into it. We’ll remove it after testing is done. Frankly, we are just testing the concept, and if there’s enough demand we’ll keep developing the tool further and even post the sources on GitHub. Send your bug reports to @jitbit on Twitter.

Unlike typical software engineer job interviews, front-end job interviews place less emphasis on algorithms and ask more questions about intricate knowledge of and expertise in the domain: HTML, CSS, and JavaScript, just to name a few areas.

While there are some existing resources to help front-end developers prepare for interviews, they aren’t as abundant as materials for software engineer interviews. Among the existing resources, probably the most helpful question bank is Front-end Developer Interview Questions. Unfortunately, I couldn’t find many complete and satisfactory answers to these questions online, so here is my attempt at answering them. As an open-source repository, the project can live on with the support of the community as the web evolves.

Let’s state the obvious: this is an imperfect and evolving measure, and the goal is to foster discussion and rivalry in making our pages better, faster, and lighter. Bear in mind this was built as an internal tool at Hearst Newspapers to track changes as we roll out our new article template on mobile for SFGate and eventually all sites (SF Chronicle, Houston Chronicle, Times Union, etc.).

Developers, designers, and product teams need to talk more about how to achieve this. A 1,700-word article might weigh 10 KB, but by the time you load HTML, JS, CSS, images, third parties, and ads, it can range between 2 MB and 8 MB depending on the website. Bear in mind, the first Harry Potter ebook is 1.1 MB, and that includes the cover art.

Trying To Find Studies About Users’ Perception

I wanted to find some scientific research that could support (or refute) the claim that these image-loading techniques are beneficial. This proved to be a challenge. I couldn’t find any study proving that showing something like a blurry thumbnail before the image loads improves a user’s perception. Then I thought of progressive JPEGs.

Back To Basics: Progressive JPEGs

In a certain way, we have had a similar “progressive image loading technique” baked into images for a long time. Progressive JPEG is a good example.

Progressive JPEGs have long been proposed as a good practice for images, especially for sites accessed over slow networks.
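Generating them is straightforward. As a hedged sketch, here is how it might look with the sharp library for Node (the file names are placeholders):

```js
// A minimal sketch: re-encoding an image as a progressive JPEG with sharp.
const sharp = require('sharp');

sharp('photo.jpg')
  .jpeg({ quality: 80, progressive: true }) // scans render coarse-to-fine
  .toFile('photo-progressive.jpg')
  .then(() => console.log('Progressive JPEG written'))
  .catch((err) => console.error(err));
```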

Single-page applications are everywhere. Even blogs, simple HTML pages (in the beginning, something like https://danluu.com/), have turned into big fat monsters. For example, jlongster’s blog has 1,206 stars at the moment of writing, and not because of the content (people usually subscribe to blogs rather than star the source); the only reason is that it is implemented using React & Redux. What is the problem, though? He wants it, he makes it, no questions here. The problem is that it is considered normal to make blogs, which exist to be read, this bloated. Of course, some people complain, but the general response is utterly positive. And it’s not just blogs: nowadays the question is often not “to SPA or not to SPA” but rather “which client-side framework should we use for our SPA”.

Future versions of Google Chrome will feature built-in support for lazy loading, a mechanism to defer the loading of images and iframes if they are not visible on the user's screen at load time.

This system will first ship in Chrome for Android, and Google doesn’t rule out adding it to desktop versions if tests go as planned.

The feature is called Blink LazyLoad, and as the name hints, it will implement the principle of "lazy loading" inside Chrome itself.

How Lazy Loading Has Helped Improve Page Loading Speed

By default, browsers load the entire web page, including every image, when the user accesses a URL. If the page is large, it takes more time to load, and as a side effect of this longer load time, the site may be downranked in Google search results.
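Until the built-in mechanism ships, pages typically implement lazy loading themselves. A common sketch uses IntersectionObserver, assuming images are marked up with a data-src attribute so the browser doesn’t fetch them eagerly:

```js
// A minimal lazy-loading sketch: images carry data-src instead of src,
// and only get their real URL once they approach the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // start the real download
      obs.unobserve(img);        // each image only needs this once
    }
  }
});

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
```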

We write a lot of JavaScript at Basecamp, but we don’t use it to create “JavaScript applications” in the contemporary sense. All our applications have server-side rendered HTML at their core, then add sprinkles of JavaScript to make them sparkle.

This is the way of the majestic monolith. Basecamp runs across half a dozen platforms, including native mobile apps, with a single set of controllers, views, and models created using Ruby on Rails. Having a single, shared interface that can be updated in a single place is key to being able to perform with a small team, despite the many platforms.

It allows us to party with productivity like days of yore. A throwback to when a single programmer could make rapacious progress without getting stuck in layers of indirection or distributed systems. A time before everyone thought the holy grail was to confine their server-side application to producing JSON for a JavaScript-based client application.

This is post #7 of the series dedicated to exploring JavaScript and its building components. In the process of identifying and describing the core elements, we also share some rules of thumb we use when building SessionStack, a lightweight JavaScript application that has to be robust and highly performant to help users see and reproduce their web app defects in real time.

If you missed the previous chapters, you can find them here:

This time we’ll be taking apart Web Workers: we’ll offer an overview, discuss the different types of workers and how their building components work together, and look at the advantages and limitations they bring in different scenarios. Finally, we’ll provide five use cases in which Web Workers are the right choice.
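Before diving in, here is a minimal sketch of the basic mechanics (the file names and payload are illustrative): the main thread and the worker communicate only by passing messages.

```js
// main.js: spawn a worker and exchange messages with it.
const worker = new Worker('worker.js');

worker.postMessage({ numbers: [1, 2, 3, 4] });
worker.onmessage = (event) => {
  console.log('Computed off the main thread:', event.data);
};

// worker.js: runs in a separate thread, with no access to the DOM.
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((a, b) => a + b, 0);
  self.postMessage(sum);
};
```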

Update: Check out headless-devtools, a library for automating DevTools actions from code by leveraging the DevTools Protocol.

Chrome DevTools is the go-to analysis tool for understanding what’s going on under the covers of your app and conducting performance audits. As you interact with the site like a real user would, you can use DevTools to drill down into every tiny detail about the page. This is great for manual analysis. But if your goal is to monitor web performance over time, you might find that tools in that space are not as powerful. Automated synthetic monitoring services don’t expose nearly as much information as DevTools does, and for the most part the only type of user interaction they emulate is waiting for the page to load.
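The DevTools Protocol mentioned above closes part of that gap. As a hedged sketch (assuming Chrome was started with --remote-debugging-port=9222), the chrome-remote-interface package can log every request a page makes:

```js
// A minimal sketch using chrome-remote-interface to tap into the
// DevTools Protocol; assumes Chrome is listening on port 9222.
const CDP = require('chrome-remote-interface');

(async () => {
  const client = await CDP(); // connects to localhost:9222 by default
  const { Network, Page } = client;

  // Subscribe before navigating so no requests are missed.
  Network.requestWillBeSent(({ request }) => {
    console.log('Request:', request.url);
  });

  await Network.enable();
  await Page.enable();
  await Page.navigate({ url: 'https://example.com' });
  await Page.loadEventFired();
  await client.close();
})();
```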

I believe this issue affects packages with versions before 2018 as well as versions after 2018.

Versions before 2018 cannot be installed, while versions after 2018 can be. For instance,

require-from-string@2.0.0 is unavailable, while require-from-string@2.0.2 is available. The difference between them is their publish year. Note that 2.0.2 was just published, which kicked off this series of problems.

Update: This theory appears to be wrong. See @BlackHole1 's comment below :)

Instead, this appears to be because floatdrop's packages have disappeared.

Getting Ready: Planning And Metrics

Micro-optimizations are great for keeping performance on track, but it's critical to have clearly defined targets in mind — measurable goals that would influence any decisions made throughout the process. There are a couple of different models, and the ones discussed below are quite opinionated — just make sure to set your own priorities early on.

Establish a performance culture. In many organizations, front-end developers know exactly what the common underlying problems are and which loading patterns should be used to fix them. However, as long as there is no alignment between the dev/design and marketing teams, performance gains won’t be sustained long-term. Study the common complaints coming into customer service and see how improving performance can help relieve some of these common problems.

Stimulus is a JavaScript framework with modest ambitions. It doesn't seek to take over your entire front-end—in fact, it's not concerned with rendering HTML at all. Instead, it's designed to augment your HTML with just enough behavior to make it shine. Stimulus pairs beautifully with Turbolinks to provide a complete solution for fast, compelling applications with a minimal amount of effort.

How does it work? Sprinkle your HTML with magic controller, target, and action attributes:
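As a sketch of what that looks like, paraphrasing the canonical “hello” example from the Stimulus documentation (the controller name and targets are illustrative):

```html
<div data-controller="hello">
  <input data-target="hello.name" type="text">
  <button data-action="click->hello#greet">Greet</button>
  <span data-target="hello.output"></span>
</div>
```

```js
// hello_controller.js: the matching Stimulus controller.
import { Controller } from "stimulus"

export default class extends Controller {
  static targets = [ "name", "output" ]

  greet() {
    this.outputTarget.textContent = `Hello, ${this.nameTarget.value}!`
  }
}
```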

If you had to list the characteristics of the perfect Node.js web application framework, you'd probably come up with something like this:

Next.js is close to this ideal. If you haven’t encountered it yet, I strongly recommend going through the tutorials at learnnextjs.com. Next introduced a brilliant idea: all the pages of your app are files in the your-project/pages directory, and each of those files is just a React component.

Everything else flows from that breakthrough design decision. Finding the code responsible for a given page is easy, because you can just look at the filesystem rather than playing ‘guess the component name’. Project structure bikeshedding is a thing of the past. And the combination of SSR (server-side rendering) and code-splitting — something the React Router team gave up on — works out of the box.
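To illustrate the convention (the file name and markup here are illustrative), a single file is all it takes to define a route:

```js
// pages/about.js: in Next.js, this file automatically becomes the /about route.
export default function About() {
  return <h1>About us</h1>;
}
```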

And The Winner Is…

Leonardo Losoviz

The optimization techniques presented in Leonardo’s submission are all DIY, designed and implemented from scratch. He added all the optimizations to PoP, an open-source framework to build websites, and used Agenda Urbana to test the performance improvements on an actual project.

We felt this submission really entered into the spirit of the challenge by not only improving the performance of a single website but attempting to make enhancements to a framework used on a number of websites. The fact that PoP is backed by WordPress meant that Leonardo was in a similar situation to many people unable to do some of the things available to a JavaScript framework. As he noted:

Anyone who’s browsed the web on their phone has, at one point or another, experienced this situation:

You open a web page and click on something, but nothing happens. You click on it again—still nothing happens. You click on something else—nope, nothing.

This is bad enough on its own, but it often doesn’t end there. Here’s what usually happens next:

You start clicking everywhere just to get *some* feedback that your phone isn't broken—then suddenly a bunch of stuff all happens at the same time, and now you're on a completely different page and you have no idea how you got there.

If this sounds familiar, then you’ve experienced the opposite of interactivity on the web. But what exactly does the term “interactivity” mean?
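One way to see the phenomenon in your own pages, offered as a hedged sketch: the Long Tasks API reports main-thread work that blocks input for more than 50 ms, which is exactly what produces those dead taps.

```js
// A minimal sketch: log every long task (>50 ms of blocked main thread).
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)} ms of blocked input`);
  }
});

observer.observe({ entryTypes: ['longtask'] });
```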

In this tutorial, you’ll learn how to automate and scrape the web with JavaScript. To do this, we’ll use Puppeteer. Puppeteer is a Node library that provides an API for controlling headless Chrome. Headless Chrome is a way to run the Chrome browser without its visible user interface.

If none of that makes any sense, all you really need to know is that we’ll be writing JavaScript code that will automate Google Chrome.

Before starting, you’ll need to have Node 8+ installed on your computer. You can install it here. Make sure to choose the “Current” version, as it is 8+.
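As a taste of what’s ahead, here is a minimal, hedged sketch of Puppeteer in action (the URL and file name are placeholders):

```js
// Launch headless Chrome, visit a page, and capture a screenshot.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch(); // headless by default
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });
  await browser.close();
})();
```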

I’ve been a huge fan of the Web Content Accessibility Guidelines 2.0 since the World Wide Web Consortium (W3C) published them, nine years ago. I’ve found them practical and future-proof, and I’ve found that they can save a huge amount of time for designers and developers. You can apply them to anything that you can open in a browser. My favourite part is when I use the guidelines to make a website accessible, and then attend user-testing and see someone with a disability easily using that website.

If you haven’t read the Web Content Accessibility Guidelines 2.0, you might find them a bit off-putting at first. The editors needed to create a single standard that countries around the world could refer to in legislation, and so some of the language in the guidelines reads like legalese. The editors also needed to future-proof the guidelines, and so some terminology—such as “time-based media” and “programmatically determined”—can sound ambiguous. The guidelines can seem lengthy, too: printing the guidelines and their supporting documents produces a hefty stack of paper.

Lately, there’s been a lot of buzz around PWAs, with many claiming them to be the future of web development, especially on mobile devices. At its core, a Progressive Web App (PWA) is simply a web application that uses modern web techniques to deliver a native app-like experience to users. These are web applications, progressively enhanced to implement features like caching, background sync, and push notifications.

Even though PWAs have been around for more than two years now, adoption is still quite underwhelming. A few big players have taken up this philosophy, but most haven’t truly embraced it. Chrome and Firefox are perhaps the best browsers for testing your PWAs, as Apple is yet to get into this stuff.
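The entry point to all of those progressive features is the service worker. As a minimal sketch (the /sw.js path is a placeholder), registration looks like this:

```js
// Register a service worker if the browser supports it; the sw.js file
// (not shown) would implement caching, background sync, and so on.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .then((reg) => console.log('Service worker registered for', reg.scope))
      .catch((err) => console.error('Service worker registration failed:', err));
  });
}
```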

Creating a New Project

Once you’ve signed up for Dropsource, the first step is to create a project; you can choose either iOS or Android. After that, you’ll see the editor’s main screen, where you can create the initial structure in a few clicks. Each screen in your app is represented as a page, and your app needs to have at least one page in it. So, the first thing we need to do is select the “Pages” option on the left of the editor.

The first page you create will automatically be set as the home (or landing) page for your app. This is the page your users will see first. You can also respond to page lifecycle events (such as “Page Loading” and “Page Appearing”) in the “Events” tab. For example, by handling the “Page Loading” event, you can show a loading animation during the data-loading process.