
A constant challenge faced by front-end developers is the performance of our applications. How can we deliver a robust and full-featured application to our users without forcing them to wait an eternity for the page to load? The techniques used to speed up a website are so numerous that it can often be confusing to decide where to focus our energy when optimising for performance and speed.

Thankfully, the solution isn't as complicated as it sometimes might seem. In this post, I'll break down one of the most effective techniques used by large web apps to speed up their user experience. I'll go over a package to facilitate this and ensure that we can deliver our app to users faster without them noticing that anything has changed.

What Does It Mean for a Website to Be Fast?

The question of web performance is as deep as it is broad. For the sake of this post, I'm going to try and define performance in the simplest terms: send as little as you can as fast as you can. Of course, this might be an oversimplification of the problem, but practically speaking, we can achieve dramatic speed improvements by simply sending less data for the user to download and sending that data fast.

For the purpose of this post, I'm going to focus on the first part of this definition—sending the least possible amount of information to the user's browser.

Invariably, the biggest offenders when it comes to slowing down our applications are images and JavaScript. In this post, I'm going to show you how to deal with the problem of large application bundles and speed up our website in the process.

React Loadable

React Loadable is a package that allows us to lazy load our JavaScript only when it's required by the application. Of course, not all websites use React, but for the sake of brevity I'm going to focus on implementing React Loadable in a server-side rendered app built with Webpack. The final result will be multiple JavaScript files delivered to the user's browser automatically when that code is needed.

Using our definition from before, this simply means we send less to the user up front so that data can be downloaded faster and our user will experience a more performant site.

1. Add React Loadable to Your Component

I'll take an example React component, MyComponent. I'll assume this component is made up of two files, MyComponent/MyComponent.jsx and MyComponent/index.js.

In these two files, I define the React component exactly as I normally would in MyComponent.jsx. In index.js, I import the React component and re-export it—this time wrapped in the Loadable function. Using the dynamic ECMAScript import() syntax, I can indicate to Webpack that I expect this file to be loaded dynamically. This pattern allows me to easily lazy load any component I've already written. It also lets me separate the lazy-loading logic from the rendering logic. That might sound complicated, but here's what this would look like in practice:
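A minimal sketch, assuming a simple inline placeholder for the loading state:

// MyComponent/index.js
import React from 'react';
import Loadable from 'react-loadable';

// Shown while the real component's bundle is being fetched.
const Loading = () => <div>Loading…</div>;

export default Loadable({
  // The dynamic import() tells Webpack to split this module
  // into its own bundle.
  loader: () => import('./MyComponent'),
  loading: Loading,
});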

I've now introduced React Loadable into MyComponent. I can add more logic to this component later if I choose—this might include introducing a loading state or an error handler to the component. Thanks to Webpack, when we run our build, I'll now be provided with two separate JavaScript bundles: app.min.js is our regular application bundle, and myComponent.min.js contains the code we've just written. I'll discuss how to deliver these bundles to the browser a little later.

2. Simplify the Setup With Babel

Ordinarily, I'd have to include two extra options when passing an object to the Loadable function, modules and webpack. These help Webpack identify which modules we should be including. Thankfully, we can obviate the need to include these two options with every component by using the react-loadable/babel plugin. This automatically includes these options for us:
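In .babelrc, that might look like the following (the presets are assumptions about the existing build):

{
  "presets": ["env", "react"],
  "plugins": ["react-loadable/babel"]
}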

3. Introduce React Loadable to the Server

The first step is going to be to instruct React Loadable to preload all of our modules on the server so they can be rendered synchronously. This also allows me to decide which ones should be loaded immediately on the client. I do this by modifying my server/index.js file like so:
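A sketch, assuming an Express-style server (a common setup for server-side rendered React apps):

import express from 'express';
import Loadable from 'react-loadable';

const app = express();
const PORT = process.env.PORT || 3000;

// ...routes and middleware...

// Preload every loadable component before accepting requests,
// so the server can render them synchronously.
Loadable.preloadAll().then(() => {
  app.listen(PORT, () => {
    console.log(`Listening on port ${PORT}`);
  });
});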

The next step is going to be to push all the components I want to render to an array so we can later determine which components require immediate loading. This is so the HTML can be returned with the correct JavaScript bundles included via script tags (more on this later). For now, I'm going to modify my server file like so:
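A sketch of the render step (App stands in for the root application component):

import Loadable from 'react-loadable';
import { renderToString } from 'react-dom/server';
import App from './App';

const modules = [];

// Loadable.Capture reports the name of every loadable module
// rendered beneath it into the modules array.
const html = renderToString(
  <Loadable.Capture report={moduleName => modules.push(moduleName)}>
    <App />
  </Loadable.Capture>
);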

Every time a component that requires React Loadable is used, it will be added to the modules array. React Loadable does this automatically, so this is all that's required on our part.

Now we have a list of modules that we know will need to be rendered immediately. The problem we now face is mapping these modules to the bundles that Webpack has automatically produced for us.

4. Map Webpack Bundles to Modules

So now I've instructed Webpack to create myComponent.min.js, and I know that MyComponent is being used immediately, so I need to load this bundle in the initial HTML payload we deliver to the user. Thankfully, React Loadable provides a way for us to achieve this, as well. In my client Webpack configuration file, I need to include a new plugin:
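In the client webpack.config.js, something along these lines (the output path is illustrative):

const { ReactLoadablePlugin } = require('react-loadable/webpack');

module.exports = {
  // ...the rest of the client configuration...
  plugins: [
    // Writes a manifest mapping each module to the bundles
    // Webpack produced for it.
    new ReactLoadablePlugin({
      filename: './dist/loadable-manifest.json',
    }),
  ],
};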

The loadable-manifest.json file will provide me with a mapping between modules and bundles so that I can use the modules array I set up earlier to load the bundles I know I'll need. In my case, this file might look something like this:
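A simplified illustration of the shape (the paths, IDs, and file names will differ in a real build):

{
  "./components/MyComponent": [
    {
      "id": 1,
      "name": "./components/MyComponent/index.js",
      "file": "myComponent.min.js",
      "publicPath": "/dist/myComponent.min.js"
    }
  ]
}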

The final step in loading our dynamic bundles on the server is to include these in the HTML we deliver to the user. For this step, I'm going to combine the output of steps 3 and 4. I can start by modifying the server file I created above:
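A sketch, assuming the Express-style handler and the modules array captured earlier:

import { getBundles } from 'react-loadable/webpack';
import stats from './dist/loadable-manifest.json';

// Map the captured module names to the bundles that contain them.
const bundles = getBundles(stats, modules);

res.send(`
  <html>
    <body>
      <div id="app">${html}</div>
      <script src="/dist/app.min.js"></script>
      ${bundles.map(bundle => `<script src="${bundle.publicPath}"></script>`).join('\n')}
      <script>window.main();</script>
    </body>
  </html>
`);

The window.main() call hands control to the client once the application bundle and all dynamic bundles are available; I'll define it in the next step.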

The final step to using the bundles that we've loaded on the server is to consume them on the client. Doing this is simple—I can just instruct React Loadable to preload any modules it's found to be immediately available:
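A sketch of the client entry point, matching the window.main() hook used above:

// client/index.js
import React from 'react';
import ReactDOM from 'react-dom';
import Loadable from 'react-loadable';
import App from './App';

// Wait for every bundle the server flagged to finish loading, then
// hydrate so the client render matches the server-rendered markup.
window.main = () => {
  Loadable.preloadReady().then(() => {
    ReactDOM.hydrate(<App />, document.getElementById('app'));
  });
};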

Following this process, I can split my application bundle into as many smaller bundles as I need. In this way, my app sends less to the user and only when they need it. I've reduced the amount of code that needs to be sent so that it can be sent faster. This can have significant performance gains for larger applications. It can also set smaller applications up for rapid growth should the need arise.


A lot of people toss around the phrase, “It’s not about the destination. It’s about the journey.” And those people are telling the truth — until they hit a roadblock.

Missed turns or poorly-given directions can cost someone hours on a trip. When you’re on a mission, those hours spent trying to find what you need could ruin the entire experience.

It doesn’t always have to end in disaster. A better scenario could occur: you take a wrong turn, but after stopping at a nearby gas station, you leave with more than accurate directions to your final destination. You’ve also managed to score a free ice cream cone from the sweet old lady working behind the gas station’s register because she saw you were lost… and wanted to cheer you up.

Often, website visitors can wind up getting turned around. It’s not always their fault. They could’ve typed in the wrong URL (common) or clicked on a broken link (our mistake). Whatever the reason, you now have confused people who only wanted to engage with your website in some way and now can’t. You hold the reins on their navigation. You can guide them back to where you wanted them to go all along, or you can leave them frustrated and in the dark. Do they need to make a U-turn? Did they get off at the wrong exit? Only you can tell them, and the best way to do so is through a 404 error page.

Your website’s 404 error page can deliver either of these scenarios with regard to getting your visitors back on their buyer’s journey. A lackluster 404 page irritates your visitors and chases them away into the hands of a competing website that better guides them to what they’re looking for. That lackluster 404 page has bland messaging with minimal visual elements. It might include a variation of the same serif text: “This page does not exist.” That’s like your web users asking you for directions and you telling them nothing more than “well, what you’re looking for isn’t here. Good luck.”

Even brands with seemingly clever branding can neglect a 404 page! The owner of this sad excuse for an error page will remain anonymous (but it rhymes with Bards Tragainst Bubanity).

Unfortunately, even some of the world’s best brands use these 404 pages. No navigation. No interesting text. Nothing that reflects their brand messaging. Visitors are left even more disappointed in their encounter than before.

However, there are some 404 pages that go above and beyond. Rather than the stark white of a standard 404 error page, these pages take the opportunity to speak to users in a more personal tone. Excellent 404 pages are exactly like getting an unexpected treat from a friendly face. A well-crafted 404 page can turn visitors from lost and confused to a much happier mood, and redirect them onto a more helpful page on your website.


Take Amazon, for instance. On Prime Day 2018, Amazon learned firsthand the importance of a decent 404 page. Sure, buyers were still frustrated upon reaching a 404 page — even if it included a puppy. However, could you imagine how much more irritated buyers would’ve been had the 404 page looked clinical, cold, and unhelpful?

Regardless of what tone you want to take or what visuals you want to use or what copy will best engage your readers, a great 404 page does one thing above all else: it makes website visitors okay with not finding what they need — if only for a moment — and directs them to where they need to go.

While 404 pages vary greatly, the best ones all seem to do two things well:

Support the company’s overall brand and messaging;

Successfully redirect website visitors elsewhere on the page through clear navigation and support.

Thematically, there are a few ways to accomplish the ‘perfect’ 404 page:

1. Nail Down The Overall Tone

If content isn’t your brand’s strong suit, this could be a struggle. However, if you have a sense of your brand’s voice and messaging, you can quickly identify where you can offer something unexpected. Visitors are already going to be disappointed when they hit your 404 page; they’re not getting what they wanted. Your 404 page is an opportunity to show that your brand has humans behind its marketing rather than robotic, cold, automated messages seen elsewhere. In short, move beyond the “this page is unavailable” and its variants.

Regardless of the tone, good 404 pages work like magicians. The best illusionists often acknowledge they’re magicians; they don’t pretend to be something they’re not. 404 pages own up to being an error page; the copy and visuals often reflect that. And then, like any good magician, 404 pages pull the attention away from the problem and put that attention elsewhere. Typically, that’s done with copy that matches the visual elements.

Here are some themes and moods that successful 404 pages have leveraged in the past.

Crack A Joke

A joke (even a corny one) can do wonders for alleviating awkwardness or inconvenience. However, unless your brand is built on crude humor (i.e. Cards Against Humanity which ironically doesn’t have a good 404 page), it’s best to make the jokes either tongue in cheek or punny rather than too crass. This example from Modcloth makes a quick pun but keeps the mood light.

Happy and snappy, this 404 page aligns with the rest of the brand’s fun copy.
Get Clever

It might not be outright funny, but it’s something that gets a visitor’s attention shortly after arriving on your page. It can be a little sassy, snarky, even unexpected. This 404 page from Blizzard Entertainment does a great job at flipping the script both with its visual tone and its copy.

Sarcasm pays off well for the gaming giant’s 404 page.
Be Friendly

A prime example is LEGO Shop’s 404 page, with a friendly customer service rep (albeit a LEGO rep). The friendliness can come from an inviting design or warm copy. Ultimately, it’s anything that culminates in a sense of “oh hey, we’re really sorry about that. Let us try to fix it.”

“If your company’s brand excels in customer service and customer care, maybe taking a tone of genuine friendliness would be most appropriate to carry over brand messaging. If that’s the case, treat your 404 page like an extension of your guest services window.”

Get Interactive

People love to click on things, especially if they’re engaging with the 404 page on desktop. And if they’re engaging with your website, all the better! One of the best examples of interactivity on a 404 page comes from Kualo. The web hosting provider gamified its 404 page into a recreation of Space Invaders, complete with the ability to earn extra lives as you level up. Even more impressive, Kualo actually offers discounts on its hosting when users reach certain point thresholds.

The gamification of Kualo’s 404 keeps users coming back for more chances to win.
Be Thought-provoking

Yes, your 404 pages can even be educational! 404 pages can offer up resources and links to other helpful spots on your website. It’s an unexpected distraction that could easily keep guests entertained by more information. National Public Radio (NPR) does this exceptionally well. The media outlet provides a collection of features with one major similarity: the stories are about things which have also disappeared.

Be Topical

Use this one with caution, as there’s a very good chance you’ll have to change your 404 message if you’re going to be topical. Pop culture references move fast; if you’re not careful, you’ve spent too much time developing a 404 page that will be irrelevant in two weeks. (And this is a cardinal sin for any organization with a target market of Millennials or younger.) The Spotify 404 page recently underwent a shift to keep up with trends. Prior to doing a quick play on Kanye West’s “808s & Heartbreak,” the 404 page featured lyrics from Justin Bieber’s “Sorry.”

2. Pick The Right Visuals

Once you have an idea of the proper tone for your 404 page, visuals are the important next step in the process. Visuals are often the first element of a 404 page people notice — and thus, the first representation of that page’s desired tone.

Static visuals help emphasize the page copy. Adding light animation can work together with the text to further a message or tone. For example, Imgur’s 404 page brings its illustrations to life by making the eyes of its characters follow a visitor’s cursor.

Interactivity among the visual elements gives people an opportunity to do what frustrated internet users love to do — click on everything in an attempt to make something useful happen.

3. Nail Down The Navigation Options

You know what tone you want your business to strike. You’ve got an idea of the visuals you’ll use to present that tone. Your website visitors will think it’s great and fun — but only for a moment. Your website still has to get them to what they’re looking for. Clear navigation is the next big step in directing your lost website visitors toward their goals. A 404 page that’s cute but lacks good navigation design is like a sweet old man who is kind but gives you the world’s worst directions.

“After making a good first impression with your 404 page, the immediate next step should be getting website visitors off it and to where they want to be. There should always be clear indications on where they should go next.”

For example, Shutterstock’s 404 page offers three distinct options. Visitors can go back to the previous page, which is helpful if they clicked on the wrong link. They can return to the homepage for more of a hard restart in their navigation; perhaps they came in from a search engine and found a broken link, but they’re not quite ready to give up on the website and want to look around. The final option is to report a problem. If someone has been scouring your website for minutes on end and has an idea of what they’re looking for, they can report that they might have found an issue. At the very least, it gets your web visitors involved with your company, and your development team gets feedback about the accessibility of your website.

In addition to clear navigation, these other navigation-based elements could help your visitors out even more:

Chatbots / live chat: Bots are often received in one of two ways: users either find them incredibly annoying or relatively helpful. Bots that pop up within a second of landing on a page often lead visitors to click out of a site entirely, as the bot seems intrusive. However, your website can use bots by simply adding a “Click to chat” option. This invites lost visitors who want your help to engage with the bot, rather than the bot making a potentially annoying first move.

Search Bars: This element can do wonders for websites with a high volume of pages and information. A search bar could also offer up answers to common questions or redirect to an FAQ.

And one final navigation note — make sure those navigation tactics are just as efficient on mobile as they are on desktop. Treat your 404 page as you would any other. In order for it to succeed, it should be easily navigable to a variety of users, especially in a mobile-first world.

While the look of your 404 page is critical, you ideally never want anyone to find it on your website. Knowing the most common 404 errors on your website could give you insights in how to reduce those issues.

How To Track 404 Events Using Google Analytics
What You Need To Start Tracking

The code provided will report 404 events within Google Analytics, so you must have an up-and-running account there to take advantage of this tutorial. You also need access to your site’s 404 template and (this is important) the 404 page must preserve the URL structure of the typed/clicked page. This means that your 404 events can’t just redirect to your 404 page. You must serve the 404 template dynamically with the exact URL that is throwing the error. Most web server services (such as Apache) allow you to do this with a variety of rewrite rules.
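In Apache, for example, a local ErrorDocument directive is enough, since serving a local document preserves the requested URL rather than redirecting (a sketch; the template path is illustrative):

# .htaccess or virtual host configuration
ErrorDocument 404 /404.php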

With Google Analytics, tracking explicit 404 errors is straightforward. Just ensure that your main Google Analytics tracking script is in place, and then add the following code to your 404 error page or template:
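A sketch using the analytics.js event syntax; swap in your own tracking ID, and adapt it if your site uses gtag.js instead:

<script>
  ga('create', 'UA-XXXXXXXX-X', 'auto');
  // Category, action (the URL throwing the error), label (the previous URL).
  ga('send', 'event', '404 Response',
    document.location.pathname + document.location.search,
    document.referrer);
</script>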

You will need to swap out the ID for your specific Google Analytics account. After that, the script works by sending an “event” to Google Analytics. The category is “404 Response,” the action uses JavaScript to pass the URL that throws the error, and the label uses JavaScript to pass along the previous URL the user was on. With this data, you can then see which URLs cause 404 events and where people are accessing those URLs.

Tracking 404 Errors With Google Tag Manager

More and more web managers have decided to move to Google Tag Manager. This tool gives them the capability of embedding a whole host of scripts through a single container. It's especially useful if you have a lot of tracking scripts from several providers. To begin tracking 404s through Tag Manager, first create a “Variable” called “Page Title Variable.” The variable type is “JavaScript Variable,” and the global variable name is “document.title”:

Essentially, we’re creating a variable that checks for a page’s given title. This is how we will check if we are on a 404 page.

Then create a “Trigger” called “404 Page Title Trigger.” The type is “Page View,” and the trigger fires when the “Page Title Variable” contains “404 — Page Not Found” (or whatever your 404 page title displays as in the browser).

Finally, create a “Tag” of type “Universal Analytics” that sends the event data (as in the script above) and fires on the “404 Page Title Trigger.” The Variable, Trigger, and Tag then work together to pass the relevant data directly to Google Analytics.

404 Event Reporting

No matter your tracking method (be it through Tag Manager or direct event beacons), your reporting should be the same within Google Analytics. Under “Behavior,” you will see an item called “Events.” Here you will see all reported 404 events. The “Event Action” and “Event Label” dimensions will give you the pertinent data of what URLs are throwing 404 errors and their referring source.

With this in place, you can now regularly monitor your 404 errors and take the necessary steps to minimize their occurrence. In doing so, you optimize your referral sources and provide the best user experience, keeping conversions and engagement on the right path.

What To Do With Your Google Analytics Results

Now that you know how to monitor those 404 errors, what’s a developer to do? The key takeaway from tracking 404 occurrences is to look for patterns that result in those errors. The data should help you determine user intent, cluing you into what your users want. Ideally, you’ll see trends in what brings people to your 404 page, and you can apply that knowledge to adjust your website accordingly.

If your website visitors are stumbling while searching for a page, take the opportunity to create content that fills in that hole. That way people get results they hadn’t previously seen from your site.

Some 404 events can be avoided with a tweak to your website’s design. Make sure the navigation on your pages is clear and directs users to logical ending points. The fix could even be as simple as changing descriptions on a page to paint a clearer picture for users.

Putting It All Together

Tone, images, and navigation — these three elements can transform any 404 page from a ghost town into a pleasant, serendipitous stop for your website visitors. And while you don’t want them to stay there forever, you can certainly make sure the time they spend with you is enjoyable before sending them on their way. By regularly monitoring your 404 errors, you can also smooth out some of the ditches, poorly-marked signage, and potholes that frequently derail users. Being proactive and reactive with 404 errors ultimately improves both the journey and the destination for your website visitors.

How about some colorful inspiration for those gray and misty November days? We might have something for you. Just as they have every month for more than nine years, artists and designers from across the globe have once again tickled their creativity and designed unique wallpapers that are bound to breathe some fresh life into your desktop.

The wallpapers all come in versions with and without a calendar for November 2018 and can be downloaded for free. As a little bonus goodie, we added a selection of favorites from past November editions to this post. Because, well, some things are too good to be forgotten somewhere down in the archives, right? Enjoy!

All images can be clicked on and lead to the preview of the wallpaper.

You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?

This November, we are inspired by the nature around us and the universe above us, so we created an out-of-this-world calendar. Now, let us all stop for a second and contemplate preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space? — Designed by PopArt Studio from Serbia.

Foliage is the most mystical process of nature to ever occur. Leaves bursting and falling in shades of red, orange, yellow and making the landscape look magical. — Designed by ATop Digital from India.

Diwali is all about celebrating the victory of good over evil and light over darkness. The hearts of the vast majority are as dark as the night of the new moon. The house is lit with lamps, but the heart is full of the darkness of ignorance. Wake up from the slumber of ignorance. Realize the constant and eternal light of the Soul which neither rises nor sets through meditation and make this festive month even brighter and more vibrant. — Designed by Intranet Software from India.

I already had an old sketch that I wanted to try to convert to a digital illustration. The colors of the drawing were inspired by nature that at this time of the year has both the warm of fallen leaves as it has the cold greens of the leaves that make it through winter. — Designed by Ana Matos from Portugal.

Monsoon is all about the chill, the tranquillity that whizzes around, a light drizzle that splashes off our faces, the musty aroma of the earth and more than anything - a sense of liberation. The designer here has depicted the soul of monsoon, one that you would want to heartily soak in. — Designed by Nafis Mohamed from London.

Universal Children’s Day, 20 November. It feels like a dream world, it invites you to let your imagination flow, see the details, and find the child inside you. — Designed by Luis Costa from Portugal.

It is believed that childhood is the happiest time because this period of life cannot be matched by any other phase. During this month of November, let’s continue celebrating Children’s Day no matter how old you are, by sharing wishes with your children and childhood friends. — Designed by Taxi Software from India.

This month’s wallpaper is dedicated to the magical place of Barcelona that has filled my soul with renewed purpose and hope. I wanted to recreate the enchanting Parc Güell where I’m celebrating Thanksgiving with the people I’ve met that have given me so much in so little time. — Designed by Priscilla Li from the United States.

I have a maple tree in my yard that sometimes turns these colors in the fall - red on the outer leaves, then yellow, and the inner leaves still green. — Designed by Hannah Joy Patterson from South Carolina, USA.

Below you’ll find some November goodies from past years. Please note that these wallpapers don’t come with a calendar.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

“I often read messages at Smashing Magazine from people in the southern hemisphere saying ‘it’s spring, not autumn!’ so I’d like to design a wallpaper for the northern and the southern hemispheres. Here it is, northerners and southerners, hope you like it!” — Designed by Agnes Swart from the Netherlands.

“The design of trees has always fascinated me. Each one has its own unique entanglement of branches. With or without leaves they are always intriguing. Take some time to enjoy the trees around you — and the one on this wallpaper if you’d like!” — Designed by Rachel Litzinger from Chiang Rai, Thailand.

Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us, but rather designed from scratch by the artists themselves.

Thank you to all designers for their participation. Join in next month!

Are you building an app and looking for tools that can help you streamline your build? Take the effort out of your next front-end app build with one of these powerful admin templates.

Whether you prefer to work with React, Angular, or Vue.js, there are a range of templates available on ThemeForest that make it painless to create beautiful, interactive UIs. Built using cutting-edge technology, these templates offer flexibility and dependability for your app build. Create a stunning UI easily by selecting from modular components and clean layouts so that you can focus on the business logic of your app build.

React Admin Templates

React is a JavaScript library for building user interfaces that has taken the web development world by storm. React is known for its blazing-fast performance and has spawned an ecosystem of thousands of related modules on NPM, including many tooling options.

These admin templates and dashboards are a great starting point for your next React app.

1. Isomorphic

Isomorphic is a React and Redux-powered single-page admin dashboard. It's based on a progressive web application pattern and is highly optimized for your next React app. With no need to install or configure tools like Webpack or Babel, you can get started building your app immediately.

This template helps you write apps that behave consistently, run properly in different environments, and are easy to test. With Sass and CSS styling modules, multilingual support, a built-in Algolia search tool, Firestore CRUD, and easy-to-integrate code, you can use this template to build anything you want.

User justinr1234 says:

“Easily the most well-designed template using React out there, from both a code and design perspective. Integrating the code off the shelf was a breeze. If you have an existing app or are looking to roll a new one on the front end, this template successfully solves the problem for either use case. Excellent product!”

2. Clean UI React Admin Template

Are you building a single-page app and interested in moving to React and Redux? Don’t start from scratch—build a scalable, highly polished admin app with this React, Redux, Bootstrap, and Ant Design template that works well on mobile, tablet, and desktop.

Clean UI React is create-react-app based, so getting started is simple. Modular code allows you to add and remove components with ease. Developer friendly and highly customizable, this template includes 9 example apps, more than 50 pages, multiple layout options with easy-to-update Sass or CSS styling, and ample reusable React components.

3. Jumbo React

Kick-start your app project with Jumbo React, a complete admin template. This product includes two React templates, one based on Google Material Design and the other on a stunning flat style. Each template comes with multiple design concepts and hundreds of UI components and widgets, as well as an internationalization feature that allows you to develop a multilingual app.

Think of this template package as a starter kit to build your app. With it, you can develop a scalable React app rapidly and effectively and save yourself time and money in the process.

4. Fuse

Looking for a template to get your React project started? Fuse is a complete admin template that follows Google’s Material Design guidelines and will allow you to learn some of the advanced aspects of React while you build your app.

This template uses Material UI as the primary UI library and Redux for state management. It comes with built-in page templates, routing, and authorization features, along with five example apps, more than 20 pages, and lots of reusable React components.

User DevX101 says:

“Very well organized template ready for building a real app. Not just visual templates, but includes authorization and modular design. Great starter kit.”

Angular Admin Templates

Angular is more than just the next version of a popular front-end framework. Angular takes all the best parts of AngularJS and improves them. It's a powerful and feature-complete framework that you can use to build fast, professional web apps.

Check out these templates that you can use to get your next Angular app off on the right foot with clean code and great design.

1. Fuse Angular

This best-selling template is a 3-in-1 bundle, with Angular 7+, Bootstrap 4, and 21 layered PSD designs. Fuse is based on Google Material Design and comes with AoT compiler support, as well as a complete NgRx example app. This template includes configurable layouts, a skeleton project, built-in apps such as calendar, e-commerce, mail, and chat, and more than 20 pages to get you started.

Fuse supports all modern browsers (Chrome, Firefox, Safari, Edge) and comes with Bootstrap 4, HTML, and CSS versions, along with the Angular app.

User haseeb90 says:

“This is a great theme. Comes with pre-built apps that you just need to plug your logic and back end into. The code quality is great and stays up-to-date with the latest Angular versions.”

2. Pages Admin Dashboard Template

Pages is the simplest and fastest way to build a web UI for your dashboard or app. This beautifully designed UI framework comes with hundreds of customizable features, which means that you can style every layout to look exactly the way you want it to.

Pages is built with a clean, intuitive, and fully responsive design that works on all major browsers and devices. Featuring developer-friendly code, full Sass and RTL support, and five unique dashboard layouts, this Angular 5+ ready template boasts flawless design and a 5-star rating.

3. Clip-Two

Clip-Two is an advanced, fully responsive admin template built with AngularJS. AngularJS, the original version of the popular Angular framework, lets you extend the HTML vocabulary. The resulting environment is expressive, readable, and quick to develop in.

Using a Bootstrap UI, Clip-Two is mobile-friendly and comes with ready-to-customize themes with six different skins and infinite styles with Sass. This template includes features like four-level sidebar menus, CSS3 page transitions, a custom scrollbar for vertical scrollable content, dynamic pagination, and RTL functionality.

User hafizminhas says:

“This is one of the most outstanding Angular 1.x templates available in the market.”

4. Apex Admin Template

Apex is a powerful and flexible admin template based on Angular 6+ and Bootstrap 4. The Angular CLI makes it easy to maintain and modify this app template. With easy-to-understand code and a handy starter kit, this template works right out of the box. Apex includes multiple solid and gradient menu color options and sizes, with an organized folder structure and more than 500 components and 50 widgets.

This template is fully responsive, clean on every device and modern browser, and comes with AoT and lazy loading. Choose from a few pre-made layout options and customize with ready-to-use elements and popular UI components.

5. Stack Admin

With three niche dashboards, Stack Admin can be used for any type of web app: project management, e-commerce back ends, analytics, or any custom admin panels. This template looks great on all devices, and it comes with a kit to help developers get started quickly.

6. Fury

Clean, unique, and blazing fast, Fury is an admin template that offers you everything you need to get started with your next project. Built with Angular and Material Design, this template is the perfect framework for building large enterprise apps, and it allows for a modular component setup.

This template is designed to be lightweight and easy to customize. Features include completely customizable dashboard widgets and Angular Flex-Layout, to provide a fast and flexible way to create your layouts.

User CreativelyMe says:

"The code quality is exceptional. It's clearly the work of a true craftsman. This template is truly a joy to work with, and continues to evolve over time. Excellent!!!"7. Able Pro 7.0 Responsive Template

Able Pro 7.0 is a fully responsive admin template that provides a flexible solution for your project development. Built with a Bootstrap 4 framework, this template has a Material look, with well structured and commented code. This retina-ready template comes with more than 150 pages and infinite design possibilities—use the Live Customizer feature to do one-click checks on color combinations and layout variations.

With more than 100 external plugins included, advanced menu layout options, and ready-to-deploy dashboards and landing pages, Able Pro 7.0 will streamline your app development process to save you time and effort.

User macugi says:

“An amazing template. Very good design, good quality code and also very good customer support.”

Vue.js Admin Templates

1. Vuely

Vuely is a fully responsive admin template designed to give you a hassle-free development experience. Carefully crafted to look beautiful on mobile and tablet devices with pre-designed custom pages and integrated features like charts, graphs, and data tables, this template allows you to create your back-end panel with ease. More than 200 UI elements and 78 custom widgets simplify your development process.

Vuely is translation ready with RTL support and comes with multiple color and theme options to give you the flexibility you need.

Looking for a full featured admin template for your Vue.js project? Look no further. This Vue.js admin template is completely modular, so you can modify layouts, colors, and other features without disturbing the rest of the code. Simply customize it with the provided Sass variables. This template is well documented, with seven layout and multiple color scheme options. With all the components you need, this Vue.js template will get you started on your next dashboard build.

User JimOQuinn says:

“Wow, the look and feel of the theme has progressed substantially since the initial release. Great job! Love the no-jQuery framework as I find VueJS much easier to work with. Looking forward to the next release. Keep up the good work!”

Multi-Framework Admin Templates

This Material Design admin template provides high-performance, clean, and modern Vue, React, and Angular versions. This super flexible template uses SCSS, Gulp, Webpack, an NPM-based modern workflow, and Flexbox, and has all the components you need to create your front-end app project. With stunning layouts, over 500 components, and lifetime updates and customer support, this is one of the most complete admin apps available.

User themeuser55 says:

“This is an absolutely amazing theme. It is done with quality in mind, regularly updated, and you can tell the developer cares about his work and clients. I would suggest using this template if you are looking for a production worthy front end for a basis to any modern web application.”

2. Primer—Angular and React Admin Template

Primer is a creative Material Design admin template, with ahead-of-time (AoT) compilation for a more performant user experience. Fully responsive and packaged with both Angular and React versions, this template has left-to-right and right-to-left (LTR/RTL) support and light and dark colour schemes. Well documented and easy to customize, with this app template you get everything you need to start working on your SaaS, CRM, CMS, or dashboard based project.

User cjackett says:

“This is a very well built template with great flexibility and lots of options. The author continues to update and improve the template, extending its functionality and incorporating new angular packages. Excellent work.”

Conclusion

This is just a sample of the many app admin templates available on ThemeForest. There is a template for you, no matter what your style or specifications. These templates will make coding the front end of your app easier and help you deliver an app that provides a high-quality user experience. All this will save you time and effort, letting you focus on the real details of coding your project.

When undertaking any sort of performance optimisation work, one of the very first things we learn is that before you can improve performance you must first measure it. Without being able to measure the speed at which something is working, we can’t tell if the changes being made are improving the performance, having no effect, or even making things worse.

Many of us will be familiar with working on a performance problem at some level. That might be something as simple as trying to figure out why JavaScript on your page isn’t kicking in soon enough, or why images are taking too long to appear on bad hotel wifi. The answer to these sorts of questions is often found in a very familiar place: your browser’s developer tools.

Over the years, developer tools have been improved to help us troubleshoot these sorts of performance issues in the front end of our applications. Browsers now even have performance audits built right in. These can help track down front-end issues, but they can also reveal another source of slowness that we can’t fix in the browser: slow server response times.

Time to First Byte

There’s very little that browser optimisations can do to improve a page which is simply slow to build on the server. That cost is incurred between the browser making the request for the file and receiving the response. Studying your network waterfall chart in developer tools will show this delay under the category of “Waiting (TTFB)”. This is how long the browser waits between making the request and receiving the response.


In performance terms, this is known as Time to First Byte - the amount of time it takes before the server starts sending something the browser can begin to work with. Encompassed in that wait time is everything the server needs to do to build the page. For a typical site, that might involve routing the request to the correct part of the application, authenticating the request, making multiple calls to backend systems such as databases, and so on. It could involve running content through templating systems, making API calls out to third-party services, and maybe even things like sending emails or resizing images. Any work that the server does to complete a request is squashed into that TTFB wait that the user experiences in their browser.

Inspecting a document request shows the time the browser spends waiting for the response from the server.

So how do we reduce that time and start delivering the page more quickly to the user? Well, that’s a big question, and the answer depends on your application. That is the work of performance optimisation itself. What we need to do first is measure the performance so that the benefit of any changes can be judged.

The Server Timing Header

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

The way this is done is very simple, transparent to the user, and has minimal impact on your page weight. The information is sent as a simple set of HTTP response headers.

Server-Timing: db;dur=123, tmpl;dur=56

This example communicates two different timing points named db and tmpl. These aren’t part of the spec - these are names that we’ve picked, in this case to represent some database and template timings respectively.

The dur property states the number of milliseconds the operation took to complete. If we look at the request in the Network section of Developer Tools, we can see that the timings have been added to the chart.

A new Server Timing section appears, showing the timings set with the Server-Timing HTTP header.

The Server-Timing header can take multiple metrics separated by commas:

Server-Timing: metric, metric, metric

Each metric can specify three possible properties:

A short name for the metric (such as db in our example)

A duration in milliseconds (expressed as dur=123)

A description (expressed as desc="My Description")

Each property is separated with a semicolon as the delimiter. We could add descriptions to our example like so:

Server-Timing: db;dur=123;desc="Database", tmpl;dur=56;desc="Template processing"
The names are replaced with descriptions when provided.

The only property that is required is the name. Both dur and desc are optional and can be used where required. For example, if you needed to debug a timing problem that was happening on one server or data center and not another, it might be useful to add that information into the response without an associated timing.

The "East coast data center" value is shown, even though it has no timings.

One thing you might notice is that the timing bars don’t show up in a waterfall pattern. This is simply because Server Timing doesn’t attempt to communicate the sequence of timings, just the raw metrics themselves.

Implementing Server Timing

The exact implementation within your own application is going to depend on your specific circumstance, but the principles are the same. The steps are always going to be:

Time some activity on the server.

Put the header value together.

Output the header with the response.

The basics of implementing something along those lines should be straightforward in any language. A very simple PHP implementation could use the microtime() function for timing operations, and might look something along the lines of the following.
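A sketch of that idea; the three function calls are hypothetical stand-ins for real application work:

<?php
// Time a database operation.
$start = microtime(true);
perform_database_queries(); // hypothetical
$db = (microtime(true) - $start) * 1000; // seconds to milliseconds

// Time template processing.
$start = microtime(true);
render_templates(); // hypothetical
$tpl = (microtime(true) - $start) * 1000;

// Time a geocoding call.
$start = microtime(true);
geocode_locations(); // hypothetical
$geo = (microtime(true) - $start) * 1000;

// The header must be sent before any page content is output.
header(sprintf(
    'Server-Timing: db;dur=%f, tpl;dur=%f;desc="Templating", geo;dur=%f;desc="Geocoding"',
    $db,
    $tpl,
    $geo
));

The resulting header would look something like this: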

Server-Timing: db;dur=201.098919, tpl;dur=301.271915;desc="Templating", geo;dur=404.520988;desc="Geocoding"
The Server Timings set in the example show up in the Timings panel with the delays configured in our test script.
Existing Implementations

Considering how handy Server Timing is, there are relatively few implementations that I could find. The server-timing NPM package offers a convenient way to use Server Timing from Node projects.

If you use a middleware-based PHP framework, tuupola/server-timing-middleware provides a handy option too. I’ve been using that in production on Notist for a few months, and I always leave a few basic timings enabled, so take a look there if you’d like to see an example in the wild.

For browser support, the best I’ve seen is in Chrome DevTools, and that’s what I’ve used for the screenshots in this article.
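Beyond DevTools, supporting browsers also expose the metrics to JavaScript through the PerformanceServerTiming interface. A quick way to inspect them from the console:

// Log each Server-Timing metric attached to the page navigation.
const [navigation] = performance.getEntriesByType('navigation');
navigation.serverTiming.forEach(metric => {
  console.log(metric.name, metric.duration, metric.description);
});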

Considerations

Server Timing itself adds very minimal overhead to the HTTP response sent back over the wire. The header is very small and is generally safe to send without worrying about restricting it to internal users only. Even so, it’s worth keeping names and descriptions short so that you’re not adding unnecessary overhead.

More of a concern is the extra work you might be doing on the server to time your page or application. Adding extra timing and logging can itself have an impact on performance, so it’s worth implementing a way to turn this on and off when required.

Using a Server Timing header is a great way to make sure all timing information from both the front end and the back end of your application is accessible in one location. Provided your application isn’t too complex, it can be easy to implement, and you can be up and running within a very short amount of time.

If you’d like to read more about Server Timing, you might try the following:

The CSS Working Group At TPAC: What’s New In CSS?

Rachel Andrew

Last week, I attended W3C TPAC as well as the CSS Working Group meeting there. Various changes were made to specifications, and discussions were had which I feel are of interest to web designers and developers. In this article, I’ll explain a little bit about what happens at TPAC and show some examples and demos of the things we discussed for CSS in particular.

What Is TPAC?

TPAC is the Technical Plenary / Advisory Committee Meetings Week of the W3C: a chance for all of the various working groups that are part of the W3C to get together under one roof. The event is in a different part of the world each year; this year it was held in Lyon, France. At TPAC, Working Groups such as the CSS Working Group have their own meetings, just as we do at other times of the year. However, because we are all in one building, it means that people from other groups can more easily come as observers, and cross-working group interests can be discussed.

Attendees of TPAC are typically members of one or more of the Working Groups, working on W3C technologies. They will either be representatives of a member organization or Invited Experts. As with any other meetings of W3C Working Groups, the minutes of all of the discussions held at TPAC will be openly available, usually as IRC logs scribed during the meetings.

The CSS Working Group

The CSS Working Group meet face-to-face at TPAC and on at least two other occasions during the year; this is in addition to our weekly phone calls. At all of our meetings, the various issues raised on the specifications are discussed, and decisions made. Some issues are kept for face-to-face discussions due to the benefits of being able to have them with the whole group, or just being able to all get around a whiteboard or see a demo on screen.

When an issue is discussed in any meeting (whether face-to-face or teleconference), the relevant GitHub issue is updated with the minutes of the discussion. This means if you have an issue you want to keep track of, you can star it and see when it is updated. The full IRC minutes are also posted to the www-style mailing list.

Here is a selection of the things we discussed that I think will be of most interest to you.


CSS Scrollbars

The CSS Scrollbars specification seeks to give a standard way of styling the size and color of scrollbars. If you have Firefox Nightly, you can test it out. To see the examples below, use Firefox Nightly and enable the flags layout.css.scrollbar-width.enabled and layout.css.scrollbar-color.enabled by visiting about:config.

The specification gives us two new properties: scrollbar-width and scrollbar-color. The scrollbar-width property can take a value of auto, thin, none, or a length (such as 1em). It looks as if the length value may be removed from the specification. As you can imagine, a web developer could make a very unusable scrollbar by playing with the width, so it may be better to let the browser decide the exact width that makes sense and instead simply choose between thin and thick scrollbars. Firefox has not implemented the length option.

If you use auto as the value, then the browser will use the default scrollbars: thin will give you a thin scrollbar, and none will show no visible scrollbar (but the element will still be scrollable).

The scrollbar-color property deals with — as you would expect — scrollbar colors. A scrollbar has two parts which you may wish to color independently:

thumb
The slider that moves up and down as you scroll.

track
The scrollbar background.

The values for the scrollbar-color property are auto, dark, light and <color> <color>.

Using auto as a keyword value will give you the default scrollbar colors for that browser, dark will provide a dark scrollbar (either the dark mode of that platform or a custom dark mode), and light the light mode of the platform or a custom light mode.

To set your own colors, you add two colors as the value that are separated by a space. The first color will be used for the thumb and the second one for the track. You should take care that there is enough contrast between the colors, as otherwise the scrollbar may be difficult to use for some people.

In this example, I have set custom colors for the scrollbar elements.

In a browser with support for CSS Scrollbars, you can see this in the demo:
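Something along these lines, with an illustrative class name:

.scroll-box {
  width: 20em;
  height: 10em;
  overflow-y: scroll;
  /* A thin scrollbar with a purple thumb on a light grey track. */
  scrollbar-width: thin;
  scrollbar-color: rebeccapurple #dedede;
}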

Aspect Ratio Units

We’ve been using the padding hack in CSS to achieve aspect ratio boxes for some time. However, with the advent of Grid Layout and better ways of sizing content, having a real way to do aspect ratios in CSS has become a more pressing need.

There are two issues raised on GitHub which relate to this requirement:

There is now a draft spec in Level 4 of CSS Sizing, and the decision of the meeting was that this needed further discussion on GitHub before any decisions can be made. So, if you are interested in this, or have additional use cases, the CSS Working Group would be interested in your comments on those issues.

The :where() Functional Pseudo-Class

Last year, the CSSWG resolved to add a pseudo-class which acted like :matches() but with zero specificity, thus making it easy to override without needing to artificially inflate the specificity of later elements to override something in a default stylesheet.

The :matches() pseudo-class might be new to you as it is a Level 4 Selector; however, it allows you to specify a group of selectors to apply some CSS to. For example, you could write:

.foo a:hover,
p a:hover {
color: green;
}

Or with :matches()

:matches(.foo, p) a:hover {
color: green;
}

If you have ever had a big stack of selectors just in order to set the same couple of rules, you will see how useful this will be. The following CodePen uses the prefixed names :-webkit-any() and :-moz-any() to demonstrate the :matches() functionality. You can also read more about :matches() on MDN.

Where we often do this kind of stacking of selectors, and thus where :matches() will be most useful is in some kind of initial, default stylesheet. However, we then need to be careful when overwriting those defaults that any overwriting is done in a way that will ensure it is more specific than the defaults. It is for this reason that a zero specificity version was proposed.
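To illustrate with the name that was eventually chosen (see below), a zero-specificity default can be overridden by the simplest of selectors:

/* In the defaults: the :where() part adds no specificity at all. */
:where(.foo, p) a:hover {
  color: green;
}

/* Later: this equally specific selector now wins on source order alone. */
a:hover {
  color: red;
}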

The issue discussed in the meeting was in regard to naming this pseudo-class; you can see the final resolution here, and if you wonder why various names were ruled out, take a look at the full thread. Naming things in CSS is very hard — because we are all going to have to live with it forever! After a lot of debate, the group voted and decided to call this selector :where().

Since the meeting, and while I was writing up this post, a suggestion has been raised to rename :matches() to :is(). Take a look at the issue and comment if you have any strong feelings either way!

Logical Shorthands For Margins And Padding

On the subject of naming things, I’ve written about Logical Properties and Values here on Smashing Magazine in the past; take a look at “Understanding Logical Properties and Values”. These properties and values provide flow-relative mappings. This means that if you are using a writing mode other than a horizontal top-to-bottom one such as English, things like margins and padding, widths and heights follow the text direction and are not linked to the physical dimensions of the screen.

For example, for physical margins we have:

margin-top

margin-right

margin-bottom

margin-left

The logical mappings for these (assuming horizontal-tb) are:

margin-block-start

margin-inline-end

margin-block-end

margin-inline-start

We can have two-value shorthands. For example, to set both margin-block-start and margin-block-end as a shorthand, we can use margin-block: 20px 1em. The first value represents the start edge in the block dimension, the second value the end edge in the block dimension.

We hit a problem, however, when we come to the four-value shorthand margin. That property name is used for physical margins — how do we denote the logical four-value version? Various things have been suggested, including a switch at the top of the file:

@mode "logical";

Or, to use a block that looks a little like a media query:

@mode (flow-mode: relative) {
}

There were then various suggestions for keyword modifiers, for using some punctuation character, or for creating a brand new property name.

You can read the issue to see the various things that are being considered. Issues discussed were that while the logical version may well end up being generally the default, sometimes you will want things to relate to the screen geometry; we need to be able to have both options in one stylesheet. Having a @mode setting at the top of the CSS could be confusing; it would fail if someone were to copy and paste a chunk of the stylesheet.

My preference is to have some sort of keyword value. That way, if you look at the rule, you can see exactly which mode is being used, even if it does seem slightly inelegant. It is the sort of thing that a preprocessor could deal with for you if you did indeed want all of your properties and values to use the logical versions.
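To illustrate, and only as an illustration — none of this is real, shipped CSS — a keyword version might look something like:

.box {
  /* Hypothetical syntax: the keyword would opt this one declaration
     into flow-relative mapping of the four values. */
  margin: logical 1em 2em 3em 4em;
}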

We didn’t manage to resolve on the issue, so if you do have thoughts on which of these might be best, or can see problems with them that we haven’t described, please comment on the issue on GitHub.

Web Platform Tests Discussion

At the CSS Working Group meeting, and then during the unconference-style Technical Plenary Day, I was involved in discussing how to get more people involved in writing tests for CSS specifications. The Web Platform Tests project aims to provide tests for all of the web platform. These tests then help browser vendors check whether their browser matches the spec. In the CSS Working Group, the aim is that any normative change to a specification which has reached Candidate Recommendation (CR) status should be accompanied by a test. This makes sense, as once a spec is in CR, we are asking browsers to implement that spec and provide feedback. They need to know if anything in the spec changes so they can update their code.

The problem is that we have very few people writing specs, so for spec writers to have to write all the tests will slow the progress of CSS down. We would love to see other people writing tests, as it is a way to contribute to the web platform and to gain deep knowledge of how specifications work. So we met to think about how we could encourage people to participate in the effort. I’ve written on this subject in the past; if the idea of writing tests for the platform interests you, take a look at my 24 Ways article on “Testing the Web Platform”.

On With The Work!

TPAC has added to my personal to-do list considerably. However, I’ve been able to pick up tips about specification editing and test writing, and to come up with a plan to get the Multi-Column Layout specification — of which I’m the co-editor — back to CR status. As someone who is not a fan of meetings, I’ve come to see how valuable these face-to-face meetings are for the web platform, giving those of us contributing to it a chance to share the knowledge we are each developing. I feel it is important, though, to then take that knowledge and share it outside of the group, in order to help more people get involved with developing as well as using the platform.

Headless WordPress: The Ups And Downs Of Creating A Decoupled WordPress
Denis Žoljom
2018-10-26T13:45:46+02:00
2018-10-27T11:17:44+00:00

WordPress has come a long way from its start as a simple blog-writing tool. Fifteen years later, it has become the number one CMS choice for developers and non-developers alike. WordPress now powers roughly 30% of the top 10 million sites on the web.

Ever since the REST API was bundled into WordPress core, developers have been able to experiment with it and use it in a decoupled way, i.e. writing the front-end part using JavaScript frameworks or libraries. At Infinum, we were (and still are) using WordPress in a ‘classic’ way: PHP for the frontend as well as the backend. After a while, we wanted to give the decoupled approach a go. In this article, I’ll share an overview of what it was that we wanted to achieve and what we encountered while trying to implement our goals.

There are several types of projects that can benefit from this approach. For example, simple presentational sites or sites that use WordPress as a backend are the main candidates for the decoupled approach.

In recent years, the industry thankfully started paying more attention to performance. However, being an easy-to-use inclusive and versatile piece of software, WordPress comes with a plethora of options that are not necessarily utilized in each and every project. As a result, website performance can suffer.


When it comes down to how WordPress is programmed, one thing is certain: it doesn’t follow the Model-View-Controller (MVC) design pattern that many developers are familiar with. Because of its history, and because it is sort of a fork of an old blogging platform called “b2” (more details here), it’s largely written in a procedural way (using function-based code). WordPress core developers use a system of hooks which allows other developers to modify or extend certain functionalities.

It’s an all-in-one system that is equipped with a working admin interface; it manages database connection, and has a bunch of useful APIs exposed that handle user authentication, routing, and more.

But thanks to the REST API, you can separate the WordPress backend as a sort of model and controller bundled together that handles data manipulation and database interaction, and use the REST API controller to interact with a separate view layer through various API endpoints. In addition to MVC separation, we can (for security reasons or speed improvements) place the JS app on a separate server.

One reason why you may want to use this approach is to ensure a separation of concerns. The frontend and the backend interact via endpoints; each can be on its own server, which can be optimized specifically for its respective task, i.e. separately running a PHP app and running a Node.js app.

By separating your frontend from the backend, it’s easier to redesign it in the future, without changing the CMS. Also, front-end developers only need to care about what to do with the data the backend provides them. This lets them get creative and use modern libraries like ReactJS, Vue or Angular to deliver highly dynamic web apps. For example, it’s easier to build a progressive web app when using the aforementioned libraries.

Another advantage is reflected in the website security. Exploiting the website through the backend becomes more difficult since it’s largely hidden from the public.

That said, the decoupled approach comes with downsides as well. First, having a decoupled WordPress means maintaining two separate instances:

WordPress for the backend;

A separate front-end app, including timely security updates.

Second, some of the front-end libraries do have a steeper learning curve. It will either take a lot of time to learn a new language (if you are only accustomed to HTML and CSS for templating), or will require bringing additional JavaScript experts to the project.

Third, by separating the frontend, you are losing the power of the WYSIWYG editor, and the ‘Live Preview’ button in WordPress doesn’t work either.

Working With WordPress REST API

Before we delve deeper into the code, a couple more things about the WordPress REST API. The full power of the REST API in WordPress came with version 4.7 on December 6th, 2016.

What WordPress REST API allows you to do is to interact with your WordPress installation remotely by sending and receiving JSON objects.

Setting Up A Project

Since it comes bundled with the latest WordPress installation, we will be working with the Twenty Seventeen theme. I’m working on Varying Vagrant Vagrants, and have set up a test site with the URL http://dev.wordpress.test/. This URL will be used throughout the article. We’ll also import posts from the wordpress.org Theme Review Team’s repository so that we have some test data to work with. But first, we will get familiar with the default endpoints, and then we’ll create our own custom endpoint.

Access The Default REST Endpoint

As already mentioned, WordPress comes with several built-in endpoints that you can examine by going to the /wp-json/ route:

http://dev.wordpress.test/wp-json/

Either by putting this URL directly in your browser or adding it to the Postman app, you’ll get a JSON response from the WordPress REST API.
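Trimmed right down, the shape of that index response is roughly this (values here are placeholders):

{
  "name": "Test Site",
  "description": "Just another WordPress site",
  "url": "http://dev.wordpress.test",
  "namespaces": [ "wp/v2", "oembed/1.0" ],
  "routes": {
    "/wp/v2/posts": { "methods": [ "GET", "POST" ] }
  }
}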

So in order to get all of the posts in our site by using REST, we would need to go to http://dev.wordpress.test/wp-json/wp/v2/posts. Notice that the wp/v2/ marks the reserved core endpoints like posts, pages, media, taxonomies, categories, and so on.

So, how do we add a custom endpoint?

Create A Custom REST Endpoint

Let’s say we want to add a new endpoint or an additional field to an existing endpoint. There are several ways we can do that. First, one can be created automatically when registering a custom post type. For instance, say we want to create a documentation endpoint. Let’s create a small test plugin: create a test-documentation folder in the wp-content/plugins folder, and add a documentation.php file.
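A minimal sketch of what that file could contain (labels and argument lists are illustrative; the crucial part is show_in_rest):

<?php
/**
 * Plugin Name: Test Documentation
 */

// Register a custom post type and taxonomy, exposing both over the REST API.
function test_documentation_register() {
    register_post_type( 'documentation', array(
        'label'        => 'Documentation',
        'public'       => true,
        'show_in_rest' => true,
    ) );

    register_taxonomy( 'documentation-category', 'documentation', array(
        'label'        => 'Documentation Categories',
        'public'       => true,
        'show_in_rest' => true,
    ) );
}
add_action( 'init', 'test_documentation_register' );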

By registering the new post type and taxonomy, and setting the show_in_rest argument to true, WordPress automatically creates a REST route in the /wp/v2/ namespace. You now have http://dev.wordpress.test/wp-json/wp/v2/documentation and http://dev.wordpress.test/wp-json/wp/v2/documentation-category endpoints available. If we add a post in our newly created documentation custom post type by going to http://dev.wordpress.test/?post_type=documentation, the new endpoint will give us a response that looks like this:

This is a great starting point for our single-page application. Another way we can add a custom endpoint is by hooking into the rest_api_init hook and creating the endpoint ourselves. Let’s add a custom-documentation route that is a bit different from the one we registered. Still working in the same plugin, we can add a route registration of our own.
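Here is a sketch of that addition (the namespace, route, and callback names are mine):

// Register a hand-rolled route in a custom namespace.
function test_documentation_register_route() {
    register_rest_route( 'documentation-plugin/v1', '/custom-documentation', array(
        'methods'             => 'GET',
        'callback'            => 'test_documentation_get_items',
        'permission_callback' => '__return_true', // public, read-only
    ) );
}
add_action( 'rest_api_init', 'test_documentation_register_route' );

// Return a simplified list of documentation posts.
function test_documentation_get_items( WP_REST_Request $request ) {
    $posts = get_posts( array(
        'post_type'      => 'documentation',
        'posts_per_page' => 10,
    ) );

    return rest_ensure_response( $posts );
}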

There are a lot of other things you can do with REST API (you can find more details in the REST API handbook).

Work Around Long Response Times When Using The Default REST API

For anyone who has tried to build a decoupled WordPress site, this is not news — the REST API is slow.

My team and I first encountered the strangely laggy WordPress REST API on a client site (not decoupled), where we used custom endpoints to get the list of locations on a Google map, alongside other meta information created using the Advanced Custom Fields Pro plugin. It turned out that the time to first byte (TTFB) — which is used as an indication of the responsiveness of a web server or other network resource — was more than 3 seconds.

After a bit of investigating, we realized the default REST API calls were actually really slow, especially when we “burdened” the site with additional plugins. So, we did a small test. We installed a couple of popular plugins and encountered some interesting results. The Postman app gave a load time of 1.97 s for a 41.9 KB response. Chrome’s load time was 1.25 s (TTFB was 1.25 s, and the content was downloaded in 3.96 ms). All of that just to retrieve a simple list of posts: no taxonomy, no user data, no additional meta fields.

Why did this happen?

It turns out that accessing the REST API on a default WordPress install will load the entire WordPress core to serve the endpoints, even though much of it is not used. Also, the more plugins you add, the worse things get. The default REST controller, WP_REST_Controller, is a really big class that does a lot more than is necessary when building a simple web page. It handles route registration, permission checks, creating and deleting items, and so on.

There are two common workarounds for this issue:

Intercept the loading of the plugins, and prevent loading them all when you need to serve a simple REST response;

Load only the bare minimum of WordPress and store the data in a transient, from which we then fetch the data using a custom page.

Improving Performance With The Decoupled JSON Approach

When you are working with simple presentation sites, you don’t need all the functionality the REST API offers. Of course, this is where good planning is crucial. You really don’t want to build your site without the REST API and then realize in a year’s time that you’d like to connect another app to your site, or maybe create a mobile app that needs REST API functionality. Do you?

For that reason, we utilized two WordPress features that can help you out when serving simple JSON data:

The define( 'SHORTINIT', true ); call will prevent the majority of WordPress core files from being loaded, as can be seen in the wp-settings.php file.

We still may need some WordPress functionality, so we can require the file (like wp-load.php) manually. Since wp-load.php sits in the root of our WordPress installation, we will fetch it by getting the path of our file using $_SERVER['SCRIPT_FILENAME'], and then exploding that string by the wp-content string. This will return an array with two values:

The root of our installation;

The rest of the file path (which is of no interest to us).

Keep in mind that we’re using the default installation of WordPress, and not a modified one, like for example in the Bedrock boilerplate, which splits the WordPress in a different file organization.

Lastly, we require the wp-load.php file, with a little bit of sanitization, for security.
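Put together, that bootstrap is only a few lines. Here is a sketch (assuming, as in our case, that the file lives somewhere below wp-content/):

<?php
// Load only the bare minimum of WordPress core.
define( 'SHORTINIT', true );

// Everything before 'wp-content' in this file's path is the installation root.
$parse_uri = explode( 'wp-content', $_SERVER['SCRIPT_FILENAME'] );

// Require wp-load.php from the root, with a little sanitization for security.
require_once filter_var( $parse_uri[0] . 'wp-load.php', FILTER_SANITIZE_STRING );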

The helper methods in the plugin code will enable us to do some caching:

get_allowed_post_types()
This method defines the post types for which we want to enable showing in our custom ‘endpoint’. You can extend this; in the plugin we’ve made, this method is filterable, so you can just use a filter to add additional items.

is_post_type_allowed_to_save()
This method simply checks to see if the post type we’re trying to fetch the data from is in the allowed array specified by the previous method.

get_page_cache_name_by_slug()
This method will return the name of the transient that the data will be fetched from.

get_page_data_by_slug()
This is the method that performs a WP_Query for the post via its slug and post type, and returns the contents of the post array that we’ll convert to JSON using the get_json_page() method.

update_page_transient()
This will be run on the save_post hook and will overwrite the transient in the database with the JSON data of our post. This last one is the key caching method.

Let’s explain transients in more depth.

Transients API

The Transients API is used to store data in the options table of your WordPress database for a specific period of time. It’s a persisted object cache, meaning that you are storing some object (for example, the results of big and slow queries, or full pages) that can be persisted across page loads. It is similar to the regular WordPress Object Cache, but unlike WP_Cache, transients will persist data across page loads, whereas WP_Cache (storing the data in memory) will only hold the data for the duration of a request.

It’s a key-value store, meaning that we can easily and quickly fetch the desired data, similar to what in-memory caching systems like Memcached or Redis do. The difference is that you’d usually need to install those separately on the server (which can be an issue on shared servers), whereas transients are built in with WordPress.

As noted on its Codex page, transients are inherently sped up by caching plugins, since those can store transients in memory instead of in the database. The general rule is that you shouldn’t assume that a transient is always present in the database, which is why it’s good practice to check for its existence before fetching it.
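In code, that check-before-use pattern looks something like this (the key name and the rebuild helper are placeholders):

$cache_key = 'jt_post_hello-world';        // hypothetical transient name
$data      = get_transient( $cache_key );

if ( false === $data ) {
    // Transient missing (or evicted by a caching plugin): rebuild it.
    $data = rebuild_page_data();           // hypothetical helper
    set_transient( $cache_key, $data, 0 ); // 0 = no expiration
}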

You can use transients without an expiration time (as we are doing), which is why we implemented a sort of ‘cache busting’ on post save. In addition to all the great functionality they provide, transients can hold up to 4 GB of data, but we don’t recommend storing anything nearly that big in a single database field.

The last piece of the puzzle that we need is an ‘endpoint’. I’m using the term endpoint here even though it’s not a true endpoint, since we are directly calling a specific file to fetch our results. So we can create a test.php file in the plugin folder.
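Here is a sketch of that file, assuming the SHORTINIT bootstrap described above and a transient naming scheme of my own invention:

<?php
// test.php: a directly-called file acting as our pseudo 'endpoint'.
define( 'SHORTINIT', true );
$parse_uri = explode( 'wp-content', $_SERVER['SCRIPT_FILENAME'] );
require_once filter_var( $parse_uri[0] . 'wp-load.php', FILTER_SANITIZE_STRING );

header( 'Content-Type: application/json' );

if ( empty( $_GET['slug'] ) || empty( $_GET['type'] ) ) {
    die( 'Error, page slug or type is missing!' );
}

// Basic sanitization without relying on helpers SHORTINIT may not load.
$slug = preg_replace( '/[^a-z0-9-]/', '', strtolower( $_GET['slug'] ) );
$type = preg_replace( '/[^a-z0-9-]/', '', strtolower( $_GET['type'] ) );

$cache = get_transient( "jt_data_{$type}_{$slug}" ); // hypothetical key format

if ( false === $cache ) {
    die( 'Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!' );
}

echo $cache; // in this sketch the transient already holds a JSON string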

If we go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php, we’ll see this message:

Error, page slug or type is missing!

So, we’ll need to specify the post type and post slug. When we now go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php?slug=hello-world&type=post we’ll see:

Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!

Oh, wait! We need to re-save our pages and posts first. If you’re just starting out, this is easy. But if you already have 100+ pages or posts, it can be a challenging task. This is why we implemented a way to clear the transients in the Decoupled JSON Content plugin, and to rebuild them in a batch.

But go ahead and re-save the Hello World post and then open the link again. What you should now have is something that looks like this:

And that’s it. The plugin we made has some extra functionality that you can use, but, in a nutshell, this is how you can fetch JSON data from your WordPress installation in a way that is much faster than using the REST API.

Before And After: Improved Response Time

We conducted testing in Chrome, where we could see the total response time and the TTFB separately. We tested response times ten times in a row: first without plugins and then with the plugins added. Also, we tested the response for a list of posts and for a single post.

The results of the test are illustrated in the tables below:

Comparison graph depicting response times of using the WordPress REST API vs. using the decoupled approach without added plugins. The decoupled approach is 2 to 3 times faster.
Comparison graph depicting response times of using the WordPress REST API vs. using the decoupled approach with added plugins. The decoupled approach is up to 8 times faster.

As you can see, the difference is drastic.

Security Concerns

There are some caveats that you’ll need to take a good look at. First of all, we are manually loading WordPress core files which, in the WordPress world, is a big no-no. Why? Well, besides the fact that manually fetching core files can be tricky (especially if you’re using nonstandard installations such as Bedrock), it could pose some security concerns.

If you decide to use the method described in this article, be sure you know how to fortify your server security.

The first security measure is a CORS header, ensuring that only your front-end app can fetch the contents when calling the specified file.
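For example (the origin is a placeholder for your front-end app’s URL):

// Allow only the front-end app's origin to read responses from this file.
header( 'Access-Control-Allow-Origin: https://frontend.example.com' );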

Second, disable directory traversal of your app. You can do this by modifying nginx settings, or add Options -Indexes to your .htaccess file if you’re on an Apache server.

Adding a token check to the response is also a good measure that can prevent unwanted access. We are actually working on a way to modify our Decoupled JSON plugin so that we can include these security measures by default.

A check for an Authorization header sent by the frontend app could look like this:
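Here is a minimal sketch of such a check (the header format and the shared secret are assumptions):

// Compare the Authorization header against a secret shared with the front end.
$shared_secret = 'change-me-to-a-long-random-value'; // hypothetical secret
$auth_header   = isset( $_SERVER['HTTP_AUTHORIZATION'] ) ? $_SERVER['HTTP_AUTHORIZATION'] : '';

if ( ! hash_equals( 'Bearer ' . $shared_secret, $auth_header ) ) {
    header( 'HTTP/1.1 403 Forbidden' );
    die( 'Invalid or missing token.' );
}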

Then you can check if the specific token (a secret that is only shared by the front- and back-end apps) is provided and correct.

Conclusion

The REST API is great because it can be used to create fully fledged apps — creating, retrieving, updating, and deleting data. The downside of using it is its speed.

Obviously, creating an app is different than creating a classic website. You probably won’t need all the plugins we installed. But if you just need the data for presentational purposes, caching data and serving it in a custom file seems like the perfect solution at the moment, when working with decoupled sites.

The method explained in this article solves the speed issue that the WordPress REST API encounters, and it will give you an extra boost when working on a decoupled WordPress project. As we are on our never-ending quest to squeeze that last millisecond out of every request and response, we plan to optimize the plugin even more. In the meantime, please share your ideas on speeding up decoupled WordPress!

In my previous post, I examined video trends on the web today, using data from the HTTP Archive. I found that many websites serve the same video content on mobile and desktop, and that many video streams are being delivered at bitrates that are too high to play back on 3G-speed connections. We also discovered that many websites automatically download video to mobile devices, damaging customers’ data plans and battery life for videos that might never be played.

TL;DR: In this post, we look at techniques to optimize the speed and delivery of video to your customers, and provide a list of 9 best practices to help you deliver your video assets.

Video Playback Metrics

There are 3 principal video playback metrics in use today:

Video Startup Time

Video Stalling

Video Quality

Since video files are large, optimizing the video to be as small as possible will lead to faster video delivery, speeding up video start, lowering the number of stalls, and minimizing the effect of the quality of the video delivered. Of course, we need to balance startup speed and stalling with the third metric of quality (and higher quality videos generally use more data).

Video Startup

When a user presses play on a video, they expect to be able to watch the video quickly. According to Conviva (a leader in video metric analysis), in Q1 of 2018, 14% of videos never started playing after the user pressed play (that’s 2.4 billion video plays).

2.3% of videos (400M video requests) failed to play after the user pressed the play button. 11.54% (2B plays) were abandoned by the user after pressing play. Let’s try to break down what might be causing these issues.


Video playback failure accounted for 2.3% of all video plays. What could lead to this? In the HTTP Archive data, we see 0.3% of all video requests resulting in a 4xx or 5xx HTTP response — so some percentage fail due to bad URLs or server misconfigurations. Another potential issue (not observable in the HTTP Archive data) is videos that are blocked by geolocation (blocked based on the location of the viewer and the licensing of the provider to display the video in that locale).

Video Playback Abandonment

The Conviva report states that 11.5% of all video plays would have played, but the customer abandoned the playback before the video started. The issue here is that the video is not being delivered to the customer fast enough, and they give up. There are many studies on the mobile web showing that long delays cause abandonment of web pages, and it appears that the same effect occurs with video playback as well.

Research from Akamai shows that viewers will wait for 2 seconds, but for each subsequent second, 5.8% of viewers abandon the video.

So what leads to video playback issues? In general, larger files take longer to download, so will delay playback. Let’s look at a few ways that one can speed up the playback of videos. To reduce the number of videos abandoned at startup, we should ‘slim’ down these files as best as possible, so they download (and begin playback) quickly.

MP4: Video Preload

To ensure fast playback on the web, one option is to preload the video onto the device in advance. That way, when your customer clicks ‘play’ the video is already downloaded, and playback will be fast. HTML offers a preload attribute with 3 possible options: auto, metadata and none.
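As a quick sketch (file names are placeholders):

<video controls preload="metadata" poster="poster.jpg">
  <source src="movie.mp4" type="video/mp4">
</video>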

preload="auto"

When your video is delivered with preload="auto", the browser downloads the entire video file and stores it locally. This permits a large performance improvement for video startup, since the video is available locally on the device, and no network interference will slow the startup.

However, preload="auto" should only be used if there is a high probability that the video will be viewed. If the video is simply resident on your webpage, and it is downloaded each time, this will add a large data penalty to your mobile users, as well as increase your server/CDN costs for delivering the entire video to all of your users.

One website I examined has a section entitled “Video Gallery” containing several videos. Each video in this section has preload set to auto, and we can visualize their downloads in the WebPageTest waterfall as green horizontal lines. The videos for this small section account for 14.8 MB (83%) of the page download, yet the odds that any one (of many) videos will be played are probably pretty low, so utilizing preload="auto" only generates a lot of data traffic for the site.

In this case, it is unlikely that even one of these videos will be viewed, yet all of them are downloaded completely. For videos that have a high probability of playback (perhaps >90% of page views result in video play), preloading the entire video is a very good idea. But for videos that are unlikely to be played, preload="auto" will only cause extra tonnage of content through your servers and to your customers’ mobile (and desktop) devices.

preload="metadata"

When the preload="metadata" attribute is used, an initial segment of the video is downloaded. This allows the player to know the size of the video window, and perhaps to have a second or two of video downloaded for immediate playback. The browser simply makes a 206 (Partial Content) request for the video content. By storing a small bit of video data on the device, video startup time is decreased, without a large impact on the amount of data transferred.

On Chrome, metadata is the default choice if no attribute is chosen.

Note: This can still lead to a large amount of video being downloaded, if the video is large.

For example, on a mobile website with a video set at preload="metadata", we see just one request for video:

And the request is a partial download, but it still results in 2.7 MB of video being downloaded, because the full video is 1080p, 150 s long, and 97 MB (we’ll talk about optimizing video size in the next sections).

So, I would recommend that preload="metadata" still only be used when there is a fairly high probability that the video will be viewed by your users, or if the video is small.

preload="none"

This is the most economical download option for videos, as no video files are downloaded when the page is loaded. It will potentially add a delay in playback, but will result in a faster initial page load. For sites with many videos on a single page, it may make sense to add a poster to the video window, and not download any of the video until it is expressly requested by the end user. All YouTube videos that are embedded on websites never download any video content until the play button is pressed, essentially behaving as if preload="none".

Preload Best Practice: Only use preload="auto" if there is a high probability that the video will be watched. In general, the use of preload="metadata" provides a good balance in data usage vs. startup time, but should be monitored for excessive data usage.

MP4 Video Playback Tips

Now that the video has started, how can we ensure that playback is optimized so the video doesn’t stall and keeps playing? Again, the trick is to make sure the video is as small as possible.

Let’s look at some tricks to optimize the size of video downloads. There are several dimensions of video that can be optimized to reduce the size of the video:

Audio

Video files are split into different “streams”, the most common being the video stream. The second most common stream is the audio track that syncs to the video. In some video playback applications, the audio stream is delivered separately; this allows for different languages to be delivered in a seamless manner.

If your video is played back silently (like a looping GIF, or a background video), removing the audio stream from the video is a quick and easy way to reduce the file size. In one example of a background video, the full file was 5.3 MB, but the audio track (which is never heard) was nearly 300 KB (5% of the file). By simply eliminating the audio, the file will be delivered quickly without wasting bytes.

42% of the MP4 files found on the HTTP Archive have no audio stream.

Best Practice: Remove the audio tracks from videos that are played silently.
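If you have FFmpeg available, stripping the audio is a one-liner (file names are mine); -an drops the audio stream, while -c:v copy copies the video stream without re-encoding:

ffmpeg -i background.mp4 -c:v copy -an background-silent.mp4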

Video Encoding

When encoding a video, there are options to reduce the video quality (the number of pixels per frame, or the frames per second). Reducing a high-quality video to be suitable for the web is easy to do, and generally does not affect the quality delivered to your end users. This article is not long enough for an in-depth discussion of all the various compression techniques available for video, but as a quick pointer: in the x264 and x265 encoders, there is a setting called the Constant Rate Factor (CRF). Using a CRF of 23-28 will generally give a good compression/quality trade-off, and is a great first step into the realm of video compression.
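For instance, with FFmpeg and the x264 encoder, a first pass at compression might look like this (file names and the exact CRF value are illustrative):

ffmpeg -i source.mp4 -c:v libx264 -crf 26 output.mp4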

Video Size

Video size can be affected by many dimensions: length, width, and height (you could probably include audio here as well).

Video Duration

The length of the video is generally not a feature that a web developer can adjust. If the video is going to playback for three minutes, it is going to playback for three minutes. In cases in which the video is exceptionally long, tools like preload="none" or streaming the video can allow for a smaller amount of data to be downloaded initially to reduce page load time.

Video Dimensions

18% of all video found in the HTTP Archive is identical on mobile and desktop. Those who have worked with responsive web design know how optimizing images for different viewports can drastically reduce load times since the size of the images is much smaller for smaller screens.

The same holds for video. A website with a 30 MB 2560×1226 background video will have a hard time downloading the video on mobile (and probably on desktop, too!). Resizing the video drastically decreases the file size, and might even allow for three or four different background videos to be served:

Width (px)    Video (MB)
1226          30
1080          8.1
720           4.3
608           3.3
405           1.76

Now, unfortunately, browsers do not support media queries for video in HTML, meaning that this just does not work:
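For the record, the non-working markup is along these lines; the media attribute on a source element inside video is ignored (file names are placeholders):

<video controls>
  <source src="small.mp4" media="(max-width: 600px)" type="video/mp4">
  <source src="large.mp4" type="video/mp4">
</video>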

Therefore, we’ll need to create a small JS wrapper to deliver the videos we want to different screen sizes. But before we go there…

Downloading Video, But Hiding It From View

Another throwback to the early responsive web is to download full-size images, but to hide them on mobile devices. Your customers get all the delay of downloading the large images (plus the hit to their mobile data plan, the extra battery drain, etc.), and none of the benefit of actually seeing the image. This occurs quite frequently with video on mobile. So, as we write our script, we can ensure that smaller screens never request the video that will not appear in the first place.

Retina Quality Videos

You may have different videos for different device screen densities. This can add to the time needed to download the videos to your mobile customers’ devices. You may wish to prevent retina-quality videos on smaller-screen devices, or on devices with limited network bandwidth, falling back to standard-quality videos for these devices. Tools like the Network Information API can provide you with the network throughput, and help you decide which video quality you’d like to serve to your customer.

Downloading Different Video Types Based On Device Size And Network Quality

We’ve just covered a few ways to optimize the delivery of movies to smaller screens, and also noted the inability of the video tag to choose between video sources, so here is a quick JS snippet that uses the screen width to pick the appropriate video, as sketched below.
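The sketch below shows the idea; the element id, file names, and breakpoint are my own placeholders:

var video = document.getElementById( 'hero-video' ); // hypothetical element

// Choose a smaller file for narrow screens, the full-size file otherwise.
if ( window.matchMedia( '(max-width: 720px)' ).matches ) {
  video.src = 'video-mobile.mp4';
} else {
  video.src = 'video-desktop.mp4';
}
video.load();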

Our page has a responsive video with two different sizes: one for mobile, and another for desktop-sized screens. Mobile users get great video quality, but the file is only 2.6 MB, compared to the 10 MB video for desktop.

Animated GIFs

Animated GIFs are big files. While both aGIFs and video files compress the data through the width and height dimensions, only video files have compression on the (often larger) time axis. aGIFs are essentially “flipping through” static GIF images quickly. This lack of compression adds a significant amount of data. Thankfully, it is possible to replace aGIFs with a looping video, potentially saving MBs of data for each request.

<video loop autoplay muted src="pseudoGif.mp4">

In Safari, there is an even fancier approach: you can place a looping mp4 in the picture tag.
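That markup could look like this (file names are placeholders):

<picture>
  <source type="video/mp4" srcset="looping.mp4">
  <source type="image/webp" srcset="looping.webp">
  <img src="looping.gif" alt="Looping animation">
</picture>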

In this case, Safari will play the looping MP4, while Chrome (and other browsers that support WebP) will play the animated WebP, with a fallback to the animated GIF. You can read more about this approach in Colin Bendell’s great post.

Third-Party Videos

One of the easiest ways to add video to your website is to simply copy and paste the code from a video sharing service and put it on your site. However, just like adding any third party to your site, you need to be vigilant about what kind of content is added to your page, and how it will affect page load. Many of these “simply paste this into your HTML” widgets add hundreds of KB of JavaScript. Others will download the entire movie (think preload="auto"), and some will do both.

Third-Party Video Best Practice: Trust but verify. Examine how much content is added, and how much it affects your page load time. Also, the behavior might change, so track with your analytics regularly.

Streaming Startup

When a video stream is requested, the server supplies a manifest file to the player, listing every available stream (with dimensions and bitrate information). In HLS streaming, the player generally chooses the first stream in the list to begin playback. Therefore, the stream positioned first in the manifest file should be optimized for video startup on both mobile and desktop (or perhaps alternative manifest files should be delivered to mobile vs. desktop).

In most cases, the startup is optimized by using a lower quality stream to begin playback. Once the player downloads a few segments, it has a better idea of available throughput and can select a higher quality stream for later segments. As a user, you have likely seen this — where the first few seconds of a video looks very pixelated, but a few seconds into playback the video sharpens.

In examining 1,065 manifest files delivered to mobile devices from the HTTP Archive, we find that 59% of videos have an initial bitrate under 1.2 MBPS — and are likely to start streaming without much delay at 1.6 MBPS 3G data rates. 11% use a bitrate between 1.2 and 1.6 MBPS — which may slow the startup on 3G — and 30% have a bitrate above 1.6 MBPS and are unable to play back at this bitrate on a 3G connection. Based on this data, it appears that ~41% of all videos will not be able to sustain the initial bitrate on mobile — adding to startup delay, and possibly increasing the number of stalls during playback.

Streaming Startup Best Practice: Ensure your initial bitrate in the manifest file is one that will work for most of your customers. If the player has to change streams during startup, playback will be delayed and you will lose video views.

So, what happens when the video’s bitrate is near (or above) the available throughput? After a few seconds of downloading without a completed video segment ready for playback, the player stops the download, chooses a lower-quality bitrate video, and begins the process again. The action of downloading a video segment and then abandoning it adds startup delay, which leads to video abandonment.

We can visualize this by building video manifests with different initial bitrates. We test 3 different scenarios: starting with the lowest (215 KBPS), middle (600 KBPS), and highest bitrate (2.6 MBPS).

When beginning with the lowest quality video, playback begins at 11s. After a few seconds, the player begins requesting a higher quality stream, and the picture sharpens.

When starting with the highest bitrate (testing on a 3G connection at 1.6 MBPS), the player quickly realizes that playback cannot occur, and switches to the lowest bitrate video (215 KBPS). The video starts playing at 17s. There is a 6-second delay, and the video quality is the same low quality delivered in the first test.

Using the middle-quality video allows for a bit of a tradeoff: the video begins playing at 13s (two seconds slower), but it is high quality from the start, and there is no jump from pixelated to higher-quality video.

Best Practice for Video Startup: For fastest playback, start with the lowest quality stream. For longer videos, you might consider using a ‘middle quality’ stream at start to deliver sharp video at startup (with a slightly longer delay).

WebPageTest results: Initial video stream is low, middle and high (from top to bottom). The video starts the fastest with the lowest quality video. It is important to note that the high quality start video at 17s is the same quality as the low quality start at 11s.

Streaming: Continuing Playback

When the video player can determine the optimal video stream for playback, and the stream is lower than the available network speed, the video will play back with no issues. There are tricks, though, that can help ensure that the video is delivered in an optimal manner. Consider, for example, a manifest entry like the one described below.
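Reconstructing from the numbers discussed in the next paragraph, such an HLS entry looks roughly like this (the URL is a placeholder):

#EXT-X-STREAM-INF:BANDWIDTH=913000,RESOLUTION=640x360
https://videoserver.example/stream/600k/playlist.m3u8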

The information line reports that this stream has a 913 KBPS bitrate, and 640×360 resolution. If we look at the URL that this line points to, we see that it references a 600k video. Examining the video files shows that the video is 600 KBPS, and the manifest is overstating the bitrate.

Overstating The Video Bitrate

PRO
Overstating the bitrate will ensure that when the player chooses a stream, the video will download faster than expected, and the buffer will fill up faster than expected, reducing the possibility of a stall.

CON
By overstating the bitrate, the video delivered will be a lower quality stream. If we look at the entire list of reported vs. actual bitrates:

For users on a 1.6 MBPS connection, the player will choose the 913 KBPS bitrate, serving the customer 600 KBPS video. However, if the bitrates had been reported accurately, the 950 KBPS bitrate would be used, and would likely have streamed with no issues. While the choices here prevent stalls, they also lower the quality of the delivered video to the consumer.

Best Practice: A small overstatement of video bitrate may be useful to reduce the number of stalls in playback. However, too large a value can lead to reduced quality playback.


Conclusion

In this post, we’ve walked through a number of ways to optimize the videos that you present on your websites. Follow the best practices illustrated in this post:

preload="auto"
Only use if there is a high probability that this video will be watched by your customers.

preload="metadata"
Default in Chrome, but can still lead to large video file downloads. Use with caution.

Video Dimensions
Consider delivering differently sized videos to mobile vs. desktop. The videos will be smaller, download faster, and your users are unlikely to see the difference (your server load will go down, too!)

Video Compression
Don’t forget to compress the videos to ensure that they are delivered in as few bytes as possible.

Don’t ‘hide’ videos
If the video will not be displayed — don’t download it.

Video Playback On The Web: The Current State Of Video (Part 1)
Doug Sillars
2018-10-24T13:30:24+02:00
2018-10-25T13:47:34+00:00

Usage of video on the web is increasing as devices and networks become faster and more capable of handling video content. Research shows that sites with video increase engagement by 80%, and e-commerce sites with video have higher conversions than sites without video.

But adding video can come at a cost. Videos (being large files) add to the page load time, and performance research shows that slower pages have the opposite effect: lower customer engagement and conversions. In this article, I’ll examine the metrics important for balancing performance and video playback on the web, look at how video is being used today, and provide best practices on delivering video on the web.

One of the first steps to improve customer satisfaction is to speed up the load time of a page. Google has shown that mobile pages that take over three seconds to load lose 53% of their audience to abandonment. Other studies find that improving site performance increases usage and sales.

Adding video to a website will increase engagement, but it can also dramatically slow down the load time, so it is clear that a balance must be found between adding videos to your site and not impacting the load time too greatly.

To examine the state of video on the web today, I’ll use data from the HTTP Archive. The HTTP Archive uses WebPageTest to scan the performance of 1.2 million mobile and desktop websites every two weeks, and then stores the data in Google BigQuery.

Typically just the main page of each domain is checked (meaning www.cnn.com is run, but www.cnn.com/politics is not). This data can help us understand how the usage of video on the web affects the performance of websites. Mobile tests are run on an emulated Motorola G4 with a 3G internet connection (1.6 MBPS down, 768 KBPS up, 300 ms RTT), and desktop tests run Chrome on a cable connection (5 MBPS down, 1 MBPS up, 28ms RTT). I’ll be using the data set from 1 August 2018.


As a first step to study sites with video, we should look at sites that download video files when the page loads. There are 35k mobile sites and 55k desktop sites with video file downloads in the HTTP Archive data set (that’s 3% of all mobile sites and 4.5% of all desktop sites). Comparing desktop to mobile, we find that 30k of these sites have video on both mobile and desktop (leaving ~5,800 sites on mobile with no video on the desktop).

The median mobile page with video weighs in at a hefty 7 MB (583% larger than 1.2 MB found for the median mobile site). This increase is not fully accounted for by video alone (2.5 MB). As sites with video tend to be more feature rich and visually engaging, they also use more images (the median site has over 1 MB more), CSS, and Javascript. The table below also shows that the median SpeedIndex (a measurement of how quickly content appears on the screen) for sites with video is 3.7s slower than a typical mobile site, taking a whopping 11.5 seconds to load.

This clearly shows that sites that are more interactive and have video content take (on average) longer to load than sites without video. But can we speed up video delivery? What else can we learn from the data at hand?

Video Hosting

When examining video delivery, are the files being served from a CDN or video provider, or are developers hosting the videos on their own servers? By examining the domain of the videos delivered on mobile, we find that 12,163 domains are used to deliver video, indicating that ~49% of sites are serving their own video files. If we stack rank the domains by frequency, we are able to determine the most common video hosting solutions:

Facebook, Akamai, Google, Cloudinary, AWS, and Cloudfront lead the way among the top CDNs and domains, which is not surprising. However, it is interesting to see YouTube and Vimeo so far down the list, as they are two of the most popular video sharing sites.

Let’s look into YouTube, Vimeo and Facebook video delivery:

YouTube Video Counts

By default, pages with an embedded YouTube video do not actually download any video files — just scripts and a placeholder image — so they do not show up in a query looking for sites with video downloads. One of the JavaScript downloads for the YouTube video player is www-embed-player.js. Searching for this file, we find 69k instances on 66,647 mobile sites. These sites have a median SpeedIndex of 10,700 and data usage of 3.31 MB — better than sites with video downloads, but still slower than sites with no video at all. The increase in data is directly associated with more images and JavaScript (as shown below).

Vimeo Video Counts

There are 14,148 requests for Vimeo videos in the HTTP Archive. I see only 5,848 requests for the player.js file (in the format https://f.vimeocdn.com/p/3.2.0/js/player.js), implying that perhaps there are many videos on one page, or perhaps another location for the video player file. There are 17 different versions of the player present in the HTTP Archive, with the most popular being 3.1.5 and 3.1.4:

There does not appear to be any performance gain for any Vimeo Library — all of the pages have similar load times.

Note: Using www-embed-player.js for YouTube or https://f.vimeocdn.com/p/*/js/player.js for Vimeo are good fingerprints for browsers with a clean cache, but if the customer has previously browsed a site with an embedded video, this file might already be in the browser cache, and thus will not be re-requested. But, as Andy Davies recently noted, this is not a safe assumption to make.

Facebook Video Counts

It is surprising that in the HTTP Archive data, 67% of all video requests are from Facebook’s CDN. It turns out that on Chrome, third-party Facebook widgets download 30% of all videos posted inside the widget (this effect does not occur in Safari or Firefox). In other words, a third-party widget added with just a few lines of code is responsible for 57% of all the video seen in the HTTP Archive.

Video File Types

The majority of videos on the pages tested are MP4s. If we look at all the videos downloaded (excluding those from Facebook), we get the following view:

What can we learn about these files? Let’s look at each file type to learn more about the content being delivered.

I used FFprobe to query the 34k unique MP4 files, and obtained data for 14,700 videos (many of the videos had changed or been removed in the three weeks between the HTTP Archive capture and the analysis).
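For reference, an FFprobe invocation along these lines dumps a video’s stream and format metadata as JSON (the exact flags used for the study are my assumption):

ffprobe -v quiet -print_format json -show_format -show_streams video.mp4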

MP4 Video Data

Of the 14.7k videos in the dataset, 8,564 have audio tracks (58%). Shorter videos that autoplay or videos that play in the background do not require audio, so stripping the audio track is a great way to reduce the file size (and speed the delivery) of your videos.

The next most important aspect to quickly downloading a video are the dimensions. The larger the dimensions (and in the case of video, there are three dimensions to consider: width × height × time), the larger the video file will be.

MP4 Video Duration

Most of the 14k videos studied are short: the median (50th percentile) duration is 21 s. However, 10% of the videos surveyed are over two minutes in duration. Use cases here will, of course, vary, but for short video loops or background videos, shorter videos will use less data and download faster.

The dimensions of the video on the screen decide how many pixels each frame will have to use. The chart below shows the various video widths that are being served to the mobile device. (As a note, the Moto G4 has a screen size of 1080×1920, and the pages are all viewed in portrait mode).

As the data shows, the two most utilized video widths are significantly larger than the G4 screen (when held in portrait mode). A full 49% of all videos are served with a width greater than 1080 pixels wide. The Alcatel 1x, a new Android Go device, has a 480×960-pixel screen. 77% of the videos delivered in the sample set are larger than 480 pixels wide.

As the dimensions of a video decrease, so does the file size (and thus the time to deliver the video). This is the primary reason to resize videos.

Why are these videos so large? If we correlate the videos served on mobile and desktop, we find that 18% of videos served on mobile are the same videos served on the desktop. This is a ‘problem’ solved for images years ago with responsive images. By serving differently sized videos to different sized devices, we can ensure that beautiful videos are served, but at a size and dimension that makes sense for the device.

MP4 Video Bitrate

The bitrate of the video delivered to the device has a large effect on how well the video will play back. The HTTP Archive tests are run on a 3G connection at 1.6 MBPS download speed. To play back without stalling, the download has to be faster than playback. Let’s allow 80% of the available bitrate for video files (1.3 MBPS): 47% of the videos in the sample set have a bitrate over 1.3 MBPS, meaning that when these videos are played on a 3G connection, they are more likely to stall — leading to unhappy customers. 27% of videos have a bitrate higher than 2.5 MBPS, 10% are higher than 5 MBPS, and 35 videos served to mobile devices have a bitrate over 10 MBPS. These larger videos are unlikely to play without stalling on many connections — fixed or mobile.

Larger videos tend to carry a larger bitrate, as larger dimensioned videos require a lot more data to populate the additional pixels on the device. Cross referencing the bitrate of each video to the width confirms this: videos with width 1280 (orange) and 1920 (gray) have a much broader distribution of bitrates (more data points to the right in the chart). The column marked in yellow denotes the 136 videos with width 1920, and a bitrate between 10-11 MBPS.

HTTP2 has redefined content delivery with multiplexed connections — where just one connection per server is required. For large files like video, does HTTP2 provide a meaningful improvement to content delivery? If we look at the stats from the HTTP Archive:

Omitting the 116k Facebook videos (all sent via HTTP2), we see that it is about a 50:50 split between HTTP/1.1 and HTTP2. However, HTTP/1.1 can also utilize HTTPS, and when we filter for HTTPS usage, we find that 81% of video streams are sent via HTTPS, with HTTP2 being used slightly more than HTTPS over HTTP/1.1 (41% vs. 36%).

As you can see, comparing the speed of HTTP and HTTP2 video delivery is a work in progress.

HLS Video Streaming

Video streaming using adaptive bitrate is an ideal way to deliver video to the end user. Multiple versions of each video are built with different dimensions and bitrates. The list of available streams is presented to the playback device, and the video player on the device can choose the most appropriate stream based on the size of the device screen and the available network conditions. There are 1,065 manifest files (and 14k video stream files) in the HTTP Archive data set that I examined.

Video Stream Playback

One key metric in video streaming is to have the video start as quickly as possible. While the manifest file has a list of available streams, the player has no idea of the available network bandwidth at the beginning of playback. To begin streaming, the player has to pick a stream, and it typically chooses the first one in the list. In order to facilitate a fast video startup, it is important to place the correct stream at the top of your manifest file.

Note: Utilizing the Chrome Network Info API to generate manifest files on the fly might be a good way to quickly optimize video content at startup.

One way to ensure that the video starts quickly is to start with the lowest quality video segment, as the download will be the fastest. The initial video quality may be pixelated, but as the player better understands the network quality, it can quickly adjust to a more appropriate (hopefully higher quality) video stream. With that in mind, let’s look at the initial stream bitrates in the HTTP Archive.

The red line in the above chart denotes 1.5 MBPS (recall that mobile tests are run at 1.6 MBPS, and not only video content is being downloaded). We see that 30.5% of all of the streams start with an initial bitrate higher than 1.5 MBPS (and are thus unlikely to play back smoothly on a 3G connection), and 17% start above 2 MBPS.

So what happens when video download is slower than the actual playback of the video? Initially, the player will attempt to download the (too) large bitrate files, but based on the download speed, will realise the problem. The player will then switch to downloading a lower bitrate video, so that download is faster than playback. The problem is that the initial download attempt takes time, and adding a delay to video playback start leads to customer abandonment.

We can also look at the most common bitrates of .ts files (the files that have the video content), to see what bitrates end up being downloaded on mobile. This data includes the initial bitrates, and any subsequent file downloaded during the WebPageTest run:

We can see two major groupings of video streaming bitrates (100-300 KBPS, and a broader peak from 300-1,000 KBPS). This is where we would expect streams to appear, given that the network speed is capped at 1.6 MBPS.

Comparing the data to desktop, mobile is clearly higher at the lower bitrates, and desktop streams have high peaks in the 500-600 and 800-900 KBPS ranges that are not seen on mobile.

When we compare the initial bitrates observed (blue) with the actual files downloaded, it is very clear that, for mobile, the bitrate generally decreases during stream playback, indicating that lowering the initial bitrate for video streams might improve the performance of video startup and prevent stalls in early playback. Desktop bitrates also appear to decrease, but it is also possible that some videos move to higher bitrates during playback.

Conclusion

Video content on the web increases customer engagement and satisfaction. Pages that load quickly have the same effect. The addition of video to your website will slow down the page rendering time, necessitating a balance between overall page load and video content. To reduce the size of your video content, ensure that you have versions appropriately sized for mobile device dimensions, and use shorter videos when possible.

If playback of your videos is not essential, follow the path of YouTube and Vimeo — download all the required pieces to be ready for playback, but don’t actually download any video segments until the user presses play. Finally — if you are streaming video — start with the lowest quality setting to ensure a fast video playback.

In my next post on video, I will take these general findings, and dig deeper into how to resolve potential issues with examples. Stay tuned!

For most of my career, attribute selectors have been more magic than science. I’d stare, gobsmacked, at the CSS for outputting a link’s URL in a print stylesheet, understanding nothing. I’d dutifully copy and paste it into my print stylesheet, then run off to put out whatever project was the largest burning trash heap.
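If you’ve seen that snippet, it was probably a variation of this, which prints each link’s URL after the link text when the page is printed:

a[href]:after {
  content: " (" attr(href) ")";
}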

But you don’t have to stare slack-jawed at CSS attribute selectors anymore. By the end of this article, you’ll use them to run diagnostics on your site, fix otherwise unsolvable problems, and generate technological experiences so advanced they feel like magic. You may think I’m promising too much, and you’re right, but once you understand the power of attribute selectors, you might feel like exaggerating, too.

On the most basic level, you put an HTML attribute in square brackets and call it an attribute selector like so:

[href] {
color: chartreuse;
}

The text of any element that has an href attribute and doesn’t have a more specific selector will now magically turn chartreuse. Attribute selector specificity is the same as that of classes.

But you can do far more with attribute selectors. Just like your DNA, they have built-in logic to help you choose all kinds of attribute combinations and values. Instead of only exact matching the way a tag, class, or id selector would, they can match any attribute and even string values within attributes.


Attribute selectors can live on their own or be made more specific, e.g. if you need to select all div tags that have a title attribute.

div[title]

But you could also select the children of divs that have a title by doing the following:

div [title]

To be clear, no space between them means the attribute is on the same element (just like an element and class without a space), and a space between them means a descendant selector, i.e. selecting the element’s children who have the attribute.

You can get far more granular in how you select attributes including the values of attributes.

div[title="dna"]

The above selects all divs with an exact title of “dna”. A title of “dna is awesome” or “dnamutation” wouldn’t be selected, though there are selector algorithms that handle each of those cases (and more). We’ll get to those soon.

Note: Quotation marks are not required in attribute selectors in most cases, but I will use them because I believe it increases clarity and ensures edge cases work appropriately.

If you want to select “dna” out of a space-separated list like “my beautiful dna” or “mutating dna is fun!”, you can add a tilde or “squiggly,” as I like to call it, in front of the equal sign.

div[title~="dna"]

If you want to select titles such as “dontblamemeblamemydna” or “his-stupidity-is-from-upbringing-not-dna”, then you can use the dollar sign $ to match the end of a title.

[title$="dna"]

To match the front of an attribute value such as titles of “dnamutants” or “dna-splicing-for-all” use a caret.

[title^="dna"]

While having an exact match is helpful, it might be too tight of a selection, and the caret front match might be too wide for your needs. For instance, you might not want to select a title of “genealogy”, but still select both “gene” and “gene-data”. The pipe character (or vertical bar) does just that: it makes an exact match, but also includes cases where the exact match is followed by a dash.

[title|="gene"]

Lastly, there’s a full search attribute operator that will match any substring.

[title*="dna"]

But use it wisely as the above will match “I-like-my-dna-like-my-meat-rare” as well as “edna”, “kidnapping”, and “echidnas” (something Edna really shouldn’t do.)

What makes these attribute selectors even more powerful is that they’re stackable — allowing you to select elements with multiple matching factors.

But what if you need to find the a tag that has a title and has a class ending in “genes”? Here’s how:

a[title][class$="genes"]

Not only can you select the attributes of an HTML element, you can also print their mutated genes using pseudo-“science” (meaning pseudo-elements and the content declaration).

<span class="joke" title="Gene Editing!">What’s the first thing a biotech journalist does after finishing the first draft of an article?</span>

.joke:hover:after {
  content: "Answer: " attr(title);
  display: block;
}

The code above will show the answer to one of the worst jokes ever written on hover (yes, I wrote it myself, and, yes, calling it a “joke” is being generous).

The last thing to know is that you can add a flag to make attribute searches case-insensitive: add an i before the closing square bracket.

[title*="DNA" i]

And thus it would match “dna”, “DNA”, “dnA”, and any other variation.

The only downside to this is that the i only works in Firefox, Chrome, Safari, Opera and a smattering of mobile browsers.

Now that we’ve seen how to select with attribute selectors, let’s look at some use cases. I’ve divided them into two categories: General Uses and Diagnostics.

One attribute HTML5 gave us was “download” which tells the browser to, you guessed it, download that file rather than trying to open it. This is useful for PDFs and DOCs you want people to access but don’t want them to open right now. It also makes the workflow for downloading lots of files in a row easier. The downside to the download attribute is that there’s no default visual that distinguishes it from a more traditional link. Often this is what you want, but when it’s not, you can do something like the below.

a[download]:after {
  content: url(download-arrow.svg);
}

You could also communicate file types with different icons like PDF vs. DOCX vs. ODF, and so on.
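For instance, here’s a quick sketch of how that might look, leaning on the suffix match covered earlier (the icon file names are hypothetical stand-ins):

a[href$=".pdf"]:after {
  content: url(pdf-icon.svg);
}

a[href$=".docx"]:after {
  content: url(docx-icon.svg);
}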

We’ve all come across old sites that have outdated code, but sometimes updating the code isn’t worth the time it’d take to do it on six thousand pages. You might need to override or even reapply a style implemented as an attribute before HTML5.

Sometimes you’ll come across inline styles that are gumming up the works, but they’re coming from code outside your control. It should be said that changing them at the source would be ideal, but if you can’t, here’s a workaround.

Note: This works best if you know the exact property and value you want to override, and if you want it overridden wherever it appears.

For this example, the element’s margin is set in pixels, but it needs to be expanded and set in ems so that the element can re-adjust properly if the user changes the default font size.
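Here’s a minimal sketch of that workaround, assuming the inline style you need to beat is margin-left: 8px (substitute whatever property and value you’re actually fighting):

[style*="margin-left: 8px"] {
  margin-left: 1em !important;
}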

Note: This approach should be used with extreme caution, because if you ever need to override this style you’ll fall into an !important war and kittens will die.

Showing File Types

The list of acceptable files for a file input (set via the accept attribute) is invisible by default. Typically, we’d use a pseudo-element to expose it and, though you can’t use pseudo-elements on most input tags (or at all in Firefox or Edge), you can use them on file inputs.
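A rough sketch of that, assuming the input carries an accept attribute listing its allowed file types:

input[type="file"][accept]:after {
  content: " (Accepts: " attr(accept) ")";
}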

The not-well-publicized details and summary tag duo are a way to do expandable/accordion menus with just HTML. Details wrap both the summary tag and the content you want to display when the accordion is open. Clicking on the summary expands the details tag and adds an open attribute to it. The open attribute makes it easy to style an open details tag differently from a closed one.
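For instance, something like this (one sketch among many) would visually distinguish the open state:

details[open] {
  background-color: lemonchiffon;
}

details[open] summary {
  font-weight: bold;
}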

Showing URLs in print styles led me down this road to understanding attribute selectors. You should know how to construct it yourself now. You simply select all a tags with an href, add a pseudo-element, and print them using attr() and content.

a[href]:after {
  content: " (" attr(href) ") ";
}
Custom Tooltips

Creating custom tooltips is fun and easy with attribute selectors (okay, fun if you’re a nerd like me, but easy either way).

Note: This code should get you in the general vicinity, but may need some tweaks to the spacing, padding, and color scheme depending on your site’s environment and whether you have better taste than me or not.
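With that caveat in mind, here’s a rough version that uses the title attribute as the tooltip text:

[title] {
  position: relative; /* anchor for the absolutely-positioned bubble */
}

[title]:hover:after {
  content: attr(title); /* pull the tooltip text from the attribute */
  position: absolute;
  bottom: 100%; /* sit just above the element */
  left: 0;
  padding: 0.5em 1em;
  background-color: black;
  color: white;
  white-space: nowrap;
}

Keep in mind that the browser’s native title tooltip will still appear alongside this one, so you may prefer to hang the tooltip text off a data-tooltip attribute instead.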

One of the great things about the web is that it provides many different options for accessing information. One rarely used attribute is accesskey, which lets an item be accessed directly through a key combination plus the letter set by accesskey (the exact key combination depends on the browser). But there’s no easy way to know what keys have been set on a website.

The following code will show those keys on :focus. I don’t show them on hover because, most of the time, the people who need the accesskey are those who have trouble using a mouse. You can add hover as a second option, but be sure it isn’t the only one.
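A sketch of that diagnostic might look like this:

[accesskey]:focus:after {
  content: " AccessKey: " attr(accesskey);
  background-color: black;
  color: yellow;
  padding: 0 0.25em;
}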

These options are for helping you identify issues either during the build process or locally while trying to fix issues. Putting these on your production site will make errors stick out to your users.

Audio Without Controls

I don’t use the audio tag too often, but when I do use it, I often forget to include the controls attribute. The result: nothing is shown. If you’re working in Firefox, this code can help you suss out whether you’ve got an audio element hiding or whether a syntax error or some other issue is preventing it from appearing (the fix below only works in Firefox).
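A sketch of that Firefox-only diagnostic, giving any controls-less audio element a visible footprint:

audio:not([controls]) {
  display: block;
  width: 100px;
  height: 20px;
  background-color: red;
}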

Web pages can be a conglomerate of content management systems and plugins and frameworks and code that Ted (sitting three cubicles over) wrote on vacation because the site was down and he fears your boss. Figuring out what JavaScript loads asynchronously and what doesn’t can help you focus on where to enhance page performance.
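One way to sketch that check is to make any synchronous external script visible on the page (script elements are display: none by default, but that can be overridden):

script[src]:not([async]) {
  display: block; /* scripts are hidden by default; reveal the sync ones */
  width: 100%;
  height: 1em;
  background-color: red;
}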

You can also highlight elements that have a JavaScript event attribute to refactor them into your JavaScript file. I’ve focused on the OnMouseOver attribute here, but it works for any of the JavaScript event attributes.
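For example (swap in onclick, onchange, or any other event attribute as needed):

[onmouseover] {
  outline: 5px dashed red;
}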

If you need to see where your hidden elements or hidden inputs live you can show them with:

[hidden], [type="hidden"] {
  display: block;
}

But with all these amazing capabilities, you’d think there must be a catch. Surely attribute selectors must only work while flagged in Chrome or in the nightly builds of Fiery Foxes on the Edge of a Safari. This is just too good to be true. And, unfortunately, there is a catch.

If you want to work with attribute selectors in that most beloved of browsers — that is, IE6 — you won’t be able to. (It’s okay; let the tears fall. No judgments.) Pretty much everywhere else you’re good to go. Attribute selectors are part of the CSS 2.1 spec and thus have been in browsers for the better part of a decade.

And so these selectors should no longer be magical to you but revealed as a sufficiently advanced technology. They are more science than magic, and now that you know their deepest secrets, it’s up to you. Go forth and work mystifying wonders of science upon the web.

With the latest studies and official reports out this week, it seems that to avoid irreversible climate change on Planet Earth, we need to act drastically within the next ten years. This raised a couple of doubts and assumptions that I find worth writing about.

One of the arguments I hear often is that we as individuals cannot make an impact and that climate change is “the big companies’ fault”. However, we as the consumers are the ones who make the decisions about what we buy and from whom, whose products we use and which ones we avoid. And by choosing wisely, we can make a change. By talking to other people around you, or by convincing your company’s owner to switch to renewable energy, for example, we can transform our society and economy into a more sustainable one that doesn’t harm the planet as much. It will be a hard task, of course, but we can’t deny our individual responsibility.

Chrome 70 is here with Desktop Progressive Web Apps on Windows and Linux, public key credentials in the Credential Management API, and named Workers.

Postgres 11 is out and brings more robustness and performance for partitioning, enhanced capabilities for query parallelism, Just-in-Time (JIT) compilation for expressions, and a couple of other useful and convenient changes.

As the new macOS Mojave and iOS 12 are out now, Safari 12 is as well. What’s new in this version? A built-in password generator, a 3D and AR model viewer, icons in tabs, web pages on the latest watch OS, new form field attribute values, the Fullscreen API for iOS on iPads, font collection support in WOFF2, the font-display loading CSS property, Intelligent Tracking Prevention 2.0, and a couple of security enhancements.

Google’s decision to force users to log into their Google account in the browser to be able to access services like Gmail caused a lot of discussions. Due to the negative feedback, Google promptly announced changes for v70. Nevertheless, this clearly shows the interests of the company and in which direction they’re pushing the app. This is unfortunate as Chrome and the people working on that project shaped the web a lot in the past years and brought the ecosystem “web” to an entirely new level.

Microsoft Edge 18 is out and brings along the Web Authentication API, new autoplay policies, Service Worker updates, as well as CSS masking, background blend, and overscroll.

Web forms are such an important part of the web, but we design them poorly all the time. The brand-new “Form Design Patterns” book is our new practical guide for people who design, prototype and build all sorts of forms for digital services, products and websites. The eBook is free for Smashing Members.

Max Böck wrote about the Hurricane Web and what we can do to keep users up-to-date even when bandwidth and battery are limited. Interestingly, CNN and NPR provided text-only pages during Hurricane Florence to serve users on low bandwidth without draining their batteries. It would be amazing if we could move default websites towards these goals — saving power and bandwidth — to improve not only performance and load times but also help the environment and make users happier.

Accessibility is about more than making your website accessible for people with physical impairments. We shouldn’t forget that designing for cognitive differences is essential, too, if we want to serve our sites to as many people as possible.

Guess what? Our simple privacy-enhancing tools that delete cookies are useless as this article shows. There are smarter ways to track a user via TLS session tracking, and we don’t have much power to do anything against it. So be aware that someone might be able to track you regardless of how many countermeasures you have enabled in your browser.

Josh Clark’s comment on university research about Google’s data collection highlights how important Android phone data is to Google’s business model and what type of information the company collects, even when your smartphone is idle and not moving location.

Philip Walton explains his Idle until urgent principle for optimizing the load and paint performance of websites.

How can we build a website that’s working well and is fast on low-tech devices while using as little resources as possible? The Low-Tech Magazine wanted to find out and built their website following a crazy approach to save resources. Neat additional fun fact: The website goes offline when there’s not enough sun to power the 2.5 Watt solar panel that powers the server.

Willian Martins shares the secrets of JavaScript’s bind(), a widely unknown method that is surprisingly powerful: it lets us fix the this value (and even preset arguments) of named, non-anonymous functions. A different way to write JavaScript.

Everyone knows what the “9am rush hour” means. Paul Lewis uses the term to rethink how we build for the web and why we should try to avoid traffic jams on the main thread of the browser and outsource everything that doesn’t belong to the UI into separate traffic lanes instead.

We’ve all heard a lot about how David Heinemeier Hansson from Basecamp thinks differently about work, employment, and success. This interview summarizes the “Basecamp way” and the challenges which are linked to it.

“The tech industry is growing at an exponential rate influencing society to the point that we are seeing the biggest shift, perhaps ever, in man-kind. Some tech services actually have billions of users. You read that right, not thousands, not millions, but BILLIONS of human beings using them regularly. It would be arrogant not to say that these services are forming our society and shaping our norms while their only objective was to keep the growth curve… growing.” — Anton Sten in “What about my responsibilities?”

You’re working hard to finish that project in expectation that it’ll feel so good and relaxing when it’s live. Itamar Turner-Trauring shares why this way of thinking is wrong and how we can avoid burning out.

David Wolpert explains why computers use so much energy and how we could make them vastly more efficient. But for that to happen, we need to understand the thermodynamics of computing better.

Turning down twenty billion dollars is cool. Of course, it is. But the interesting point in this article about the WhatsApp founder, who just told the world how unhappy he is about having sold his service to Facebook, is that he seems to have believed he could keep control over his product.

In business, there’s a lot of talk about generating customer loyalty and retaining the business of good customers. Mobile apps aren’t all that different when you think about it.

While the number of installs may signal that an app is popular with users initially, it doesn’t tell the whole story. In order for an app to be successful, it must have loyal subscribers who use the app as it was intended. That’s where the retention rate enters the picture.

In this article, I want to explore what a good retention rate looks like for mobile apps. I’ll dig into the more common reasons why mobile apps have low retention rates and how those issues can be fixed.

Let’s start with the basics.

Checking The Facts: What Is A Good Mobile App Retention Rate?

A retention rate is the percentage of users that remain active on your mobile app after a certain period of time. It doesn’t necessarily pertain to how many people have uninstalled the app either. A sustained lack of activity is generally accepted as a sign that a user has lost interest in an app.

To calculate a good retention rate for your mobile app, be sure to take into account the frequency of logins you expect users to make. Some apps realistically should see daily logins, especially for gaming, dating, and social networking. Others, though, may only need weekly logins, like for ride-sharing apps, Google Authenticator or local business apps.

When calculating the retention rate for anticipated daily usage, you should run the calculation for at least a week, if not more. For weekly or monthly usage, adjust your calculation accordingly.


To do that, track the number of users who logged into the app on each day, from Day 0 through Day 7.

This will give you a curve that demonstrates how well your mobile app is able to sustain users. Here is an example of how you would calculate this:
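For instance, with made-up numbers: if 1,000 users logged into the app on Day 0 and 250 of those users logged in again on Day 7, the Day 7 retention rate would be 250 ÷ 1,000 = 25%. Plot that percentage for each day and you have your retention curve.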

Basically, this is the average cost to build and market an app — a number you should aim to recuperate per user once the app has been installed. However, if your app loses about 90% of its users within a month’s time, think about what the loss actually translates to for your business.

Ankit Jain of Gradient Ventures summarized the key lesson to take away from these findings:

“Users try out a lot of apps but decide which ones they want to ‘stop using’ within the first 3-7 days. For ‘decent’ apps, the majority of users retained for 7 days stick around much longer. The key to success is to get the users hooked during that critical first 3-7 day period.”

As you can see from the charting of the top Android apps, Jain’s argument holds water:

Top Android apps still see a sharp decline in active users after about three days, but then the numbers plateau. They also don’t bleed as many new users upfront, which allows them to sustain a larger percentage of users.

This is exactly what you should be aiming for.

A Retention Recovery Guide For Mobile Apps

So, we know what makes for a good and bad retention rate. We also understand that it’s not about how many people have uninstalled or deleted the app from their devices. Letting an app sit in isolation, untouched on a mobile device, is just as bad.

As you can imagine, increasing your retention rate will lead to other big wins for your mobile app:

More engagement

More meaningful engagement

Greater loyalty

Increased conversions (if your app is monetized, that is)

Now you need to ask yourself:

“When are users dropping off? And why?”

You can draw your own hypotheses about this based on the retention rate alone, though it might be helpful to make use of tools like heat maps to spot problem areas in the mobile app. Once you know what’s going on, you can take action to remove the friction from the user experience.

To get you started, I’ve included a number of issues that commonly plague mobile apps with low retention rates. If your app is guilty of any of these, get to work on fixing the design or functionality ASAP!

1. Difficult Onboarding

Aside from the app store description and screenshots users encounter, onboarding is the first real experience they have with a mobile app. As you can imagine, a frustrating sign-in or onboarding procedure could easily turn off users who take it as a signal that the rest of the app will be just as difficult to use.

Let’s use the OkCupid dating app as an example. The initial splash screen looks great and is well-designed. It has a clear value proposition, and an easy-to-find call-to-action:

The first option is a Facebook sign-in. The other is to use a personal email address. Since Facebook logins can streamline not just signup, but also the setup of dating mobile apps (users can automatically import details, photos, and connections), this option is probably the one many users choose.

But there’s a problem with it: After seven clicks to connect to Facebook and confirm one’s identity, here is what the user sees (or, at least, this is what I encountered the last couple of times I tried):

One of the main reasons why users choose a Facebook sign-in is because of how quick and easy it’s supposed to be. In these attempts of mine, however, my OkCupid app wouldn’t connect to Facebook. So, after 14 total clicks (7 for each time I tried to sign up), I ended up having to provide an email anyway.

This is obviously not a great first impression OkCupid has left on me (or any of its users). What makes it worse is that we know there’s a lot more work to get onboarded with the app. Unlike competitors like Bumble that have greatly simplified signup, OkCupid forces users into a buggy onboarding experience as well as a more lengthy profile configuration.

Needless to say, this is probably a bit too much for some users.

2. Slow or Sloppy Navigation

Here’s another example of a time-waster for mobile app users.

Let’s say getting inside the new app is easy. There’s no real onboarding required. Maybe you just ask if it’s okay to use their location for personalization purposes or if you can send push notifications. Otherwise, users are able to start using the app right away.

That’s great — until they realize how clunky the experience is.

To start, navigation of a mobile app should be easy and ever-present. It’s not like a browser window where users can hit that “Back” button in order to get out of an unwanted page. On a mobile app, they need a clear and intuitive exit strategy. Also, navigation of an app should never take more than two or three steps to get to a desired outcome.

One example of this comes from Wendy’s. Specifically, I want to look at the “Offers” user journey:

As you can see, the navigation at the bottom of the app is as clear as day. Users have three areas of the app they can explore — each of which makes sense for a business like Wendy’s. There are three additional navigation options in the top-right corner of the app, too.

When “Offers” is clicked, users are taken to a sort of full-screen pop-up containing all current special deals:

As you can see, the navigation is no longer there. The “X” for the Offers pop-up also sits in the top-left corner (instead of the right, which is the more intuitive choice). This is already a problem. It also persists throughout the entire Offers redemption experience.

Let’s say that users aren’t turned off by the poor navigational choices and still want to redeem one of these offers. This is what they encounter next:

Now, this is pretty cool. Users can redeem the offer at that very moment while they’re in a Wendy’s or they can place the order through the app and pick it up. Either way, this is a great way to integrate the mobile app and in-store experiences.

Imagine standing in line at a Wendy’s or going through a drive-thru that isn’t particularly busy. That image above is not one you’d want to see.

They call it “fast food” for a reason, and if your app isn’t working or takes just a few seconds too long to load the offer code, imagine what that will do to everyone else’s experience at Wendy’s. The cashiers will be annoyed that the flow of traffic has been held up, and everyone waiting in line will be frustrated at having to wait even longer.

While mobile apps generally are designed to cater to the single user experience, you do have to consider how something like this could affect the experience of others.

3. Overloaded Navigation

A poorly constructed or non-visible navigation is one thing. But a navigation that gives way too many options can be just as problematic. While a mega menu on something like an e-commerce website certainly makes sense, an oversized menu in mobile apps doesn’t.

It pains me to do this since I love the BBC, but its news app is guilty of this crime:

This looks like a standard news navigation at first glance. Top (popular) stories sit at the top; my News (customized) stories below it. But then it appears there’s more, so users are apt to scroll downwards and see what other options there are:

What’s worse is that many of the stories overlap categories, which means users could realistically see the same headlines over and over again as they scroll through the personalized categories they’ve chosen.

If BBC News (or any other app that does this) wants to allow for such deep personalization, the app should be programmed to hide stories that have already been seen or scrolled past — much like how Feedly handles its stream of news. That way, all that personalization really is valuable.

4. Missing Content

Real estate apps or, really, any apps that deal in the transaction of purchasing or renting property or products should include images with each listing. Images are the whole reason why consumers are able to rent and buy online (or at least use them in the decision-making process).

But this app seems to be missing many images, which can lead to an unhelpful and unpleasant experience for users who hope to get information from the more convenient mobile app option.

If you’re going to build a mobile app that’s supposed to inform and compel users to engage, make sure it’s running in tip-top shape. All information is available. All tabs are accessible. And pages load in a reasonable timeframe.

5. Complicated or Impossible Gestures

We’ve already seen what a poorly made navigation can do to the user experience as well as the problem with pages that just don’t load. But sometimes friction can come from intentionally complicated gestures and engagements.

This is something I personally encountered with Sinemia recently. Sinemia is a competitor of the revolutionary yet failing MoviePass mobile app. Sinemia seems like a reasonable deal and one that could possibly sustain a lot longer than the unrealistic MoviePass model that promises entry to one movie every single day. However, Sinemia has had many issues with meeting the demand of its users.

To start, it delayed the sending of cards by a week. When I signed up in May, I was told I would have to wait at least 60 days to receive my card in the mail, even though my subscription had already kicked in. So, there was already a disparity there.

Sinemia’s response to that was to create a “Cardless” feature. This would enable those users who hadn’t yet received their cards to begin using their accounts. As you can see here, the FAQ included a section dedicated to Sinemia Cardless:

See that point that says “I can confirm I have the latest Sinemia release installed…”? The reason why that point is there is because many Sinemia Cardless users (myself included) couldn’t actually activate the Cardless feature. When attempting to do so, the app would display an error.

The Sinemia FAQ then goes on to provide this answer to the complaint/question:

Here’s the problem: there were never any updates available for the mobile app. So, I and many others reached out to Sinemia for support. The answer repeatedly given was that Cardless could not work if your app ran on an old version. Support asked users to delete the app from their devices and reinstall from the app store to ensure they had the correct version — to no avail.

For me, this was a big problem. I was paying for a service that I had no way of using, and I was spending way too much time uninstalling and installing an app that should work straight out the gate.

I gave up after 48 hours of futile attempts. I went to my Profile to delete my account and get a refund on the subscription I had yet to use. But the app told me it was impossible to cancel my account through it. I tried to ask support for help, but no one responded. So, after Googling similar issues with account cancellations, I found that the only channel through which Sinemia would handle these requests was Facebook Messenger.

Needless to say, the whole experience left me quite jaded about apps that can’t do something as simple as activating or deactivating an account. While I recognize an urge to get a better solution on the mobile app market, rushing out an app and functionality that’s not ready to reach the public isn’t the solution.

6. Gated Content

For those of you who notice your retention rate remaining high for the first week or so after installation, the problem may have more to do with the mobile app’s limitations.

Recolor is a coloring book app I discovered in the app store. There’s nothing in the description that would lead me to believe that the app requires payment in order to enjoy the calming benefits of coloring pictures, but that’s indeed what I encountered:

Above, you can see there are a number of free drawings available. Some of the more complex drawings will take some time to fill in, but not as much as a physical coloring book would by hand, which means users are apt to get through this quickly.

Inevitably, mobile app users will go searching for more options and this is what they will encounter:

When users look into some of the more popular drawings from Recolor, you’d hope they would encounter at least a few free drawings, right? After all, how many users could possibly be paying for a subscription to an app that isn’t outright advertised as premium?

But it’s not just the Popular choices that require a fee to access (that’s what the yellow symbol in the bottom-right means). So, too, do most of the other categories:

It’s a shame that so much of the content is gated off. Coloring books have proven to be good for managing anxiety and stress, so leaving users with only a few dozen options doesn’t seem right. Plus, the weekly membership to the app is pretty expensive, even if users try to earn coins by watching videos.

A mobile app such as this one should make its intentions clear from the start: “Consider this a free trial. If you want more, you’ll have to pay.”

While I’m sure the developer didn’t intend to deceive with this app model, I can see how the retention rate might suffer and prevent this app from becoming a long-term staple on many users’ devices.

As I noted earlier, those initial signups might make you hopeful of the app’s long-term potential, but a forced pay-to-play scenario could easily disrupt that after just a few weeks.

7. Impossible to Convert in-App

Why do we create mobile apps? For many developers, it’s because the mobile web experience is insufficient. And because many users want a more convenient way to connect with your brand. A mobile app sits on the home screen of devices and requires just a single click to get inside.

So, why would someone build an app that forces users to leave it in order to convert? It seems pointless to even go through the trouble of creating the app in the first place (which is usually no easy task).

Megabus is a low-cost transportation service that operates in Canada, the United States and the United Kingdom. There are a number of reasons why users would gravitate to the mobile app counterpart for the website; namely, the convenience of logging in and purchasing tickets while they’re traveling.

The image above shows the search I did for Megabus tickets through the mobile app. I entered all pertinent details, found tickets available for my destination and got ready to “Buy Tickets” right then and there.

Upon clicking “Buy Tickets”, the app pushes users out and into their browser. They then are asked to reinput all those details from the mobile app to search for open trips and make a purchase.

For a service that’s supposed to make travel over long distances convenient, its mobile app has done anything but reinforce that experience.

For those of you considering building an app (whether on your own accord or because a client asked) solely so you can land a spot in app store search results, don’t waste users’ time. If they can’t have a complete experience within the app, you’re likely to see your retention rate tank fairly quickly.

Wrapping Up

Clearly, there are a number of ways in which a mobile app may suffer a misstep in terms of the user experience. And I’m sure that there are times when mobile app developers don’t even realize there’s something off in the experience.

This is why your mobile app retention rate is such a critical data point to pay attention to. It’s not enough to just know what that rate is. You should watch for when those major dropoffs occur; not just in terms of the timeline, but also in terms of which pages lead to a stoppage in activity or an uninstall altogether.

With this data in hand, you can refine the in-app experience and make it one that users want to stay inside of for the long run.

Full-stack web development requires coding both a front-end for the browser and a back-end server. Using JavaScript for both parts of the app makes life a lot simpler for full-stack devs. With the MEAN technologies, you can code a cutting-edge web app in JavaScript, from the front-end all the way down to the database.

In this detailed 3.5-hour course, Derek Jensen will show you how to use the MEAN technologies to build full-stack web apps using only JavaScript (and its close cousin TypeScript). You'll start from absolute scratch, scaffolding an empty project, and build up a complete web app using the MEAN stack.

You'll learn how to configure a MongoDB database, how to write a database abstraction layer, and how to create a REST API to make that data available to the front-end. On the client side, you'll learn how to structure an Angular app, how to create a service to connect with the back-end API, and how to implement each of the UI components that make a complete app.


You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 700,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

Live Accessibility And Performance Audits At SmashingConf Toronto

Markus Seyfferth

Earlier this year, many of your favorite speakers were featured at SmashingConf Toronto; however, things were quite different this time: the speakers had been asked to present without slides. It was interesting to see the different ways our speakers approached the challenge.

Two of our speakers chose to demonstrate how they audit a site or application live on stage: Marcy Sutton on accessibility, and Tim Kadlec on performance. Watch the videos to see an expert perform these audits, and see if there is anything you can take back to your own testing processes.

Accessibility: Marcy Sutton

Marcy took two example components, built using React, and walked us through how these components could be made more accessible with some straightforward changes.

Performance: Tim Kadlec

Tim demonstrates how to test the performance of a site, and find bottlenecks leading to poor experiences for visitors. If you have ever wondered how to get started testing for performance, this is a talk you will find incredibly useful.

Adobe Photoshop plays a role in almost every digital creator’s life. Photoshop is what many digital artists, photographers, graphic designers, and even some web developers have in common. The tool is so flexible that often you can achieve the same results in several different ways. What sets us all apart is our personal workflows and our preferences on how we use it to achieve the desired outcome.

I use Photoshop every day and shortcuts are a vital part of my workflow. They allow me to save time and to focus better on what I am doing: digital illustration. In this article, I am going to share the Photoshop shortcuts I use frequently — some of its features that help me be more productive, and a few key parts of my creative process.

To get the most out of this tutorial, some familiarity with Photoshop helps, but whether you are a complete beginner or an advanced user, you should be able to follow along, because every technique will be explained in detail.

For this article, I've decided to use one of my most famous Photoshop artworks named “Regret”:


Every single designer, artist, photographer or web developer has probably, at some point, opened Photoshop and pointed and clicked on an icon to select the Brush tool, the Move tool, and so on. We’ve all been there, but those days are long gone for most of us who use Photoshop every day. Some might still do it today; however, before getting into the details, I would like to talk about the importance of shortcuts.

When you think about it, you’re saving perhaps half a second by using a keyboard shortcut instead of moving your mouse (or stylus) over to the Tools bar and selecting the tool you need by clicking on its little icon. To some that may seem petty; however, consider that every digital creator makes thousands of selections per project, and those half-seconds add up to hours in the end!

Now, before we continue, please note the following:

Shortcuts Notation
I use Photoshop on Windows but all of the shortcuts should work the same on Mac OS; the only thing worth mentioning is that the Ctrl (Control) key on Windows corresponds to the Cmd (Command) key on the Mac, so I’ll be using Ctrl/Cmd throughout this tutorial.

Photoshop CS6+
All the features and shortcuts mentioned here should work in Photoshop CS6 and later — including the latest Photoshop CC 2018.

2. The Keyboard Shortcuts Window

To start off, I would like to show you where you can find the Keyboard Shortcuts window, where you can modify the existing shortcuts and learn which key is bound to which feature or tool:

Open Photoshop, go to Edit and select Keyboard Shortcuts. Alternatively, you can access the same from here: Window → Workspace → Keyboard Shortcuts & Menus.

Now you will be greeted by the Keyboard Shortcuts and Menus window (dialog box), where you can pick a category you would like to check out. There are a ton of options in there, so it could get a bit intimidating at first, but that feeling will pass soon. The main three options (accessible through the Shortcuts for:... dropdown list) are:

Application Menus

Panel Menus

Tools

Typically the Application Menus will be the first thing you’ll see. These are the shortcuts for the menu options you see on the top of Photoshop’s window (File, Edit, Image, Layer, Type, and so on).

So, for example, if you use the Brightness/Contrast option often, instead of having to click on Image (in the menu), then Adjustments, and finally find and click on the Brightness/Contrast item, you can simply assign a key combination and Brightness/Contrast will show right up after you press the keys assigned.

The second section, Panel Menus, is an interesting one as well, especially its Layers portion. You get to see several options that could be of use to you depending on the type of work you need to do. That’s where the standard New Layer shortcut lies (Ctrl/Cmd + Shift + N), but you can also set up a shortcut for Delete Hidden Layers. Deleting unnecessary layers lowers the size of the Photoshop file and helps improve performance, because your computer will not have to cache those extra layers that you’re not actually using.

The third section is Tools where you can see the shortcuts assigned to all the tools found in the left panel of Photoshop.

Pro Tip: To cycle between any of the tools that have sub-tools (for example, the Eraser tool has a Background Eraser and a Magic Eraser), hold the Shift key and press the appropriate shortcut key. In the case of the Eraser, press Shift + E a few times until you reach the desired sub-tool.

One last thing I would like to mention before wrapping up this section is that the Keyboard Shortcuts and Menus window allows you to set up different Profiles (Photoshop calls them “sets”, but I think that “profiles” better suits the purpose), so that if you don’t want to mess with the Photoshop Defaults one, you can simply create a new personalized profile. It’s worth mentioning that a new Profile starts with the default set of Photoshop shortcuts until you begin modifying them.

The Keyboard Shortcuts menu can take a bit of time to get used to; however, if you invest the time early on (best in your own time rather than during a project), you will benefit later.

Focusing On The Shortcuts On The Left Side Of Your Keyboard

Once people acknowledged the usefulness of shortcuts, it became clear that time was still being wasted moving a hand from one side of the keyboard to the opposite one. That sounds a bit petty again; however, remember those half-seconds? They still add up, and constantly switching tools while moving your arm around can even fatigue it. This probably led to Adobe adding a few more shortcut features focused on the left side of the keyboard.

Now let me show you the shortcuts that I most often use (and why).

3. How To Increase And Decrease The Brush Size

In order to increase or decrease the size of your brush, you need to:

Click and hold the Alt key. (On Mac, this would be the Ctrl and Alt keys),

Click and hold the right mouse button,

Then drag horizontally from left to right to increase, and from right to left to decrease the size.

If you’re using anything from Photoshop CC 2017 and above, try pressing Fn + Ctrl + Alt while dragging. It looks like Adobe has changed this specific shortcut and hasn’t documented it just yet.

The moment I learned about this shortcut, I literally couldn’t stop using it!

If you’re a digital artist, I believe you will particularly love it as well. Sketching, painting, erasing: just about everything you need to do with a brush becomes a whole lot easier and more fluent, because you won’t need to reach for the all-too-familiar [ and ] keys, which are the default ones for increasing and decreasing the brush size. Going for those keys can disrupt your workflow, especially if you need to take your eyes off your project or put the stylus aside.

4. How To Increase And Decrease The Brush Softness

It’s actually the same key combination but with a slight twist: increasing and decreasing the softness of your brush will work only with Photoshop’s default Round brushes. Unfortunately, it won’t work with custom-made brushes that have a custom shape.

Click and hold the Alt key. (On the Mac this would be Ctrl and Alt keys),

Click and hold the right mouse button,

Then drag upwards to harden the edge of your brush and drag downwards to make it softer.

5. The Quick Color Picker

Press and hold Shift + Alt and click the right mouse button (on a Mac, hold Ctrl + Alt + Cmd and click) to bring up a quick color picker right on the canvas. I am referring to what Adobe calls the “HUD Color Picker”, which pops up right where your cursor is located.

This so-called HUD Color Picker is a built-in version and I believe it’s been around since at least Photoshop CS6 (which was released back in 2012). If you’re learning about this now, probably you’re as surprised as I was when I first came across it a few months ago. Yes, it took me a while to get used to, too! Well, to be fair, I do also have some reservations about this color picker, but I’ll get to them in a second.

If you’ve followed the key combinations above, you should see this colorful square. However, you’ve probably noticed that it’s a bit awkward to work with it. For example, you need to continue holding all of the keys, and while you do that, you need to hover over to the right rectangle to pick a color gamut and then hover back to the square to pick the shade. With all of the hovering that’s going on, it’s somewhat easy to miss the color that you’ve actually set your heart to pick, which could get a little annoying.

Nevertheless, I do believe that with a little practice you will be able to master the Quick Color Picker and get your desired results. If you’re not too keen on using that built-in version, there are always third-party extensions that you can strap to your Photoshop, for example, Coolorus 2 Color Wheel or Painters Wheel (works with PS CS4, CS5, CS6).

6. Working With Layers

One of the advantages of working digitally is undisputedly the ability to work with layers. They are quite versatile, and there’s a lot of things that you could do with them. You could say that one could write a book just on Layers alone. However, I’m going to do the next best thing, and that would be to share with you the options I most commonly use when working on my projects.

As you may have guessed, the Layer section is a pretty important one for any type of digital creative. In this section, I’m going to share the simpler but very useful shortcuts that could be some real lifesavers.

Clipping Mask Layer

A Clipping Mask Layer is what I most often use when I’m drawing. For those of you who do not know what that is, it’s basically a layer which you clip on to the layer below. The layer below defines what’s visible on the clipped on layer.

For example, let’s say that you have a circle on the base layer and then you add a Clipping Mask Layer to that circle. When you start drawing on your Clipping Mask Layer, you will be restricted only to the shapes in the Base Layer.

Take notice of the layers on the right side of the screen. Layer 0 is the Clipping Mask Layer of the Base Layer — Layer 1.

This option allows you to really easily create frames and the best part is that they’re non-destructive. The more shapes you add (in this case it’s Layer 1), the more visible parts of the image can be seen.

Drawly’s artwork added into various shapes as a clipping mask.

The most common use for Clipping Mask Layers in digital art/painting is to add shadows and highlights to a base color. For example, let’s say that you’ve completed your character’s line-art and you’ve added their base skin tone. You can use Clipping Mask Layers to add non-destructive shadows and highlights.

Note: I’m using the term “non-destructive” because you cannot erase away something from the base layers — they will be safe and sound.

So, how do you create those Clipping Mask Layers? Well, each one starts off as a regular layer; with the new layer selected, you can go to Layer → Create Clipping Mask (or press Ctrl/Cmd + Alt + G).

An alternative way to make a regular layer into a Clipping Mask is to press and hold the Alt key, and click between the two Layers. The upper layer will then become the Clipping Mask of the layer below.

Selecting All Layers

Every once in a while, you may want to select all of the layers, and group them together so that you can continue building on top of them or a number of other reasons. Typically, what I used to do is simply hold the Ctrl/Cmd key and then start clicking away at all of the layers. Needless to say, that was a bit time-consuming, especially if I’m working on a big project. So here’s a better way:

What you would need to do is simply press: Ctrl/Cmd + Alt + A.

Now that should’ve selected all of your layers and you will be able to do anything you want with them.

Flattening Visible Layers

Clipping Mask Layers may be totally awesome, however, they don’t always work well if you want to modify something in the general image you’re doing. Sometimes you just need everything (e.g. base color, highlights and shadows) to stop being on different layers and just be combined into one. Sometimes you just need to merge all currently visible layers into one, in a non-destructive way.

Here’s how:

Press Ctrl/Cmd + Alt + Shift + E.

Et voilà! Now you should be seeing an extra layer on the top that has all other visible layers in it. The beauty of this shortcut is that you still have your other layers below — untouched and safe. If you mess up something with the newly created layer, you can still bring things back to the way they were before and start afresh.

Copying Multiple Layers

Every now and then we’re faced with the need to copy stuff from multiple layers. Typically what most people do is duplicate the two given layers they need, merge them and then start erasing away the unnecessary parts of the image.

As you can see, each color dot is on a separate layer. Let’s say that we need to copy a straight rectangle through the center of the dots and copy it on a layer at the top.

Three different colored circles with a selection box inside them.

Once we’ve made a selection, pressing Ctrl/Cmd + Shift + C will make Photoshop copy everything inside the selection (from every visible layer) to the clipboard. Then all you have to do is paste (Ctrl/Cmd + V) anywhere, and a new layer will appear at the top.

This shortcut can come really handy especially when you’re working with multiple layers, and you need just a portion of the image to be together in a single layer.

7. Working With Curves

In this section of the article, I would like to cover the importance of values as well as Curves which are generally a big topic to cover.

Starting off with the shortcut: Ctrl/Cmd + M.

Pretty simple, right? The best things in life are (almost) always simple! However, don’t let this talk about simplicity fool you, the Curves setting is one of the most powerful tools you have in Photoshop. Especially when it comes to tweaking brightness, contrast, colors, tones, and so on.

Now, some of you may be feeling a bit intimidated by the previous sentence: colors, tones, contrast,... say what now? Don’t worry, because the Curves tool is pretty simple to understand, and it will do marvelous things for you. Let’s dig into the details.

This is what the Curves tool basically looks like. As you can see, there’s a moderate amount of options available. What we’re interested in, however, is the area I’ve captured inside the red square. It is actually a simple Histogram with a diagonal line across. The Histogram’s purpose is to show the values of the given image (or painting), left being the darkest points and right being the lightest ones.

Using the mouse, we can put points on the diagonal line and drag it up and down. We typically decide what we want to darken or lighten. If, for example, we want to have the light parts of our image be just a bit darker, we need to click somewhere on the right side and drag down (just like in the first image).

In addition, just for demonstration purposes, here’s what would happen if we had the lighter parts darkened and the darker parts lightened:

Curves histogram with two anchor points making the ‘S’ shape.

You see, the linework is the darkest part, and it stayed dark, while the other dark areas have been lightened to a grayish value.

Now let me quickly elaborate on values and why they matter: by “values,” especially in the art world, we’re referring to the amount of lightness or darkness in a drawing (painting). With values, we create depth in our painting, which in turn helps create the illusion of which element is closer to the viewer and which one is in the distance (further back).

8. Actions: Recording Everything You Need For Your Project

Every so often we all need to deal with repetitive processes which could range from adding a filter over our image to creating certain types of layers with blending modes. Does this sound familiar? If so, keep reading.

Did you know that Photoshop supports programming languages such as JavaScript, AppleScript, and VBScript to automate certain processes? I didn’t, as programming has never been my cup of tea. The good thing is that instead, I came across the Actions panel, which offers a lot of functionality and options for automating some repetitive tasks and workflows. In my opinion, this is the best automation tool that Photoshop has to offer if you don’t know how to code.

The Actions panel basically can record every process you’re doing (e.g. adding a layer, cropping the image, changing its hue, and so on); then you can assign a function key to this process and easily re-use it later at any time.

By using the Actions panel, you can capture just about anything that you do in Photoshop and then save it as a process.

Let me give you an example. Let’s say that you want to automate the process of Create a new Layer, set it as a Clipping Mask, and then set its blending mode to Multiply (or anything else). You can record this whole process which would then be available to you for re-use by the press of a button.

As you can probably see, there are some default (pre-recorded) processes on there. What we’re interested in, however, is creating our own action, which is done by clicking on the “Create new action” icon.

Your automation process is now ready to go! When you press Shift + F2, you’ll get a new Layer set as a Clipping Mask to the layer below and its blending mode set to Multiply.

I would also like to mention that the Actions automation process is not limited to just creating layers and setting blending modes. Here are some examples of some pretty handy other uses and options for actions:

You can set it up to save images as certain file types into certain folders on your computer;

Using File → Automate → Batch for processing lots of images;

The Allow Tool Recording option in the flyout Actions panel menu allows actions to include painting, and so on;

The Insert Conditional option in the flyout Actions panel menu allows actions to change their behavior, based on the state of the document;

File → Scripts → Script Events Manager lets actions run based on events, like when a document is opened or a new document is created.

Let me give you another example: I’ll create another Action that will change the size of my image and save it as a PNG file in a certain folder on my desktop.

So after we hit the New Action button on the Actions panel, we’ll proceed with picking the shortcut that we want, set a name for it, and I’ll take it a step further and assign a color to the Action (I’ll explain why this is a helpful feature in a bit).

Now, about that color: you may notice that when you assign a color, it doesn’t really reflect in the Actions panel. Instead, everything stays monochrome. The reason is that when you open that panel, you’re typically in the Edit view, where you’re able to modify the Actions, record new ones, and so on. In order to see all of the available actions in a simpler interface, do this:

On the upper-right hand corner of the panel you will see four horizontal lines. Click on those.

You’ll get a drop-down menu, where you have different Actions options. On the top, you’ll notice a Button Mode.

If you haven’t guessed it already, coloring your Actions will help you distinguish them more easily at a glance. In Button mode, when you take a glance at the panel, you will be able to navigate quickly to the Action that you want to apply to your image/drawing (if you don’t really remember the shortcut you’ve assigned for it).

Okay, so what we have so far is the following:

We’ve created a new action;

Set the shortcut for it;

Changed its color;

Named it.

Let’s proceed with recording the process that we need.

To open the Image Size menu, you can either go to Image → Image Size or simply hit Ctrl + Alt + I and you’ll get this window:

Next, what we want to do is use the Save As option in order to get the option to choose the type of file, destination folder, and so on. You can either go to File → Save As... or you could simply press Ctrl + Shift + S and you will get the following window:

Navigate to the dedicated folder in which you want to save the current project and actually save it there. An additional Action you can record is closing the image/project you’re working on (don’t worry, the Actions won’t stop recording unless you close down Photoshop).

Once all of that is done, you can hit the Stop icon on the Actions Panel to stop recording your movement in Photoshop.

If you need to resize a bunch of files and save them in a dedicated folder, you just have to load them up in Photoshop and continue hitting the Action shortcut that you’ve created for Resizing and Saving.

If you take the time to get accustomed to the Actions tool in Photoshop and utilize it, you can say “Goodbye” to the bothersome repetitive work that usually eats up most of your time. You will be able to fly through these tasks with such speed that even the Flash would get jealous.

9. Conclusion

In this article, I’ve shared some of the shortcuts I use most. I sincerely hope that they will help you boost your productivity and improve your workflow as well.

Special Thanks

I would like to mention that this tutorial was made possible with the help of Angel (a.k.a. ArcanumEX). You can check out his artwork on his Facebook page, on Instagram, and on his YouTube channel.

Further Reading

In addition to everything I’ve talked about so far, I’ll include more resources that I believe you might find helpful. Be sure to check out: