
I’m extremely excited to announce the release of our very first State of Online Retail Performance report. This report is a semi-annual analysis of the intersection of performance metrics from three different perspectives: IT, business, and user experience.

It’s always a thrill to release new research into the wild, and I’m extra thrilled about this particular project.

As our first piece of new research to be released under the Akamai umbrella, it’s fitting that this project is also the biggest of its kind in the performance industry. We gathered one month’s worth of beacon data from leading retail sites — customers who have given permission for their data to be anonymized, aggregated, and used in this type of research. This study represents a whopping 27.7 billion beacons’ worth of user data, which equates to more than 10 billion user visits.

What Did We Set Out to Learn?

When we started this project, we had a number of questions we wanted to ask our data. These are three of the big ones:

What is the “magic number” for page load time that yields the highest conversion rate? This is one of the most common questions I encounter. While there’s no single magic number that applies to all sites, there is value in pinning down medians from large data sets to benchmark current user behavior so that we can compare it against future behavior.

What is the impact of one second of performance improvement (or slowdown) on conversion rate/bounce rate/session length? This is another common question. Again, these aren’t magic numbers, but rather important benchmarks.

How are high-performance pages different — in terms of size, complexity, and resources — from pages that perform poorly?

There are far too many findings to share in one blog post, but today I want to talk about a few of the things we discovered.

1. Few Site Owners Break Out Conversion Rates by Load Time

When I talk to marketers and site owners and ask them about their site’s conversion rate, everyone knows what their average or median rate is. But very few have broken out conversion rates by load time. If they did, they’d get a much more nuanced understanding of their site’s user experience. As the graph below shows, optimal page performance correlates to significantly higher conversion rates. (More on this in point 2, below.)

In other words, if you’re chugging along, assuming that your average desktop conversion rate of 4.1% is just fine because it falls within industry norms, you’re missing out on opportunities to optimize web performance and increase conversions.

2. Optimal Load Times for Peak Conversions Ranged From 1.8 to 2.7 Seconds Across Device Types

This is huge. It’s the first time I’ve seen sub-two-second load times appear in this kind of research.

This is enormously significant for at least two reasons:

Site owners have broken a huge performance barrier by delivering a significant number of sub-2-second pages to visitors. Given that retail pages are expected to deliver so much visible and invisible content — from videos and high-res images to third-party tags — this is an impressive achievement.

It validates that user behavior follows what site owners are able to deliver. This does not mean that 1.8 seconds is “fast enough”. It means that, as we continue to make pages faster, we should continue to see payback in terms of increased conversion rates (or whatever metric you care about).

3. Even 100ms Delays Correlate to Lower Conversion Rates

Even tenths of a second count. As discussed in finding 2, desktop pages that loaded in 2.7 seconds experienced a peak conversion rate of 12.8%. Pages that loaded 100 milliseconds slower — in other words, in 2.8 seconds — experienced a 2.4% decrease in conversion rate. Smartphones and tablets were affected more, with 7.1% and 3.8% decreases in conversion rates, respectively.

These impacts were felt even more with pages that were one and two seconds slower. Desktop pages that experienced a two-second delay — loading in 3.8 seconds instead of the optimal 1.8 seconds — had conversion rates that were almost 37% lower.
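The arithmetic behind these relative drops is worth making explicit, since a “2.4% decrease” is relative to the baseline rate, not 2.4 percentage points off it. A minimal Python sketch using the desktop figures quoted above (the helper name is my own; the 12.8% peak and the relative decreases come from the text):

```python
def rate_after_relative_drop(baseline_rate: float, relative_drop: float) -> float:
    """Apply a relative decrease to a baseline conversion rate."""
    return baseline_rate * (1 - relative_drop)

peak = 0.128  # desktop peak conversion rate at the optimal 2.7 s load time

# 100 ms slower (2.8 s): a 2.4% relative drop
print(f"{rate_after_relative_drop(peak, 0.024):.2%}")  # ≈ 12.49%

# Two seconds slower (3.8 s): an almost 37% relative drop
print(f"{rate_after_relative_drop(peak, 0.37):.2%}")   # ≈ 8.06%
```

Seen in absolute terms, a two-second slowdown takes more than four percentage points of conversions off the table.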

4. A Two-Second Delay Increased Bounce Rates by Up to 103%

Unlike our conversion findings, 100-millisecond delays didn’t have a significant effect on bounce rates, but at one and two seconds, the impact of delays was much more noticeable.

In other words, you might think that four- or five-second load times are pretty fast. But if those load times correlate with a 103% increase in bounce rate, that hurts — a lot.

5. Start Render Time Is an Important Metric

Start render time — defined as the moment when content begins to render in the browser — is a solid metric for measuring user-perceived performance. We found that the optimal start render time for desktop users was under one second. Mobile and tablet user expectations were not far behind.

Desktop pages with a start render time of 900 milliseconds experienced the lowest bounce rate (18.1%).

Mobile pages with a start render time of 1.3 seconds experienced the lowest bounce rate (23.1%).

Tablet pages with a start render time of 1.5 seconds experienced the lowest bounce rate (18.5%).
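Once you have bucketed your own beacons by start render time, finding your site’s sweet spot is straightforward. A rough Python sketch, with made-up bounce rates standing in for the ones you would compute from real data:

```python
# Hypothetical bounce rates keyed by start-render bucket (seconds).
# Replace these with rates computed from your own RUM beacons.
bounce_by_render_s = {
    0.7: 0.195,
    0.9: 0.181,
    1.1: 0.188,
    1.5: 0.214,
    2.0: 0.243,
}

# The "optimal" bucket is simply the one with the lowest bounce rate.
optimal_bucket = min(bounce_by_render_s, key=bounce_by_render_s.get)
print(optimal_bucket, bounce_by_render_s[optimal_bucket])  # 0.9 0.181
```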

6. A Two-Second Delay Correlated With Up to a 51% Decrease in Session Length

Faster pages compel shoppers to spend more time on retail sites, visit more pages, and add more items to their carts. When examined alongside metrics like bounce rate and conversions, session length (defined as the number of pages visited in a single visit) is a strong indicator of user engagement and satisfaction.

Similar to the effects of slowdowns on bounce rate, we found that 100-millisecond delays had little to no impact on session length. But the results became more pronounced at the one- and two-second points. Sessions with median page loads that were one second slower than optimal speeds were up to 25% shorter. A two-second delay correlated with a 51% decrease in session length for mobile users, a 47% decrease for desktop users, and an almost 38% decrease for tablet users.

Why Do Visitors React Differently Depending on Device Type?

Some interesting patterns emerged from these findings.

When it comes to conversions, page slowdowns tend to have a greater negative impact on desktop users.

Mobile users seem to be more sensitive to slowdowns in terms of bounce rate.

Tablet users appear to be the most patient of the three groups.

We can only guess why people behave differently according to their device type.

We found that almost half (47%) of the retail traffic came from mobile devices, but only 22% of conversions happened on mobile. Clearly, mobile is an important part of the entire transaction process, even if people aren’t converting on their phones. The greater risk is losing these mobile shoppers who are sensitive to slowness and more likely to bounce. And perhaps because desktop users tend to convert more overall (accounting for 68% of all conversions in our study), they’re more sensitive to speed as it relates to conversions.

And maybe tablet users have learned to be patient because, according to other research we’ve done, many tend to use older – and therefore less performant – tablets. (Speaking purely anecdotally, this theory resonates with me. I use a newer laptop, desktop, and phone, but my iPad is almost five years old. And it is circa-1995 slooooow.)

As I said, these behavior patterns are ripe for speculation. I definitely welcome your theories.

What to Do (and Not Do) With These Findings

Analyze your own user data. Do not use these numbers to set goalposts for your own site. These are based on aggregated data. Every site is unique, which means you need to gather and analyze your own data. Create histograms of your user experiences that correlate metrics like load time and start render with bounce rate and conversions. Understand how the behavior of your visitors changes with 100-millisecond changes in load time.
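As a starting point, the bucketing step might look something like this in Python (the beacon tuples and bucket size here are invented for illustration; in practice they would come from your own RUM export):

```python
from collections import defaultdict

# Hypothetical beacon records: (load_time_ms, converted).
beacons = [
    (1750, True), (1840, False), (1930, True), (2210, True),
    (2650, True), (2790, False), (3120, False), (3480, False),
]

def conversion_by_bucket(beacons, bucket_ms=100):
    """Conversion rate per load-time bucket, keyed by bucket start (ms)."""
    totals = defaultdict(lambda: [0, 0])  # bucket -> [visits, conversions]
    for load_ms, converted in beacons:
        bucket = (load_ms // bucket_ms) * bucket_ms
        totals[bucket][0] += 1
        totals[bucket][1] += int(converted)
    return {b: conv / visits for b, (visits, conv) in sorted(totals.items())}

# Coarse 500 ms buckets just so the toy data set isn't too sparse.
for bucket, rate in conversion_by_bucket(beacons, bucket_ms=500).items():
    print(f"{bucket}-{bucket + 499} ms: {rate:.0%}")
```

Plotting the resulting dictionary, alongside the same breakdown for bounce rate, gives you the per-load-time view described above.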

Explore the intersections of multiple metrics. Don’t focus solely on load time as your front-end performance metric. While there’s a strong correlation between load time and business KPIs, you should also pay attention to more user-oriented metrics like start render time. The goal is to get a 360-degree view of the performance of your digital properties, so you can make informed choices about what to optimize and what the ROI will be.

Prioritize user research as an ongoing endeavor. Don’t assume any of these numbers are set in stone. This is benchmark research – the first in what will be a long line of future reports. We’ll be tracking and reporting changes, and identifying and analyzing any trends that emerge over time.

As I mentioned at the top of this post, these are only a few of the findings that are available in the report. I strongly encourage you to get the report and learn the rest.
