Blog Comments & Posts

With the mad rush to optimize mobile sites prior to April 21st, it can be very easy to sacrifice performance in the process. Our friends at Zoompf are back to help you make sure your site loads quickly over a mobile connection.

Google developed a protocol called SPDY that is now being used as the basis for HTTP/2. This post details how this new protocol works, why it is important to you, and how you can get started using it today.

When HTTPS was confirmed as a ranking factor, many organizations rushed to implement it without realizing they were impacting the UX of their sites. In this post, the folks at Zoompf detail the steps we should all take to minimize those side effects.

In August, Zoompf published new research showing a clear correlation between the web performance metric "Time to First Byte" and search engine ranking. In this article, we explore this topic in more detail, including guidance on how you can improve your Time to First Byte.

Google has long stated that website performance will impact search ranking, but what exactly does this mean? In this article, Zoompf examines over 40 different speed metrics to determine the most impactful performance changes you can make to your website to improve search ranking.

Thanks Patrick. There are a lot of good free tools that can help you like WebPageTest and RedBot. If you want something a little more, we'd be happy to sell you something :-) You can learn more at our website: Zoompf.com

There is kind of a debate in the web design world right now about whether to have specific mobile/tablet versions, or to have a responsive site. Responsive sites can be easier to maintain, and supporting multiple versions of the same site can be a pain. Really, it depends on what you are trying to accomplish and what technical resources you have. It's getting easier and easier to build a single responsive site, and that's good. But there are common mistakes you can make that impact performance, which I tried to outline above.

Awesome! I wanted to provide a high level discussion of the problem areas to educate. Implementing specific solutions can depend on your technology stack, CMS, internal processes, etc., so it's hard to give specific, concrete technical advice. However, hopefully this raises awareness and provides a starting point for you to discuss with your team how to address these areas.

Good question. GWT will help show you mobile usability issues, but it will not flag performance issues with your mobile site. Even Google's PageSpeed Insights will not flag some things, like oversized background images or lossy image optimizations. You'll need to look at something like Zoompf's free report or WebPageTest.

You are right. Honestly, we should have titled it the "better, hacky way for now" instead of the "right way".

The smart, scalable, sustainable way to do it is with things like the picture element and srcset. These proposals are trying to allow a website to say "I want to display an image here in the page. Here are different versions, formats, sizes, and densities; use the one that makes sense".
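
As a rough sketch, the proposed markup looks something like this (the class and file names here are just hypothetical placeholders):

  <picture>
    <!-- The browser uses the first <source> whose media query matches -->
    <source media="(max-width: 480px)" srcset="hero-small.jpg">
    <source media="(max-width: 1024px)" srcset="hero-medium.jpg">
    <!-- Fallback for browsers that don't understand <picture> -->
    <img src="hero-large.jpg" alt="Hero image">
  </picture>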

Unfortunately, browser support for these is totally fragmented and incomplete. We cannot do it the "right way" yet, and even JavaScript polyfills are not a clean solution (a JS library to download, blocking downloads, execution and DOM reflows/repaints, etc.).

A simple, though hacky, way to do this now is with CSS, since that provides some conditional logic about device capabilities. And while there are problems with this approach as well, so many people aren't doing anything at all that even a hacky CSS solution is a big step forward, and one that many non-technical people can implement.
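
For example, here is a minimal sketch of the CSS approach, assuming a hypothetical .hero element and image files:

  <style>
    /* Default: the full-size hero image */
    .hero { background-image: url("hero-large.jpg"); }
    /* Small screens get a smaller file instead */
    @media (max-width: 480px) {
      .hero { background-image: url("hero-small.jpg"); }
    }
  </style>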

Hi Alex, I agree. The background-image option is workable for a few major images, but tedious and limited for the large majority of your images. With that said, the effort is probably only worth it for your large "hero" images (and sprites, if you use them), as the savings on small images are likely negligible.

They do mention "This change will affect mobile searches in all languages worldwide and will have a significant impact in our search results.". Perhaps the fine folks at Moz may have better insight into this?

Actually, SPDY will *prevent* resource starvation, since only a single TCP connection is made to a server. HTTP/1.1 allows multiple TCP connections to be made to a website per visitor, meaning an HTTP/1.1 website can only support 1/2 or 1/4 the number of users as an HTTP/2 website before exhausting available port numbers.

I also understand your concerns about binary vs. clear text transport, but to say that "Interoperability requires clear text transports" is patently false. The very fabric of the Internet, TCP and IP, are both binary protocols. TLS is a binary protocol. Yet interoperability clearly exists at those levels. If you are concerned with debugging, tools like Wireshark make it easy to see what is happening on a SPDY connection.

I'm not sure I understand your question. It doesn't matter where your web server software is, or who is hosting or managing that system. You want to upgrade/update that software to something that supports HTTP/2, or use a reverse proxy like HAProxy in front of your website to provide HTTP/2 functionality. The process is the same for real vs. virtual systems.

You are absolutely correct! There are a lot of websites, from really big brands, that are making some pretty terrible performance mistakes. HTTP/2 is an improvement certainly, but there are many other performance optimizations you should be making first to your website.

HTTP/2 simply alters how data is transmitted on the wire to be more efficient. However, bloated images will always make a website slower. HTTP/2 doesn't change that. Same thing applies with client-side caching. As I said at the end of the article, HTTP/2 doesn't alleviate the need to be smart about how you create your content.

Great summary of HTTP/2 benefits. Keep in mind that HTTP/1.1 isn't bad just because it is old. TCP, IP, and SMTP are all examples of 30+ year old technology that is still going strong and works fairly well.

The problem with HTTP/1.1 is that it was designed with certain assumptions about the network that are no longer true today. This means that HTTP/1.1 does a poor job of fully utilizing your network connection.

Correct, SPDY/HTTP/2 is not currently supported on IIS, but it's inevitable Microsoft will cave to pressure as the HTTP/2 spec continues to gain traction. Remember, SPDY was originally proposed by Google, so you can draw your own conclusions about why Microsoft has or has not embraced the protocol earlier, but as it gets adopted into HTTP/2, everyone will need to play on the same field.

Thanks Brady! Couldn't agree more, but inertia always takes energy to overcome. We've also gotten very good at working around the problem (faster network connections, more connections in parallel, faster computers, etc.), which has made the problem more bearable, but it's still avoiding the fundamental issue. With more efficiency in our network connections AND faster connections, we'll get a true jump in performance... which I'm sure we'll find new ways of filling up in no time :-)

It depends on what exactly you are trying to secure. I'm not a big fan of only using HTTPS for part of your website. This exposes you to an SSL Stripping attack. Granted, you might not be a large enough site, or have data important and sensitive enough, to worry about this, but it is a growing risk. And attacks only get cheaper and easier, so more and more people become targets.

Hi Lilia, I totally agree about the value of combining files. I did make a brief mention of CSS sprites in the article, but spriting is also a more advanced fix and not for the faint of heart, so I wanted to balance that with the easier fixes as well. If your site has a lot of small images (little icons, toolbar buttons, etc.), then the benefits of sprites could be substantial and worth pursuing.
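
For anyone curious what spriting looks like in practice, here is a minimal sketch (the sprite sheet file name, class names, and pixel offsets are hypothetical). All the icons share one image file, so they cost a single HTTP request:

  <style>
    .icon       { width: 16px; height: 16px; background-image: url("toolbar-sprite.png"); }
    .icon-save  { background-position: 0 0; }      /* icon at the top-left of the sheet */
    .icon-print { background-position: -16px 0; }  /* icon 16px to the right of it */
  </style>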

Ken, it's funny, I've been using the analogy that boosting performance is like going on a diet. Everyone says they need to lose weight, but few do anything about it until things get really dire. Like losing weight, fast performance is not a one-time "fix it and you're done" activity, but an ongoing lifestyle choice. In many cases integrating 3-5 best practices into your workflow will keep you fast over time with minimal fuss.

Great list Ori! One recommendation I'd make on the image sizes is to (when possible) actually resize the original image files to the desired dimensions, rather than using style rules to resize. When resizing via style rules, the browser still has to download the entire image, and then perform additional processing on the client side to resize it to the new dimensions. So for example, if you have a 1024x768 image sized down to 32x32, you'd still have to download the full 1024x768. It's better to just resize down to 32x32 to begin with.
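
To illustrate with a hypothetical example:

  <!-- Wasteful: the browser downloads the full 1024x768 file, then scales it down to 32x32 -->
  <img src="logo-1024x768.jpg" width="32" height="32" alt="Logo">

  <!-- Better: serve a file already resized to 32x32 -->
  <img src="logo-32x32.jpg" width="32" height="32" alt="Logo">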

Of course if your app needs to handle lots of different sizes for the same image this may not be practical, but most sites only use 1 or 2 sizes of a particular image.

Marc, smush.it doesn't compress your images; instead it strips out "extraneous" information such as unused palette info, comments, thumbnails, etc. This data is all useful to the designer when editing in their favorite image tool, but it is unnecessary to the end user viewing the image. We recommend the designer keep a copy of the original images with all that extra data, and then run smush.it on a copy of those images before deploying to your website. That way you keep the rich design data, but still have an optimized version for your website.

In terms of compression, PNG and JPEG images are already compressed (which is why you shouldn't try to compress them again using GZIP, as that just wastes processor time and can sometimes even increase the file size), but they are optimized for different use cases. PNGs are ideal for computer art, text, etc., while JPEGs are better for photos. A while ago we published a guide on image optimization techniques that might be of value to you here: Achieving Better Image Optimization.

DNS and TCP connection time would be included in latency, as well as any network switching between A and B. For redirects, if you're referring to webserver initiated redirects, those would be in the server processing component of TTFB. Hope that helps clarify.

To be clear, if you refer to our earlier Moz post and research, you'll see that we found the top websites for a search query consistently had better TTFB, regardless of the number of keywords in the query. We tested 2000 different search queries with between 1 and 5 keywords and then looked at the top 50 websites in the results. We didn't pick these queries at random. Moz data scientists provided us with a representative cross-section of queries across different verticals and market sizes. And we consistently saw higher-ranked sites have lower TTFBs.

Are these just "bigger" brands who have fast sites because they throw money at the problem? Not at all. When you are doing a 4- or 5-word search query, you are getting highly niche search results to websites that are not large companies with huge IT budgets and blazing fast servers. We discuss this in the "Tail Wagging the Dog" section of our original post.

To call TTFB meaningless is silly and short-sighted. At the same time, sacrificing an obvious front-end performance improvement like GZIP because buffering by the server/inline accelerators can slightly delay TTFB is just as silly.

Our research showed no correlation between page load time or document complete time and search engine ranking. We even made our data available for other people to review. If you'd like to discuss the data and our interpretation of the data, our previous post is a much better place to do it.

Obviously a good user experience and a fast render time is important. In fact, we specifically call this out in our conclusions by saying "While we have found that ... document complete and fully rendered times... do not directly factor into search engine rankings, it would be a mistake to assume they are not important or that they don't affect search engine rankings in another way. ... In short, faster websites make users happy, and happy users promote your website through linking and sharing. All of these things contribute to improving search engine rankings."

A WordPress plugin taking 2-3 seconds indicates a serious problem. Either the plugin is broken/misconfigured/poorly written, or your server/hosting provider is not giving your site enough CPU power to generate pages in a reasonable amount of time. You should take the time to review your plugins and find out what is going on there.

We agree, and made a comment at the top of this post and a larger one in our prior post that this is purely correlative and needs to be weighed with other factors as well. The interesting result, though, is that this trend occurred over the aggregate of the top 50 rankings for each term. I definitely agree the top 5-10 sites may have a more mature SEO optimization strategy than the norm, but would that also apply to sites 11-50? We'll never truly know for sure, but the correlations across the 100k results are definitely interesting to speculate about.

Absolutely Chris, thanks for putting this article in the larger perspective: The back-end performance of your website should not be your #1 priority when it comes to your SEO efforts. It shouldn't even be in the top 5. Writing fresh, rich, compelling content should be everyone's focus.

Hey Dan, great point. Just remember that for this article, we are only talking about the time to generate and deliver the base HTML page. The total page loading time (including CSS, JS, images, fonts, ads, etc.) can and will be much longer.

A quick update to the post. I mentioned using WebPageTest to find your TTFB. I then mentioned using ping to find the latency component. This works as long as the WPT instance you are using is physically near you. Otherwise the value returned by ping does not help you.

For the best results, use the Network tab of Google Chrome's Developer Tools. This will give you the TTFB for your machine. Running ping, again from your local machine, will help you figure out the latency component. (Firebug for Firefox offers similar capabilities for determining TTFB.)

An excellent point, and you're right, one usually accompanies the other. There are a lot of factors in performance that have nothing to do with Time to First Byte, for example how well your images are optimized or whether your JavaScript is minified. All of these contribute to a faster overall page load time and a better user experience. These are all important factors to consider, and a disciplined web developer should take all of them into account (and they usually do), but in this research we focused on just that one particular parameter, TTFB. In our prior article we talked a bit about those other metrics as well, which are still very important to the overall experience.

Yeah, thanks for the reminder. There are some interesting rollups of all the results on the 4th tab of the raw data that we didn't get into, since they weren't directly related to the goal of the study, but they're still interesting to tease out. For example:

We did a similar study to this a few months back just examining the Alexa top 1000, and found Image Optimization and HTTP Compression were the top 2 optimizations you could make to speed up your site. That supports the values above as well.

At that time we also saw an average doc complete time of 9.5 seconds for the Alexa Top 1000, which is close to the average of 8.5 from above. Strangeloop Networks frequently publishes ecommerce averages and has come up with similar findings; they recently showed 7.25 seconds for the top 2,000 retail sites.

If there's interest we may do a followup post diving into some of this additional data in more detail.

Yeah, I wouldn't expect widespread change from these results, but if anything it's telling that the performance influence factors on ranking are not as pronounced yet as some would lead you to believe. With that said, I do fully expect this to change in the future as the capabilities of the crawler get more sophisticated. Heck, it stands to reason: slow web pages provide a poor user experience, and poor user experience should be a factor in relevancy. This benefits all of us by letting the "better" sites organically rise to the top. It also helps raise the quality bar for all sites just to compete, which in the long run is in the best interest of the consumer. And where the consumer goes, so does Google.

Thanks! The results were very interesting to us as well - we fully expected some level of relationship between page render time and ranking and were quite surprised when we saw none. For that reason we even reran the test on a subset of queries (200 keywords x top 50 results with "median of 3" values captured for each) to see if our testing run was off, but sure enough we got very similar results the second time. The more we thought about it the more that made sense, which I guess is what experimentation is all about - testing to prove or disprove a hypothesis.

mschoeffler, glad the data was helpful! Could you elaborate on your findings on Full Render Time? We didn't see any clear correlation with that number, so I'm curious what correlations you saw. This would definitely be interesting to examine further if you saw an association there.

Interesting idea on the followup. Some of the other comments mention a possible followup study excluding "big brands" from the results. Not entirely sure how we'd do that, but maybe we could add Alexa rank as another vector to the analysis and chart results above and below certain Alexa thresholds.

Glad you find the results useful. We're definitely open to additional studies down the road if our friends at Moz are amenable to us posting them.

Thanks Tobias, yeah, we touched on this somewhat in the "Tail Wagging the Dog" section as well as in my reply below to DanThies. Basically there are a few things to consider here:

1. The results are correlative and as such can only, at best, be theories, not proven fact. We tried to address that in the disclaimer as well, but I wanted to make it clear we're not trying to make bold statements of fact, just offering plausible theories.

2. In terms of why we didn't apply the same theory about resources to page size: since the results are subjective, they are open to interpretation. For page size, the results were more erratic (higher peaks and valleys), so we had more difficulty drawing as strong a correlation. It does trend in a consistent manner, but more erratically, so it could be argued both ways for sure. Which leads into...

3. Inequity of resources (big sites versus small sites): I speak to that a bit in my response to DanThies. Basically, the raw data also shows results summarized for long tail searches (4 or more keywords), and those showed similar results, which led us to believe that these inequities washed out over the large scale of the data set. We didn't run an exclusion of big brands like you mention (say, the Alexa top 1000 sites), but the assertion is those would be less prominent on long tails. Still, that would be a useful follow-on study to confirm or deny.

Hope that helps explain loosely where our brains were coming from. Curious to learn more about your research as well - do you have a link or something we can check out to compare notes?

Thanks for the kind words on the study - we'd definitely love to see further independent research as well!

1. Branded queries: we had similar thoughts, so we actually looked into that as well. If you grab the raw data from our download page (warning: it's a 33 MB Excel file) and open the "Rank Medians Long Tail" tab, you'll see some analysis we did on just the search terms that had 4 or more keywords in them, for example "science fair project ideas". The assumption was these long tail queries for the most part would have less interaction from well known brands and highly competitive terms. The TTFB metrics for this set were even MORE pronounced, while some of the other factors were more erratic. Since this post was already getting long we didn't dive into those results, but perhaps it would make sense in a followup post.

2. 50 results: Our focus was more on Google's algorithm and a large representative set to track trends, but even if you chart just the first 10 ranks you'll see a clear trend up. In fact, the first 7 are even more pronounced on a number of the factors, so I think the observed correlations still hold.

3. Grouping by speed: agreed, there are a lot of variables and unknowns, so to anyone outside Google this is more an art of educated guessing. I'm not sure I fully follow the bucket methodology you propose. Is the intent to filter out different "classes" of sites, say the Amazons versus the Magento sites, that sort of thing? If so, I agree that would be interesting, but wouldn't that also trend out over lower ranks? (In the graphs we shared, several of the factors had sharper curves in the first 1-7 results, which may be partially explained by that phenomenon.)

elleyo, I'm curious: can you clarify, in technical terms if possible, what you mean by "time to start render"? We did capture document complete time, which is the time it takes to load the initial DOM, as well as the time to render all visible elements (the "page load time" section). The document complete time represents loosely when a site starts becoming "useful" to the user (okay, I know that is subjective, but let's start there). Would that capture your intent here, or were you thinking of something else?

Not entirely sure what the question is here, but it does introduce a noteworthy topic. We too were wondering if these results could be swayed by reliability issues (say, time of day, erratic load on the target site, etc.), so we later re-ran a smaller subset (10k total pages instead of 100k) with a higher retry rate ("median of 3" versus "single run") for each page. The results we saw were very similar, leading us to believe that any anomalous individual readings were washed out by the data size (another reason why we chose medians instead of averages). We didn't get into that in this post for brevity's sake, but could dive into it more in a future post if there is sufficient interest.

We've also published all the raw data on our website, so check it out for further analysis: http://zoompf.com/search-ranking-factors

Thanks Carla, completely agree. The last thing we want people to take away from this study is that page load time is not important. It's critical! While it may not (yet) have an impact on search ranking, it has a HUGE impact on user experience, ad conversions, cart abandonment, engagement, etc.

Thanks, yeah, I think there are some more interesting results to tease out of this. For example, why does page size dip sharply for the first 5 ranks or so? Is it that the top sites invest in minification and optimization strategies, while those "below the fold" generally do not? It's not clear.

Similarly, why do the top 2 ranks have higher total image sizes? Is it that the top sites overly promote advertisement and ancillary products versus the rest? Again not sure.

Joshua, I couldn't agree more. We tried to make it clear in the post disclaimer that these results are only correlative. We can't outright prove anything without access to all the factors, which is something only Google can do (and they never will, since that's the secret sauce of their algorithm). Matt Peters' prior post on search ranking factors also takes additional factors into his analysis, but again those are correlative as well.

Still, short of all the information, the trends over 100k unique URLs are telling and lead to plausible hypotheses, which is what we are proposing here. We also made the raw data available for download on our site for further analysis. We would love to see independent corroboration or refutation of what we've found in a future post!

Phillip, this result surprised us as well, but the more we thought about it, the more it made sense. Time to First Byte is by far the easiest (and quickest) metric to capture. With that said, the crawler is always evolving and I would not be surprised if that changes rapidly in the future. Page load time remains a critical factor to positive user engagement, so the takeaway should still remain that optimizing load times is worthwhile and will only provide further benefit in the future once the crawler catches up.