(07-30-2015 06:01 AM)moojjoo Wrote: Psst. I do not have permission to adjust the images, can only stress the need to reduce them

Makes it really difficult to do any meaningful or efficacious site optimization.

Quote:However, when I run in Chrome Developer tools

Applying the scientific method means we use only one set of test tools and parameters, as a control. I don't use any testing tool other than WebPageTest, because it uses real-world connections and real-world browsers, and those are what I am interested in satisfying.

Cloudflare is a CDN by any definition. They also do other things, but they operate in pull mode, caching static resources and serving them directly from globally distributed edge nodes.

As far as the target TTFB being unrealistic for CDNs - yes, it's unfortunate and I am considering a few options to make it better, but when the base page is served through a CDN it throws off a bunch of the calculations. From the client side, in order to estimate the server processing time you need to remove the network round-trip time (usually a function of the distance to the server). Unfortunately, with a CDN the round-trip time is to the CDN edge and not the origin server (additionally, if the CDN does not maintain a persistent connection to the origin, it may be hiding a DNS lookup and socket connect in the "server time" as well). WPT targets 100ms of server processing time when setting the target (which is totally achievable). In the case of connecting to a CDN, I may just put in a fixed 250ms estimated server RTT, which is enough to allow the origin to be on the other side of the world and still provide a reasonable target.
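The adjustment being described can be sketched as simple arithmetic. This is a hypothetical illustration of the logic in the post, not WPT's actual implementation; the function name and the way the 250ms allowance is combined with the measured edge RTT are assumptions.

```python
def target_ttfb_ms(measured_rtt_ms: float, behind_cdn: bool) -> float:
    """Sketch of a 'reasonable' first-byte target, per the post above.

    Hypothetical: WPT aims for ~100 ms of server processing on top of
    the network round trip. When the base page comes through a CDN, the
    measured RTT is to the edge, not the origin, so a fixed 250 ms
    edge-to-origin allowance is added so the origin can be anywhere in
    the world and still hit the target.
    """
    SERVER_PROCESSING_MS = 100   # WPT's target for back-end work
    CDN_ORIGIN_RTT_MS = 250      # generous edge->origin allowance

    if behind_cdn:
        return measured_rtt_ms + CDN_ORIGIN_RTT_MS + SERVER_PROCESSING_MS
    return measured_rtt_ms + SERVER_PROCESSING_MS

# e.g. 40 ms to a nearby edge, origin possibly far away
print(target_ttfb_ms(40, behind_cdn=True))   # 390
print(target_ttfb_ms(40, behind_cdn=False))  # 140
```

The point of the fixed allowance is that the grader stops penalising a site just because its measured edge RTT is tiny while the hidden edge-to-origin hop is not.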

As to images impacting TTFB, in theory and on a well configured server it should have zero impact since the back-end doesn't know or care about the images when serving the html (in most cases). If the server is bandwidth constrained, out of clients to handle responses or otherwise not configured well then it's possible that the image requests for other users is using server resources and slowing down the tested TTFB. If the application logic for the base page actually opens and parses the images before serving the html then that could also cause an impact. In either case the root cause should be addressed rather than optimizing the images to improve TTFB. The images should absolutely still be fixed because that will impact the user experience, your bandwidth, etc - it just shouldn't be targeted for TTFB optimizations.

(07-30-2015 11:03 PM)pmeenan Wrote: Cloudflare is a CDN by any definition. They also do other things but they operate in a pull mode, cache static resources and serve them directly from globally distributed edge nodes.

As far as the target TTFB being unrealistic for CDN's - yes, it's unfortunate and I am considering a few options to make it better but when the base page is served through a CDN it throws off a bunch of the calculations. From the client side, in order to estimate the server processing time you need to remove the network round trip time (usually a function of the distance to the server). Unfortunately with a CDN, the round trip time is to the CDN edge and not the origin server (additionally, if the CDN does not maintain a persistent connection to the origin it may be hiding a DNS lookup and socket connect in the "server time" as well). WPT targets 100ms of server processing time when setting the target (which is totally achievable). In the case of connecting to a CDN I may just put in a fixed 250ms estimated server RTT which is enough to allow the origin to be on the other side of the world and still provide a reasonable target.

Thanks for taking the time to consider this; it sounds like a reasonable fix. "Achievable", though, as regards 100ms - almost anything is "achievable" depending on what cost and resources you throw at it. IMO 100ms isn't reasonably achievable for the average, lower-skilled site owner.

Quote:As to images impacting TTFB, in theory and on a well configured server it should have zero impact since the back-end doesn't know or care about the images when serving the html (in most cases). If the server is bandwidth constrained, out of clients to handle responses or otherwise not configured well then it's possible that the image requests for other users is using server resources and slowing down the tested TTFB. If the application logic for the base page actually opens and parses the images before serving the html then that could also cause an impact. In either case the root cause should be addressed rather than optimizing the images to improve TTFB. The images should absolutely still be fixed because that will impact the user experience, your bandwidth, etc - it just shouldn't be targeted for TTFB optimizations.

I only know what I've seen over the last few years of fixing performance issues for many people, which is: with no changes other than eliminating bloat - which is almost always mainly in the images - TTFB improves. I can only speculate that at some point during the handshake the server "tells" the browser how much data, in KB, is coming, and that for whatever reason a big number delays the negotiation. Sounds dumb, I admit, but as I said I am at a loss to explain the results and observations otherwise. Just like I was only speculating about CDNs causing a really low target FBT time - which, as it turns out, you have verified is true.

Quote:it just shouldn't be targeted for TTFB optimizations.

And I don't target it for that reason; I've only noticed that getting rid of the bloat has the happy side effect of helping TTFB. I go after the bloat first because it is among the easiest fixes and one of the best performance-boosting things a site owner can do - not to target TTFB improvement.

All I do for people is try to help them based on what has worked every time it is tried, and I try to keep it limited to things I know the average site owner can do. Honestly, I don't go much beyond that because I know I am not qualified to. I can give people the basic stuff, the common-sense stuff, stuff that always at least helps - but I pretty much stop there.

OK, I look at a waterfall of a website, and the x-axis is time. Are we all agreed on that?

So how does the delivery of a static resource *after* the HTML has finished being delivered directly affect its delivery, without the use of a Tardis?

As Patrick says, it is possible to indirectly affect it, either by weird processing or bandwidth constraints (possibly?), but these are predominantly infrastructure resource/configuration/design problems and need to be addressed as such. Banging on blindly about content won't make them go away; you need to monitor, analyse and address them.

Whilst it is possible that adaptive network traffic techniques may throttle a large page, the HTTP response header only contains the size of the HTML document, which itself only contains pointers to other content. If you're commenting on these things, Anton, you really should have a grasp of the basics...
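To make the point concrete: the Content-Length the server announces for the HTML response counts only the HTML bytes, not the resources it references. A minimal illustration (the filename is invented):

```python
# The HTML below references a hypothetical image that could be many
# megabytes on disk, yet the Content-Length a server would send for
# this response is just the byte length of the HTML itself. The image
# size is invisible at this point in the exchange, so it cannot delay
# the HTML's own delivery at the protocol level.
html = (
    b"<html><body>"
    b"<img src='/photos/huge-uncompressed.jpg'>"  # could be 10 MB on disk
    b"</body></html>"
)

content_length = len(html)  # what would go in the Content-Length header
print(content_length)       # a few dozen bytes, regardless of image size
```

Whatever the images weigh, the browser only discovers them after parsing the HTML and issuing separate requests.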

I still contend that CloudFlare is primarily a proxy server. It takes over your DNS and redirects ALL traffic via its network (which, as I've also said before, *still* runs on the same old internet as the rest of us *at the moment*!), not just static content. As such, any request for dynamic content will need to be proxied back to the original server before delivery, or served out of date (which of course can't happen, as Bill Smith would get Fred Bloggs's content </massive oversimplification>).

Maybe I'm a traditionalist, as I see no point in using CDNs for anything other than static content, to relieve the bandwidth resources on the primary server, leaving it more able to deliver the page HTML *directly* to the browser. CF is a simple implementation for non-technical users which does far more than just that.

Personally, because I work primarily with eCommerce solutions, I *have* to work on TTFB optimisations, as every tenth of a second counts. Because of this, I shudder at the use of proxy server solutions, but make heavy use of CDNs. I also take great care in the placement of servers close to the target audience.

( Let's see you get close to http://www.webpagetest.org/result/150804_RV_1D06/ with CloudFlare - the catalog display page is traditionally the slowest page on a Magento site - this is the standard sample data provided by Magento, so don't comment on the image size please! It's running on a cheap blade server in Sydney, load average: 1.84, 1.41, 1.31, as is the 'CDN' - just cookie-free access to the same server. Alternatively my homepage - Drupal on the same server - http://www.webpagetest.org/result/150804_JA_1D6K/ ).

Not sure what you're asking here. None of my own sites, nor any of the dozens of sites I have optimized for people, use CF or any other "CDN" anymore (if they ever did). And they all get straight A grades.

As Patrick confirmed, anyone using a CDN likely won't be getting a B for FBT due to WPT "grading on a curve" for the target FBT, making every site that is on a CDN fail this test. He said he would be fixing that, and when he does I can see B grades and maybe even A grades being possible for "CDN" users.

(08-05-2015 12:51 PM)Anton Chigurh Wrote: Not sure what you're asking here. None of my own sites nor any of the dozens of sites I have optimized for people, use CF or any other "CDN" anymore, if they did. And they all get straight A grades.

Well, this is a Magento demo site. What CMSes have you optimised, and where are the examples? You're coming over as talking a good fight at the moment.

FWIW, routing dynamic traffic through a CDN can also make sense, as long as the CDN is good at it. Akamai calls it DSA, but most have offerings in that space.

It can reduce the connection set-up time between the users and the edge, but that only works well if the CDN maintains long-lived connections back to the origin or otherwise routes the requests through its network and egresses close to the origin. It can also help with slow start in that configuration.
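A back-of-envelope sketch of why that can pay off, with entirely hypothetical latencies and a pre-TLS-1.3 handshake count assumed:

```python
# Hypothetical numbers: why terminating the connection at a nearby edge
# can beat connecting straight to a distant origin, *provided* the edge
# keeps warm, already-established connections back to the origin.
EDGE_RTT = 20     # ms, client to nearby CDN edge
ORIGIN_RTT = 200  # ms, client direct to distant origin

# Cold HTTPS setup direct to origin: TCP handshake (1 RTT) +
# TLS handshake (~2 RTTs, pre-TLS-1.3) + the request itself (1 RTT).
direct = 4 * ORIGIN_RTT

# Via the edge: the same handshakes happen at edge latency, then one
# forwarded round trip over the warm edge->origin connection.
via_edge = 4 * EDGE_RTT + ORIGIN_RTT

print(direct, via_edge)  # 800 vs 280
```

If the edge has to open a *cold* connection to the origin per request, the advantage largely evaporates, which is exactly the caveat in the post above.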

Some of the CDN's will do more dynamic edge-serving with something like Edge-Side Includes (ESI) or flush a static initial part of the page to get the browser started while the origin does the heavy work for generating the full page.

Where it REALLY pays off is in the move to HTTP/2. Serving the static assets over the already-established connection for the base page can be a huge win.
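The HTTP/2 win can be sketched the same way. All numbers here are invented for illustration, and real browsers overlap setup and transfer in ways this ignores:

```python
# Rough, hypothetical comparison: fetching N static assets over fresh
# HTTP/1.1 connections versus multiplexing them on the base page's
# already-established HTTP/2 connection.
RTT = 100           # ms to the serving host
SETUP_RTTS = 3      # TCP + TLS handshakes, pre-TLS-1.3
ASSETS = 12
H1_CONNECTIONS = 6  # typical per-host browser connection limit

# HTTP/1.1: open extra connections (handshakes paid in parallel, so
# once in wall-clock time), then fetch assets two-deep per connection.
h1_total = SETUP_RTTS * RTT + (ASSETS / H1_CONNECTIONS) * RTT

# HTTP/2: reuse the already-open base-page connection and multiplex all
# twelve requests in roughly one round trip (bandwidth permitting).
h2_total = 1 * RTT

print(h1_total, h2_total)  # 500.0 vs 100
```

The handshake cost that disappears is the same cost the edge-termination argument above is about, which is why the two ideas compound.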

So a reduction in the client<>edge latency can be beneficial over the additional proxy overhead to the origin server? Maybe it's because I'm used to being 200ms from most people that this seems unlikely to benefit my clients but I'm far too old to believe I'm right!

What I have seen in my own plays with SPDY on nginx is that it does dramatically change the shape of the waterfall, for sure. Still plenty of scope for abuse of decent design fundamentals, like delivering far too many files, though - but maybe it'll stop this nasty habit of delivering x from here, y from there, and so on... how are you supposed to guarantee performance like that?

Depends on what the proxy overhead to the origin is. If the edge nodes are essentially in the network path (not taking the traffic significantly out of the way) then at worst it is a no-op (if a new connection is established from the edge back to the origin).

Client <-> edge <-------> origin

It's not uncommon for it to look more like:

Client <-> client-edge <-------> origin-edge <-> origin

Where the CDN routes the request most of the way through an already warm and established connection and it comes out close to the origin.

The reality is a lot more complicated but it also depends on how efficiently the CDN is operating.

As far as SPDY and nginx go, I don't think nginx implemented priorities in its SPDY implementation, which was a performance killer. Instead of returning the most important resources first, it would just shove them all down the pipe. Hopefully the HTTP/2 implementations behave better. I do know that the H2O proxy "does the right thing" with priorities.
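The difference being described can be shown with a toy scheduler. The resource names and priority weights below are invented; this is only the ordering idea, not any real SPDY/H2 priority scheme:

```python
import heapq

# Sketch of the prioritisation problem: without priorities (the nginx
# SPDY behaviour described above), resources go out in arrival order;
# with priorities, render-blocking resources (CSS, JS) jump the queue
# ahead of images. Lower number = more important; ties keep arrival
# order via the index.
arrived = [("hero.jpg", 3), ("style.css", 1), ("app.js", 2), ("photo2.jpg", 3)]

fifo_order = [name for name, _ in arrived]

heap = [(prio, i, name) for i, (name, prio) in enumerate(arrived)]
heapq.heapify(heap)
priority_order = [heapq.heappop(heap)[2] for _ in range(len(arrived))]

print(fifo_order)      # ['hero.jpg', 'style.css', 'app.js', 'photo2.jpg']
print(priority_order)  # ['style.css', 'app.js', 'hero.jpg', 'photo2.jpg']
```

Shoving everything down the pipe in arrival order is the first list; a priority-aware server emits the second, so the browser can start rendering sooner.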