Steve presented some recently published studies from Google, Microsoft and Shopzilla showing how much of a business impact web page performance has. Google and Bing presented a study showing how many users leave a site as page performance degrades, and how many fewer clicks those remaining users make. Lost users and fewer clicks translate into less ad revenue.

Shopzilla showed how they could boost their business and save on hardware by improving their web site performance:

Top Low Hanging Fruits

In Steve Souders' opinion, most web performance problems relate to how the web page is designed, leading to issues like too many embedded resources, overly large JavaScript files, missing compression and missing client-side caching. The following is a list of some low hanging fruits to speed up web site performance. I am using the free Dynatrace AJAX Edition to visualize the individual use case scenarios:

Too many network roundtrips due to too many embedded resources

Analyze the actual network roundtrips when requesting a page. Not all embedded resources (like images, CSS, …) get loaded in parallel by the browser. Each browser has a limited set of physical network connections that are used to download embedded objects. The following Dynatrace network view shows that – when accessing http://280slides.com/Editor – 172 .png images have to be loaded by the browser. It also shows how loading these objects gets deferred due to the browser's network connection limit.

Many embedded images slow down the overall page load

You can use several techniques to reduce the number of network roundtrips. If images are static, think about client-side caching. Consider CSS Sprites to reduce the number of images by merging them into one bigger image. Another technique is Domain Sharding, which works around the browser's connection limit by spreading resources across multiple hostnames.
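To make the Domain Sharding idea concrete, here is a minimal sketch (my own illustrative helper, not from the talk; the shard hostnames are made up). The key point is that the mapping from asset path to shard host must be deterministic, otherwise the same image ends up on different hostnames across pages and the browser cache is defeated:

```javascript
// Hypothetical shard hostnames serving the same static content.
const SHARDS = ['static1.example.com', 'static2.example.com'];

function shardUrl(path) {
  // Simple deterministic hash over the path characters, so the
  // same path always maps to the same shard hostname.
  let hash = 0;
  for (const ch of path) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  const host = SHARDS[hash % SHARDS.length];
  return `https://${host}${path}`;
}

// Every reference to the same image resolves to the same shard:
console.log(shardUrl('/img/logo.png'));
console.log(shardUrl('/img/logo.png') === shardUrl('/img/logo.png')); // true
```

Note that sharding trades extra DNS lookups and connection setup for more parallel downloads, so two to four shards is typically the practical range.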

Suboptimal use of Caching and Compression

Analyze the embedded objects for the use of client-side caching and compression. Static images can be cached on the browser to reduce network roundtrips on subsequent page requests. Look at the HTTP response headers to check the current cache settings, and evaluate whether uncached objects could be cached. The same holds true for compression: large text resources can be compressed to lower the required network bandwidth.

JavaScript blocks the browser

When JavaScript files are downloaded by the browser, the browser blocks all other activity, e.g. downloading additional embedded images. This holds true for older browsers (IE 6, FF 3). Newer versions of these browsers at least load other resources in parallel, but once a JavaScript file is downloaded it gets executed, and during execution all other tasks are stopped. The network request image above shows a big gap after one of the JavaScript files was downloaded. This tells us that the browser executed the JavaScript file and postponed downloading the remaining embedded objects. Looking at the JavaScript that got executed shows us where the time was actually spent:
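Wherever the execution time goes, one common mitigation for the download-blocking part is to inject scripts dynamically instead of referencing them with plain script tags in the head. A minimal sketch (my own example, assuming a hypothetical script URL):

```javascript
// Load a script without blocking parsing or other downloads by
// injecting a <script> element with async semantics.
function loadScriptAsync(src, onLoad) {
  const el = document.createElement('script');
  el.src = src;
  el.async = true;    // hint: do not block the parser while downloading
  el.onload = onLoad; // run dependent code only once the script arrived
  document.head.appendChild(el);
}
```

The trade-off is that asynchronously loaded scripts execute in no guaranteed order, so code that depends on them has to wait for the onload callback.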

Failing network requests caused by dynamically constructed URLs

AJAX frameworks tend to dynamically load embedded objects for different reasons, either using XHR (XmlHttpRequest), iFrames, or by creating DOM objects. Incorrectly constructed URLs lead to failing network requests. When several frameworks use techniques like this, it becomes hard to identify where in the JavaScript library these URLs are constructed. With the Dynatrace AJAX Edition you can solve this by drilling down from the failed network request in the network view to the PurePath view, which shows the JavaScript methods that executed the call:

JavaScript library dynamically constructs URLs

In the above example the JavaScript framework tried different network path patterns to retrieve a specific file. Each attempt caused a very expensive network roundtrip (>500 ms), totaling up to several seconds to retrieve a single file.
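A cheap defensive measure in your own code is to validate a constructed URL before issuing the request, so a bad URL fails fast locally instead of causing a series of expensive failing roundtrips. This is a hedged sketch of my own (the helper name, validation rules and example URLs are illustrative):

```javascript
// Build a resource URL and reject obviously malformed results early,
// e.g. an undefined variable interpolated into the path.
function buildResourceUrl(base, file) {
  // Normalize: exactly one slash between base and file name.
  const url = `${base.replace(/\/+$/, '')}/${file.replace(/^\/+/, '')}`;
  const pathPart = url.replace(/^https?:\/\//, '');
  if (url.includes('undefined') || pathPart.includes('//') || /\s/.test(url)) {
    throw new Error(`refusing to request malformed URL: ${url}`);
  }
  return url;
}

console.log(buildResourceUrl('http://example.com/resources/', 'Frame.png'));

// A missing variable shows up as "undefined" in the URL and is caught
// before any network roundtrip happens:
try {
  buildResourceUrl('http://example.com/' + undefined, 'Frame.png');
} catch (e) {
  console.log(e.message);
}
```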

Any other low hanging fruits?

Books like those from Steve Souders or blog entries from the web performance community cover many approaches and checklists to speed up your web page. Please share your experience and post your feedback on this blog.

Andreas Grabner has 20+ years of experience as a software developer, tester and architect and is an advocate for high-performing cloud scale applications. He is a regular contributor to the DevOps community, a frequent speaker at technology conferences and regularly publishes articles on blog.dynatrace.com. You can follow him on Twitter: @grabnerandi