Hi. I am not sure where to put this post. Sorry in advance if this is the wrong place.

The folks at SellerDeck have contacted me to say that since January 2017 my web server utilisation (data volume) has gone up by 800%, from 0.5 GB per month to 4 GB.

This seems very odd, as Google Analytics over the same period shows my monthly traffic has gone down by approximately 60% (users, sessions and page views). I have checked, and the behaviour of users and the pattern of page views are broadly similar. I have also not structurally changed the site.

I was wondering if anyone else has been contacted by SellerDeck and seen similar behaviour?

I have had a look through the stats. They only go back 3 months, so I can't compare with January to see why the traffic is going down while the data volume is hugely increasing.

However, they did give me data for the current month, showing the highest URLs by hits and data (as the SellerDeck report calls them). The top 10 were a series of JavaScript files like actinicextras and jQuery 1.8.3 min. They had several GB of data against them, even though no actual pages were showing up in the top URLs.

It seems very odd that something like actinicextras, which is a small piece of JavaScript, could create so much data (the biggest by a factor of 5, and 90%+ of my total volume). It does not touch any of the images on the site, which the actual web pages do, and the images are by far the biggest part of any of our pages.

I would be very interested to hear whether anyone else is seeing these JavaScript files consuming so much data, and in any ideas on why it is happening.

Actinicextras.js is a largish file but shouldn't be anywhere near 90% of your traffic usage. It seems to be about 5% of your home page size (23 KB of a 640 KB total).

I was going to suggest you enable compression but it seems that is already running.

I also checked and it is only being loaded once on the page.

The only thing I can think to check is that your page design uses the standard JavascriptFunctions selector, as that should avoid the files being fetched from the server when the site is being previewed or worked on on the desktop.

Norman. Thank you. That is good to know. I will certainly make that change.

I guess I am still struggling to understand where the growth is coming from. I actually missed a digit off the data used: according to SellerDeck we used 0.5 GB in January and 36 GB in May (72 times more). Google Analytics tells us the traffic has dropped by over 50% (page views).

The URL parameter forces the external files to be reloaded following a software upgrade, when the parameter value is changed. It was added due to a number of support tickets over the years, where the cause was the wrong file version being served from the browser cache.

The parameter does not prevent the files being cached at other times, when the parameter value is unchanged. We have tested this live, and in some depth on one site in particular. You can see it for yourself in your browser by enabling the tools (F12), selecting the Network tab and refreshing the page. It will show you which resources came from the cache and which did not.

One of our technical team had a quick look at your site and observed that you have cache control disabled, and this will result in files being reloaded more often than may be necessary. Cache control is configured through settings in your .htaccess file. It's not something that our support team cover, or that I can advise on here; but AIUI it's not unduly complex if you wanted to research it.

If you do make a change to your .htacess file, make sure you take a copy of the original first, so that you can restore it if necessary. An error in the file does have the potential to completely disable your site.
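For anyone researching this, the directives involved are typically along these lines in .htaccess. This is only a rough sketch of the idea, not SellerDeck's recommended settings; it assumes Apache with mod_expires available, and the types and lifetimes would need tailoring to the site:

```apache
# Sketch only - adjust MIME types and lifetimes to suit your site.
<IfModule mod_expires.c>
  ExpiresActive On
  # Let browsers reuse static assets for a week before revalidating
  ExpiresByType application/javascript "access plus 7 days"
  ExpiresByType text/css "access plus 7 days"
  ExpiresByType image/jpeg "access plus 7 days"
</IfModule>
```

The `<IfModule>` wrapper means the site keeps working (just without the caching headers) if the module isn't enabled on the server.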

There are a LOT of HTTP servers, proxy servers and content delivery networks out there (each capable of being configured in many different ways), all interacting with a handful of different web browsers and operating systems, so it may be difficult to be definitive about what is cached and what isn't.

I chipped in because earlier today I'd fallen foul of what Bruce mentioned about old JS files being served from the cache. I'm working on some customisation that requires a JavaScript helper file to be created from a small subset of the content tree on every upload. The changes I'd made in the site contents were in my uploaded JS file, but my Apache 2.2 server serving into Firefox (I didn't test other browsers) appeared to misbehave because Firefox was using the cached prior version. My solution was to incorporate the MD5 of the JavaScript file as the query string. E.g.:
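A minimal sketch of the technique in Python (the helper.js name and content here are placeholders for illustration, not the actual SellerDeck helper file): because the digest changes whenever the file content changes, the URL changes too, so the browser cache can never serve a stale copy.

```python
import hashlib
from pathlib import Path

# Hypothetical helper file and content, for illustration only.
Path("helper.js").write_text("var tree = {};")

def cache_bust_tag(path):
    """Build a <script> tag whose query string is the MD5 of the file,
    so the URL (and hence the browser's cache key) changes whenever
    the file content changes."""
    digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
    return '<script src="{0}?{1}"></script>'.format(path, digest)

tag = cache_bust_tag("helper.js")
print(tag)
```

The same idea works with any digest or version number, as long as the value embedded in the page is regenerated on every upload.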

Norman, that's a nice technique but won't actually work if the .html file is not reloaded by the browser.

Cache control is part of the HTTP standard. My experience of it over a variety of devices and browsers is that it works very well. It reduces bandwidth and improves responsiveness of the site.

In a recent major update to a very busy site (typically hundreds of people online simultaneously) we used cache control to ensure that customers didn't have problems with new JavaScript files. Previously this was not done, and there were many customer issues with mismatched .js, .css and .html files. They now have cache control set to 1 hour for .js, .html and .css. We set it to 1 second an hour before the publish, then set it back to 1 hour afterwards. During the publish, each browser checked with the server that every file was up to date (and only transferred the actual file data if it had changed). There were no customer complaints.
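The approach above could look something like this in .htaccess, assuming Apache with mod_headers enabled (values illustrative, not the actual configuration we used):

```apache
# Normal running: browsers may reuse these files for an hour.
<IfModule mod_headers.c>
  <FilesMatch "\.(js|css|html)$">
    Header set Cache-Control "max-age=3600"
  </FilesMatch>
</IfModule>
# An hour before publishing, change max-age=3600 to max-age=1 so every
# browser revalidates with the server; change it back after the publish.
```

Because revalidation only transfers file data when the content has changed, the short max-age window costs very little bandwidth.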

With the Secret Garden Quilting site, browsers are checking with the server too often to see whether files have changed. This affects all files, but because actinicextras.js is large and common to every page it shows up as the worst offender in the stats.

An addendum: the variable UploadReferenceNumber holds a number that is incremented on every publish/refresh. We use it in the filtering cache, which has to work reliably regardless of the server cache settings.

Yes, this happened to us too. Back in January 2016 we upgraded one of our sites to the newest Sellerdeck 2016, and the traffic to this site increased hugely almost overnight. We had to upgrade our hosting to 'SellerDeck Hosting - GOLD Plus' and we are still exceeding our bandwidth allowance, which is odd because the website is actually less busy than it used to be. Another website running on Sellerdeck 2013 uses a fraction of that bandwidth even though it is not less busy (possibly even busier), and I am very reluctant to upgrade it.
As we are a Sellerdeck Cover customer and we also pay for Sellerdeck Hosting, would Sellerdeck look into it for us?
Aneta

Aneta, if you contact support they should be able to point you in the right direction, or they will have access to people who can.

I did notice you have two sections, Browse by Manufacturer and Browse by Colour, both with 'View All' links that load every product on the site, using full-size images. Those will consume a lot of bandwidth every time they are viewed. They will also load very slowly, which isn't good for usability or for your Google rankings.

I suggest setting 'Is Full Page Included in Pagination' to 'False' in the Section Details for both of those sections.

Alternatively you could create separate subsections for each manufacturer and each colour, and populate those dynamically using filters.

You could reduce the bandwidth and the page load times further by creating Thumbnail images for products, and using those in subsection pages instead of the full product image. That would require a small change to whichever product layout you are using for those pages.

I am a bit confused about the content caching point. This is not something we have set up or disabled. The product site (actually SD 2013) is running on the SellerDeck servers configured by you guys, and running the 12.06 build that was provided to us by support.

Is turning caching off a standard part of the SellerDeck server configuration?