What would be a good way to attempt to load the hosted jQuery at Google (or other Google hosted libs), but load my copy of jQuery if the Google attempt fails?

I'm not saying Google is flaky. There are cases where the Google copy is blocked (apparently in Iran, for instance).

Would I set up a timer and check for the jQuery object?

What would be the danger of both copies coming through?

Not really looking for answers like "just use the Google one" or "just use your own." I understand those arguments. I also understand that the user is likely to have the Google version cached. I'm thinking about fallbacks for the cloud in general.

Edit: This part added...

Since Google suggests using google.load to load the ajax libraries, and it performs a callback when done, I'm wondering if that's the key to serializing this problem.

I know it sounds a bit crazy. I'm just trying to figure out if it can be done in a reliable way or not.
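For context, the general shape I'm imagining: try the CDN copy first, then test for the jQuery object and fall back to a local copy (the local path here is a placeholder):

```html
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
<script>
  // If the CDN copy failed (blocked, offline, etc.), window.jQuery is
  // undefined and we write out a tag for the local copy instead.
  window.jQuery || document.write('<script src="/js/jquery-1.3.2.min.js">\x3C/script>');
</script>
```

Because document.write injects the tag at parse time, the local copy still loads before any later scripts that depend on jQuery.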

Good question. Also, good caveat. Too many SO answers are "don't do that!" not: "Ok, here's how to do that."
– Jay Stevens, Jun 18 '09 at 17:54

Of course the first answer was "don't use the google hosted version." :-)
– Nosredna, Jun 18 '09 at 17:59

Of course it was, because if you want to host a serious web site, you don't rely on someone else hosting your files.
– Bryan Migliorisi, Jun 18 '09 at 18:09

@Bryan Migliorisi, I guess Twitter is not that serious after all? But I admit they had their problems with Google about a month ago when Google went down.
– Ionuț G. Stan, Jun 18 '09 at 18:11

The question of whether to use Google for JS lib hosting is a worthy one, but it's been discussed in several other threads. I was looking for technical answers regarding JS fallback on loading delays.
– Nosredna, Jun 18 '09 at 18:18

JavaScript downloads should be synchronous already, as Matt Sherman said. Otherwise, many problems would occur if the page tried to execute an inline script that relied on a library that was only half downloaded, or a library extension was executed without the library fully downloaded and executed. That's also one reason why Yahoo YSlow recommends placing JavaScript at the end of pages: so that it doesn't block the downloading of other page elements (including styles and images). At the very least, the browser would have to delay execution so that it occurs sequentially.
– gapple, Jun 24 '09 at 20:16

Small fix from a validator fanatic: the string '</' is not allowed inside an inline script block, because it could be misinterpreted as the end of the script tag (SGML short tag notation). Do '<'+'/script>' instead. Cheers,
– Boldewyn, Jun 26 '09 at 8:13

This example will not work. 1) If the Google ajax library is not available, the request has to time out before failing, and this may take a while. In my test of disconnecting my computer from the network, it just tried and tried and never timed out. 2) if (!jQuery) will throw an error, because jQuery is not defined, so JavaScript doesn't know what to do with it.
– RedWolves, Jun 28 '09 at 13:42

To test if jQuery was loaded, (!window.jQuery) works fine, and is shorter than the typeof check.
– Jörn Zaefferer, Jul 26 '09 at 12:52
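A quick runnable illustration of the difference between the two tests, using a plain object as a stand-in for the browser's window:

```javascript
// Reading an undeclared bare identifier throws a ReferenceError:
var threw = false;
try {
  if (!jQueryThatWasNeverDeclared) { /* not reached */ }
} catch (e) {
  threw = (e instanceof ReferenceError); // true
}

// Reading a missing property off an existing object is safe:
var w = {};                   // stand-in for the browser's window; no jQuery loaded
var needFallback = !w.jQuery; // true, and no exception is thrown
```

This is why `if (!jQuery)` blows up when the CDN load failed, while `if (!window.jQuery)` quietly tells you the fallback is needed.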

People keep on asking me to remove the type="text/javascript" parts, so to people writing html for older browsers, note that you'll now have to add that in.
– BenjaminRH, May 28 '13 at 8:48

@BenjaminRH: type="text/javascript" was unnecessary in older browsers too, since they all defaulted to Javascript. Really older browsers looked at the language attribute; but even then, Javascript was the default if the attribute was missing.
– Martijn, Jul 26 '13 at 16:30

The way it works is to use the google object that calling http://www.google.com/jsapi loads onto the window object. If that object is not present, we are assuming that access to Google is failing. If that is the case, we load a local copy using document.write. (I'm using my own server in this case, please use your own for testing this).

I also test for the presence of window.google.load - I could also do a typeof check to see that things are objects or functions as appropriate. But I think this does the trick.

Here's just the loading logic, since code highlighting seems to fail since I posted the whole HTML page I was testing:
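(Reconstructed sketch of the logic described above; the local path is a placeholder, so use your own copy:)

```html
<script src="http://www.google.com/jsapi"></script>
<script>
  // If the jsapi script loaded, the gatekeeper object and its load
  // function exist on window; use google.load as Google suggests.
  if (window.google && window.google.load) {
    google.load('jquery', '1.3.2');
  } else {
    // No access to Google: fall back to a local copy via document.write.
    document.write('<script src="/js/jquery-1.3.2.min.js">\x3C/script>');
  }
</script>
```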

Though I must say, if this is a concern for your site's visitors, I'm not sure you should be fiddling with the Google AJAX Libraries API at all.

Fun fact: I tried initially to use a try..catch block for this in various versions but could not find a combination that was as clean as this. I'd be interested to see other implementations of this idea, purely as an exercise.

What is the advantage of using google.load in this situation, rather than loading ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js directly, like Rony suggested? I guess loading it directly catches issues with removed libraries as well (what if Google stops serving JQuery 1.3.2). Furthermore, Rony's version notices network problems AFTER www.google.com/jsapi has been fetched, especially when jsapi has been loaded from cache? One might need to use the google.load callback to be sure (or maybe there's some return value to include the google.load in the if(..)).
– Arjan, Jun 27 '09 at 8:32

If one is testing for the presence of Google.com, one could make a network call, or one could check for the presence of the "gatekeeper" object. What I'm doing is checking for the google object and its "load" function. If both of those fail, no google, and I need the local version. Rony's version actually ignores the www.google.com/jsapi URL entirely, so I'm not sure why you indicate that it will have been fetched.
– artlung, Jun 27 '09 at 15:26

In the end, all that's required is that the jquery library is loaded. Any Google library is not a requirement. In Rony's answer, one knows for sure if loading from Google (or the cache) succeeded. But in your check for "if (window.google && window.google.load)", the jquery library is still not loaded. The actual loading of the jquery library is not validated?
– Arjan, Jun 27 '09 at 18:45

ah, I see how I caused the confusion. "Rony's version notices network problems AFTER www.google.com/jsapi has been fetched" should better read: "Your version does not notice network problems AFTER www.google.com/jsapi has been fetched".
– Arjan, Jun 27 '09 at 18:47

We've recently switched to using Google as our jQuery host; if we get any bug reports from blocked users, I'll be using a variant of your answer to refactor our client code. Good answer!
– Jarrod Dixon♦, Jul 26 '09 at 7:45

This loads jQuery from the Google CDN. Afterwards, it checks whether jQuery was loaded successfully. If not ("nope"), the local version is loaded. Your personal scripts are loaded as well; the "both" indicates that this load process is initiated independently of the result of the test.

When all load processes are complete, a function is executed, in this case 'MyApp.init'.

I personally prefer this way of asynchronous script loading. And since I rely on the feature tests provided by Modernizr when building a site, I have it embedded on the site anyway, so there's actually no overhead.
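A sketch of what that can look like with Modernizr's bundled loader (yepnope); the paths, jQuery version, and the MyApp name are placeholders:

```javascript
Modernizr.load([
  {
    // Try the CDN copy first.
    load: '//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js',
    complete: function () {
      // The "nope" case: the CDN copy didn't arrive, so load the local version.
      if (!window.jQuery) {
        Modernizr.load('/js/jquery-1.7.2.min.js');
      }
    }
  },
  {
    // These personal scripts load regardless of the test above ("both"-style).
    load: ['/js/plugins.js', '/js/myapp.js'],
    complete: function () {
      // Everything is in: boot the app.
      MyApp.init();
    }
  }
]);
```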

I think you're missing the point of the question: how would you go about loading the Modernizr script itself from a CDN?
– geo1701, Apr 26 '13 at 7:06

I can't recommend loading Modernizr from a CDN. One should rather get the smallest custom build from modernizr.com.
– Emanuel Kluge, Jun 12 '14 at 10:45

So this option gets +16, compared to the 500/200+ the other options are getting. But this sounds quite good. Is it just not popular because it relies on Modernizr? I happen to be using Modernizr on our site anyway, so if this is better than the other answers, can someone let me know? I'm quite new to jQuery, so clarification is appreciated.
– Russell, Oct 10 '14 at 9:02

There are some great solutions here, but I'd like to take it one step further regarding the local file.

In a scenario where Google does fail, it should load a local source, but maybe a physical file on the server isn't necessarily the best option. I bring this up because I'm currently implementing the same solution, only I want to fall back to a local file that gets generated from a data source.

My reason for this is that I want some peace of mind when it comes to keeping track of what I load from Google vs. what I have on the local server. If I want to change versions, I want to keep my local copy synced with what I'm trying to load from Google. In an environment with many developers, I think the best approach is to automate this process so that all anyone has to do is change a version number in a configuration file.

Here's my proposed solution that should work in theory:

1. In an application configuration file, store three things: the absolute URL for the library, the URL for the JS API, and the version number.

2. Write a class which gets the file contents of the library itself (using the URL from the app config) and stores them in my datasource along with the name and version number.

3. Write a handler which pulls my local file out of the DB and caches it until the version number changes.

4. If the version does change (in my app config), my class will pull the file contents based on the new version number and save them as a new record in my datasource; then the handler will kick in and serve up the new version.

In theory, if my code is written properly, all I would need to do is change the version number in my app config, then voilà! You have an automated fallback solution, and you don't have to maintain physical files on your server.
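A minimal sketch of that flow (all names here are invented, and the "datasource" is an in-memory stand-in for a real DB table):

```javascript
// App config: where the library lives on the CDN and which version to serve.
var config = {
  cdnUrlTemplate: 'https://ajax.googleapis.com/ajax/libs/jquery/{v}/jquery.min.js',
  version: '1.3.2'
};

// In-memory stand-in for the datasource of stored library copies.
var datasource = {};

// Step 2: store a fetched copy of the library under its version number.
function storeLibrary(version, contents) {
  datasource[version] = contents;
}

// Steps 3/4: the handler serves the cached copy for the configured version,
// or signals that a re-fetch is needed when the version has changed.
function serveLocalCopy() {
  return datasource[config.version] || null;
}
```

Bumping config.version then makes serveLocalCopy() return null until the new version has been fetched and stored, which is exactly the "change one number and everything follows" behavior described above.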

What does everyone think? Maybe this is overkill, but it could be an elegant method of maintaining your ajax libraries.

If you're doing all that work just for jQuery, then I'd say it is overkill. However, if you already have some of those components in place for other pieces of your app (e.g. if you already load scripts from a DB) then it looks pretty nice.
– Michael Haren, Jan 19 '10 at 13:28

+1 for being thorough and novel, though I'm not convinced the benefit justifies the dev time and complexity.
– Cory House, Dec 7 '11 at 20:24

Is there really a need for more than one fallback? If both are offline, the user will be waiting for over a minute before seeing your site.
– geo1701, Apr 26 '13 at 7:09

It doesn't take a minute for a script to fail to load, does it?
– Edward Olamisan, Apr 26 '13 at 17:12

@geo1701 and Edward, there is really no need for a third. Even one fallback has yet to be proven reliable. If the Google API is down, I have not seen any guarantee that the first attempt will fail at all. I experienced a scenario where a CDN never failed to load, holding the page from ever rendering, as mentioned here: stevesouders.com/blog/2013/03/18/http-archive-jquery/…
– hexalys, Apr 30 '13 at 0:25

None, really. You'd waste bandwidth and might add some milliseconds downloading a second, useless copy, but there's no actual harm if they both come through. You should, of course, avoid this using the techniques mentioned above.

This will not work if the CDN version fails to load: the browser runs past this condition and, while the fallback copy is still downloading, continues on to the rest of the scripts that need jQuery, which then throw errors. The solution was to load those dependent scripts through the same condition.

I found one problem when testing scripts in Google Chrome: caching. For local testing, just replace the src in the else section with something like `s.src='my_javascripts.js'+'?'+Math.floor(Math.random()*10001);`
– Mirek Komárek, Feb 27 '11 at 9:42

I think you should escape the last `<` as `\x3C` in the string. When the browser sees `</script>`, it considers this to be the end of the script block (since the HTML parser has no idea about JavaScript, it can't distinguish between something that just appears in a string and something that's actually meant to end the script element). So `</script>` appearing literally in JavaScript that's inside an HTML page will (in the best case) cause errors, and (in the worst case) be a huge security hole.
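For illustration, a safe way to build the fallback tag as a string (the script path is a placeholder):

```javascript
// "\x3C" is "<", so the page source never contains a literal "</script>"
// inside the string, but the string the browser writes out still does.
var fallbackTag = '<script src="/js/jquery.min.js">\x3C/script>';
```

An HTML parser scanning the page sees only `\x3C/script>`, which it ignores, while document.write receives a fully formed tag at runtime.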

I've never heard of Razor, but it looks like an obfuscator, except that it makes the code longer rather than shorter (it's twice as long as this).
– maaartinus, May 15 '14 at 23:37

@maaartinus: That's not an apples-to-apples comparison. BenjaminRH's answer, which you refer to, is for a single CDN-hosted script. With the CdnScript helper, you need only one line of code per script. The more scripts you have, the bigger the payoff.
– Edward Brey, May 17 '14 at 1:50

Sure... it was just a rant. However, I guess that's not the optimal way. If anything fails, I'd ignore the CDN completely and switch to the fallback for all scripts. I'm not sure if this is doable, as I don't know exactly how the loading works.
– maaartinus, May 17 '14 at 5:54

@maaartinus: Since each CDN script load can fail independently, you have to check each load separately. There is no reliable method of a single CDN check followed by loading all scripts from CDN vs. locally.
– Edward Brey, May 17 '14 at 11:37

The case that worries me is a failure of the CDN site leading to wait times for many loads. So I'd like to have something like `try { for (Script s : ...) cdnLoad(s); } catch (...) { for (Script s : ...) ownLoad(s); }`. Translating this into a bunch of ifs could be a nightmare.
– maaartinus, May 17 '14 at 11:48

For long-term issues, it would be better to log jQuery fallbacks. In the code above, if the first CDN is not available, jQuery is loaded from another CDN. But you might want to know which CDN failed so you can remove it permanently (this is a very exceptional case). It is also better to log fallback issues, so you can report erroneous cases with AJAX. Because jQuery isn't defined at that point, you should use vanilla JavaScript for the AJAX request.
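A hedged sketch of such a report, in plain XMLHttpRequest since jQuery may not exist when the fallback fires (the endpoint URL is a placeholder for your own logging service):

```javascript
// Report which CDN failed so it can be reviewed and possibly removed.
function logCdnFallback(failedUrl) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/log/cdn-fallback', true); // placeholder endpoint
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.send('url=' + encodeURIComponent(failedUrl));
}
```

Calling this fire-and-forget from the fallback branch keeps the page load unaffected while still recording the failure server-side.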