Some time ago, that same page started to be the target of some bug reports stating that users would occasionally get a blank page when accessing it, and the only way to get the page back was to hit reload. This was happening on Firefox 3.0.7 and newer.

As I wrote on the previous post, our page does an RPC call while loading the header scripts, like this:
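A minimal sketch of such a header, assuming a Wonder-style setup (the script paths, the endpoint and the variable name are illustrative, not the actual ones):

```html
<head>
  <script type="text/javascript" src="/scripts/prototype.js"></script>
  <script type="text/javascript" src="/scripts/jsonrpc.js"></script>
  <script type="text/javascript">
    // This runs while the header is still being parsed: the parser
    // has to wait for the RPC round-trip before continuing
    var jsonProxy = new JSONRpcClient("/json/MyComponent");
  </script>
  <script type="text/javascript" src="/scripts/pageBehaviour.js"></script>
</head>
```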

After poking around with Firebug, I discovered that all the code below the call was being ignored by the Firefox parser. That would naturally result in an empty body, which led to a blank page.

I started googling around, and I found two interesting Firefox bug reports, 444322 and 478277.

The first of those bugs mentions the original JavaScript evaluation order issue I wrote about before (which was not, in fact, a JavaScript evaluation order issue, but one where the parser would not wait for the result of the RPC call before continuing to parse). They also mention the Firefox team released a “fix” for it in 3.0.6, but people kept reporting the issue was not yet fixed (some claimed it in fact got worse), which led to the opening of the second bug report. Also, an important fact about this bug is that it seems to happen only when all the page resources are already cached locally by the browser. This includes all the resources loaded in the page header (usually, JavaScript and CSS files).

A few interesting comments (#30 and #34) clarify what’s causing this, and mention this is a piece of “fragile code”. I don’t know enough about the Firefox code base (which is a nice way of saying I know nothing at all) to be sure about this, but I believe the “fix” caused an even bigger problem where, under some conditions, the parser enters a state where it simply eats all the input without parsing it. This leads to the blank page problem.

Well, this is all very interesting, but I had a problem that needed to be solved. So, I hacked. One of the things that prevents this bug from being triggered is having at least one of the page resources missing from the local cache when the page is loaded. So, I forced that situation to happen. I picked a small JS file that is loaded before the Ajax call, and configured Apache to add all the necessary headers so browsers won’t cache it. This is done with something like this:
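Something along these lines, assuming mod_headers is enabled (the file name is illustrative):

```
# Make sure this one JS file is never served from the browser cache,
# so at least one header resource always misses the cache
<Files "cache-buster.js">
  Header set Cache-Control "no-cache, no-store, must-revalidate"
  Header set Pragma "no-cache"
  Header set Expires "0"
</Files>
```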

It’s becoming more and more common on the web to create pages that, in some situations, display some object over the page content. This could be a photo (using LightBox), a movie (like Apple does on their trailers site for non-HD trailers), a dialog, a color-picker, etc. In that situation, you often want to make the object go away when the user clicks anywhere on the page but the object.

One technique I use often is to create a DIV with no content or background that covers the entire page, place it on the page with z-index above the page content and below the displayed object, and bind its onClick event handler to a method that hides the object and the DIV itself. Something like this:
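A sketch of the technique in plain JavaScript (the function name and z-index values are illustrative):

```javascript
// Show a floating object over the page, with a transparent overlay
// layer sandwiched between the object and the page content.
function showFloatingObject(overlay, floatingObject) {
  overlay.style.position = "absolute";
  overlay.style.top = "0";
  overlay.style.left = "0";
  overlay.style.width = "100%";
  overlay.style.height = "100%";
  overlay.style.zIndex = "100";        // above the page content...
  floatingObject.style.zIndex = "200"; // ...but below the floating object
  // A click anywhere outside the object hides both the object
  // and the overlay itself
  overlay.onclick = function () {
    overlay.style.display = "none";
    floatingObject.style.display = "none";
  };
  overlay.style.display = "block";
  floatingObject.style.display = "block";
}
```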

As you can see, the transparent layer (represented here in blue) gets between the object to float above the page content (here, in red) and the page itself.

So, how hard can it be, right? Just write some quick JavaScript or use CSS to set the DIV properly, handle onClick, and you are done… or not. Yes, IE. IE will screw this all up.

When I implemented this recently, I noticed that, on IE, the mouse click would go through the overlay to the page itself, clicking on links or buttons, or simply not hiding the floating object because the layer’s onClick was never triggered. Even weirder, on some areas of the page it worked, and on others it didn’t. It depended on what was below it.

After googling and some trial-and-error, I found the solution. The problem is that IE doesn’t like handling clicks on transparent objects, like DIVs with no content or background. So, the solution is… add a background. But wait, if you add a background, it won’t be transparent any more, right? Wrong. There’s a neat trick you can use: create a transparent GIF file of about 200×200 pixels and use it as the background of the DIV (the exact size is irrelevant, but if it’s too small, it will make browsers on old computers slow when tiling it to fill the whole background). IE will work because, from its point of view, there is SOMETHING there belonging to the DIV (even if it’s a transparent GIF), and the onClick will be triggered as expected.
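The fix boils down to one style assignment; a sketch, with a made-up image path:

```javascript
// IE won't deliver clicks to a fully transparent DIV, so give it a
// background image that just happens to be a transparent GIF
// (the path is illustrative)
function makeClickableInIE(overlay) {
  overlay.style.backgroundImage = "url(/images/transparent.gif)";
}
```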

I found out the hard way that the recently released third major version of Firefox has a serious issue related to the order in which the JavaScript code in a page is executed. The bug is somehow related to the files being, or not being, cached by the browser.

The Wonder Ajax framework adds a lot of Ajax goodies to standard WebObjects applications. The way a page is built is by placing in the page header all the calls to load JavaScript files. There’s a special case, which is when you need a JSON proxy. That is useful when you need to write more powerful Ajax behaviour than all the component trickery Wonder offers you. The JSON proxy will not only load the necessary JavaScript file, but also run one line of code to create the actual proxy and establish the communication with the server. So the page header will have something that looks more or less like this:
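Something along these lines (the application name, resource paths and variable name are made up for illustration):

```html
<head>
  <script type="text/javascript" src="/WebObjects/MyApp.woa/WebServerResources/prototype.js"></script>
  <script type="text/javascript" src="/WebObjects/MyApp.woa/WebServerResources/jsonrpc.js"></script>
  <script type="text/javascript">
    // The one line of code that creates the proxy and contacts the server
    var jsonProxy = new JSONRpcClient("/WebObjects/MyApp.woa/ra/JSONProxy");
  </script>
</head>
```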

When the page loads, all those scripts will be loaded and the new JSONRpcClient() call will be executed during page load, as expected.

The problem starts when you have a JavaScript method associated with the window’s onLoad event. This is done by using the onload attribute of the body element:

<body onLoad="initPage();">

The onLoad function should be triggered when the page finishes loading. This implicitly means that onLoad will run only after all the inline JavaScript code (in the page header or body) has run, because that is still considered part of the page loading process. So, in our case: first, load all the JS and create the JSON proxy, and then run the onLoad method.

This works on every browser, and also on Firefox 3 on the first page load. The problem is that most of the subsequent page loads on Firefox 3 won’t work, at least if your onLoad method depends on the existence of the JSON proxy. For some reason, when Firefox 3 has all the Javascript files already in the local cache, the onLoad event is triggered too soon, namely before the header scripts are executed and the proxy has been created. This will break all your code executed by the onLoad method that assumes the proxy is already ready to work.

I have tried several solutions for this. The only one I had success with is the following. It’s kind of dirty, but it works. The idea is simple: if the onLoad method is executed before the proxy exists, then we wait a few milliseconds and try again, until the proxy is ready to be used. So, if your method is like this:
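A sketch of the method with the retry guard added on top (the function name, proxy variable and the 100 ms interval are all guesses):

```javascript
// onLoad handler that needs the JSON proxy created by the header scripts
function initPage() {
  if (typeof jsonProxy === "undefined") {
    // Proxy not created yet: wait a few milliseconds and try again
    setTimeout(initPage, 100);
    return;
  }
  // From here on, it is safe to use jsonProxy
  // ... the original onLoad logic goes here ...
}
```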

Although AJAX techniques are now widespread among the web-developer community, there’s still something that can’t be done using pure AJAX techniques (whatever that means): file uploads. There’s no way to grab a file using JavaScript and send it to the server in an asynchronous request. I guess the main reason for this is security: you don’t want web sites to steal files from your home directory.

Anyway, people found a relatively popular way to do this. The idea is to have a hidden iframe on the page, and use that iframe as the target of the form that holds the reference to the file to upload. So, when the form submits, the result page will go to the hidden iframe, and the user won’t see a full page refresh. You still have to take care of some details, like hiding the form and showing a nice progress bar with the classic “Hold on, we are uploading” message, and polling the content of the iframe to check if the file is still on its way, or if it arrived safely.

One nice detail is where you actually put the hidden iframe. You could just put it in the HTML code of your page, but IMO that’s ugly. The iframe is an artifact that is needed just to serve as a black hole for the form submission result page. It doesn’t make sense to put it in your HTML code, as it hasn’t anything to do with the content. Also, it’s error prone: you may inadvertently delete it, causing the file upload to misbehave. Or you may need to change the iframe code, and having it in the HTML will force you to update it on all the pages where you use it (and of course, you will forget one).

So, I believe the most elegant solution (if you can use the word “elegance” in the context of this major hack) is to generate the iframe using JavaScript. You may do this on page load, when you create the form, whatever. Just do it before the user submits the file! :) Well, it’s easy, right? Something like this does the trick:
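A sketch of the straightforward version (the frame name is the one mentioned later in this post; the function name is made up):

```javascript
// Create the hidden iframe that will swallow the form submission result
function createUploadTarget() {
  var iframe = document.createElement("iframe");
  iframe.name = "fileUploaderEmptyHole"; // the form's target attribute points here
  iframe.style.display = "none";         // keep the black hole invisible
  document.body.appendChild(iframe);
  return iframe;
}
```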

That’s cool, right? Just run this and the iframe will be created. Submit the form, the result goes in the hidden iframe, everything works, we are done, let’s go home. Well… all true until you actually test it on Internet Explorer…

If you test this on IE, the result will be a new window being created when you submit, which is clearly not what you want. Well, I had a hard time finding the problem, so here it goes: if you create the iframe using JavaScript, IE won’t set its name. Yes, the “iframe.name = ‘fileUploaderEmptyHole';” line will simply be ignored. It does nothing. So, as you don’t have any frame called ‘fileUploaderEmptyHole’, when you submit the form, it will create a new window named ‘fileUploaderEmptyHole’ and display the result there.
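The IE workaround is to bake the name into the createElement() string itself; a sketch (function name made up):

```javascript
// IE-only trick: pass the whole tag, name included, to createElement(),
// since assigning iframe.name afterwards is silently ignored by IE
function createUploadTargetIE() {
  var iframe = document.createElement('<iframe name="fileUploaderEmptyHole">');
  iframe.style.display = "none";
  document.body.appendChild(iframe);
  return iframe;
}
```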

Yeah! Now you’re thinking “WTF?”. Yes, yes, it’s true. This actually works on IE, with the expected (?) results. Well, you still have to support the other browsers, but you are lucky, as this will throw an exception on all the non-IE browsers. So, it’s just a matter of catching it and running the decent version of the code:
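Putting the two together, the final version might look like this (again, the function name is illustrative):

```javascript
// Create a named, hidden iframe that works on IE and everywhere else.
// IE accepts the name inside the createElement() string; every other
// browser throws on that, so we catch it and run the decent version.
function createNamedIframe(name) {
  var iframe;
  try {
    // IE path: name baked into the tag string
    iframe = document.createElement('<iframe name="' + name + '">');
  } catch (e) {
    // Standards-compliant path
    iframe = document.createElement("iframe");
    iframe.name = name;
  }
  iframe.style.display = "none";
  document.body.appendChild(iframe);
  return iframe;
}
```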

Those who code JavaScript know that the BODY tag has two cool attributes where we can hook methods to be run when the page loads and unloads. These are the standard onLoad and onUnload attributes.

onLoad does what you expect – it runs whatever the attribute is set to after the entire page is loaded (or, at least, all the HTML and JavaScript files).

onUnload is a little more tricky. When you have a function hooked to the onUnload attribute and you click a link, the function will actually be executed after the browser opens the request for the new page. This can be catastrophic in many ways. Imagine you are on a page where you can edit some information in an asynchronous way. You edit all the stuff you want, and the browser only sends the data to the server from time to time, and when you leave the page.

Now imagine that the next page reflects the data changes you just made. You would have some kind of AJAX call on the onUnload attribute that would send the final changes to the server. But those changes will be sent after the browser opens the http request for the next page, and actually gets it from the server. You’ve got a problem: the next page won’t reflect those last updates, because it was generated before they were sent to the server!

Happily, there is a solution. Someone (I think Microsoft, but I’m not sure) implemented the onBeforeUnload event. As the name suggests, this is a hook to a method that will be executed before the page unloads, and before the http request for the next page is opened. This is a non-standard attribute, but it’s implemented in all the major browsers (IE, Firefox and Safari). And it’s really useful!
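For the editing scenario above, the hook might be wired up like this (both function names are illustrative):

```javascript
// Install an onBeforeUnload handler that flushes any pending edits
// BEFORE the browser opens the http request for the next page, so the
// next page is generated after the server has the latest data.
function installBeforeUnloadHook(win, flushPendingChanges) {
  win.onbeforeunload = function () {
    flushPendingChanges(); // e.g. one last Ajax call with the unsent edits
  };
}
```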