One issue I've always had with typical templating methods over the years is the waste of bandwidth. Even with smart functionality on the backend combining the dynamic and static information on each page, you're still sending an entirely new HTML page for each URL the client requests, with a good portion of its content duplicated from the last page they were on. If there were a way to send only the new information - the body of the page - it would save bandwidth and could provide a better user experience.

I had the opportunity to create such a system for one of my previous clients. They had specifically requested a Flash-like loading feature on each of their pages, so I suggested using AJAX to load the data and refresh the page content. This system raised several problems I had to address, though.

Links would be controlled by Javascript, not the browser. Since the address bar wouldn't change between pages, bookmarking and history wouldn't work correctly.

Non-Javascript browsers (robots) wouldn't see the content.

Users might get confused by the lack of 'new page flash'.

To solve these problems, I had to build the backend in a particular way before starting on the Javascript. I created a single static xHTML template that included all of the links, header, and footer information. I then made a controller for the dynamic information (the meta information and the page content), which I output as an xml document. Based on the URL, this controller would pull the information from the database that was unique to the page. A PHP script combined the xml object with the static xHTML page, creating a unique web page for each URL. This seemed to me like an optimal templating solution in general, but it also works great for an AJAX-powered website.

The next step was creating the Javascript to control the frontend. I chose to work with jQuery for its excellent AJAX and DOM manipulation capabilities. The URL structure for this project was simple - http://{domain name}/{page name}/. I needed to change the links into something that both Javascript and the browser could use, so I placed a hash mark in front of each {page name}. I could have done a 'return false' within the Javascript instead, but chose this approach to show savvy users how the page worked.

function hashLocationFix(url) {
    url = url.split('/');
    if(url[url.length - 2] == '{domain name}') {
        // A bare domain: point the empty last segment at the default page
        url[url.length - 1] = '#home/';
    }
    else if(url[url.length - 2].substr(0,1) != '#') {
        // Prefix the page name with a hash so the browser stays on the page
        url[url.length - 2] = '#' + url[url.length - 2];
    }
    return url.join('/');
}
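To illustrate what the function above does, here is a quick standalone demo, using the hypothetical domain example.com in place of {domain name}:

```javascript
// Same logic as above, with a concrete domain substituted for the placeholder.
function hashLocationFix(url) {
    url = url.split('/');
    if(url[url.length - 2] == 'example.com') {
        url[url.length - 1] = '#home/';
    }
    else if(url[url.length - 2].substr(0,1) != '#') {
        url[url.length - 2] = '#' + url[url.length - 2];
    }
    return url.join('/');
}

console.log(hashLocationFix('http://example.com/'));       // http://example.com/#home/
console.log(hashLocationFix('http://example.com/about/')); // http://example.com/#about/
```

In the page itself, every internal link's href (and window.location.href) gets passed through this function.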

I looped this function over all internal links on the page. The hash mark usually tells the browser to 'scroll to the element with this id on the page'; when no element has that id, the browser simply stays where it is. I also apply the function to window.location.href, so that the site is fully Javascript-controlled as soon as the user visits it.

Once I had the links powered, I still had to initiate the AJAX call. Since there's no event listener you can attach to the address bar, I used a timer that checks for a change every half-second. This is less error-prone and achieves the same result.

function hashChecker() {
    // Compare the stored hash against the address bar on every poll
    if(hash != getHash(location.hash)) {
        hash = getHash(location.hash);
        loadContent(hash);
    }
}

Please note: getHash is just a simple function that pulls the last word from the URL. hashChecker checks whether there has been any change to the address bar, whether from the user typing or from clicking a link.
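The original getHash isn't shown here, but a minimal version, wired up to the checker, might look like the sketch below. The stand-in location object and the stubbed loadContent are my additions so it can run outside a browser; in the real page you'd use the browser's own location and the AJAX loadContent shown further down.

```javascript
// Stand-in for the browser's location object, so this sketch runs anywhere.
var location = { hash: '' };
var loaded = [];  // records which pages were "loaded" by the stub

// One possible getHash: pull the page name out of a hash like '#about/'.
function getHash(h) {
    return h.replace(/^#/, '').replace(/\/$/, '');
}

// Stub; the real loadContent performs the AJAX call shown below.
function loadContent(page) {
    loaded.push(page);
}

var hash = getHash(location.hash);  // the hash we last acted on

function hashChecker() {
    if(hash != getHash(location.hash)) {
        hash = getHash(location.hash);
        loadContent(hash);
    }
}

// Simulate the user clicking a hash link, then one poll firing:
location.hash = '#about/';
hashChecker();  // the stub records 'about'

// In the real page: setInterval(hashChecker, 500);
```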

Now we have a function that makes all links Javascript-powered and checks when to pull new content. Because of the hash mark, every page is bookmarkable and recorded in the browser history. Depending on your needs, your AJAX call (mine is called loadContent) may differ, but this one simply requests the dynamic PHP script with the hash value (which matches the GET value passed by the mod rewrite) to get an xml file, passes it to the frontend, and parses it out. Below is the script I used.

function loadContent(page) {
    // Empty out the dynamic regions of the page
    var clearElements = [
        'left_column_header',
        'left_column_content',
        'right_column_header',
        'right_column_content'];
    $.each(clearElements, function() {
        $('#' + this).text('');
    });
    // Show the loading animation while the request is in flight
    $('#loading').show();
    $.ajax({
        'type': 'GET',
        'url': 'processor.php',
        'data': ({'page': page}),
        'dataType': 'html',
        'success': function(xml) {
            $('#loading').hide();
            // Each child of <content> is named after the element it fills
            $(xml).find('content').contents().each(function() {
                var content = $(this).html();
                // Strip the leading CDATA marker from the markup...
                if(content.search('DATA') >= 0) {
                    content = content.substr(11, content.length);
                }
                // ...and the trailing CDATA terminator
                if(content.search(']]') >= 0) {
                    content = content.substr(0, content.length - 6);
                }
                $('#' + this.tagName.toLowerCase()).html(content);
            });
        }
    });
}

Please note: the DATA search is for CDATA, which is how I pass the fully formed xHTML up from the backend. For a loading icon - and to give users a clear 'the page is changing now' message - I used an animated GIF in a hidden 'loading' div that only shows while the AJAX request is working. I actually have a sleep function in my backend PHP script to keep the icon up for a respectable amount of time!
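For reference, a response from processor.php along the lines the script above expects might look like this (a hypothetical example with placeholder markup; the element names match the ids cleared at the top of loadContent):

```xml
<!-- Hypothetical response for processor.php?page=about -->
<content>
    <left_column_header><![CDATA[<h2>About Us</h2>]]></left_column_header>
    <left_column_content><![CDATA[<p>Placeholder body copy.</p>]]></left_column_content>
    <right_column_header><![CDATA[<h2>Contact</h2>]]></right_column_header>
    <right_column_content><![CDATA[<p>Placeholder contact info.</p>]]></right_column_content>
</content>
```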

Now, if a non-Javascript browser hits the site, it will surf the same content that a Javascript browser would, with almost the same URL on each page (just a hash mark away). Search engines will index these pages as separate files - but when a Javascript browser views the site, we turn on the AJAX and change the URLs to use the hash. The only real downside is link indexing: if a typical user links to the site using an AJAX URL (with a hash), a search engine will interpret it as a link to the home page, not the individual side page. This is a fairly small risk, as most links will point to the main page anyway.

I don't recommend this for most sites - after all, it takes a fair bit of workaround to accommodate search engines, which usually doesn't end well. It worked great for this project, though, and could be used on just about any small- to medium-sized website. Also, by offloading some of the processing to the client, I ended up saving quite a bit of bandwidth with this templating system!