XMLHttpRequest is subject to the browser’s same-origin policy and will only load content from the same origin as the page it’s on. To get around this, you can use a server-side language like PHP to fetch the page and then echo/print it back out (basically a proxy).
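The proxy idea can be sketched like this (shown here in Node rather than the PHP the comment mentions; the host whitelist and names are assumptions, added because an open proxy would be a security hole):

```javascript
// Sketch of a same-origin workaround: a server-side proxy.
// The whitelist below is a hypothetical safety measure.
const ALLOWED_HOSTS = ['example.com'];

function isAllowedTarget(urlString) {
  try {
    return ALLOWED_HOSTS.includes(new URL(urlString).hostname);
  } catch (e) {
    return false; // not a parseable URL
  }
}

// A request handler would then fetch the allowed page and echo it back,
// e.g. with Node's built-in http module:
//   http.get(target, upstream => upstream.pipe(res));

console.log(isAllowedTarget('http://example.com/page.html')); // true
console.log(isAllowedTarget('http://evil.example.net/'));     // false
```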

Loading a whole page but using only a part of it: although this is possible with jQuery, I won’t ever use it. I would load only the data I want. Any extra data loading sounds like bad architecture to me.

If you need to load content from a page that you also show in the browser as a “regular” web page, you’re better off adding some server-side logic that returns only the relevant content when the request comes in via XHR (in most cases it’s a single if block).
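That “one if block” can be sketched like so (Node flavor here for illustration; jQuery sets the X-Requested-With header on its Ajax requests, and Node lowercases incoming header names):

```javascript
// Detect an Ajax request by the X-Requested-With header that jQuery
// adds to its XHR calls.
function isXhr(headers) {
  return (headers['x-requested-with'] || '') === 'XMLHttpRequest';
}

// In a request handler, the "one if block" would look like:
//   if (isXhr(req.headers)) res.end(fragmentHtml); // just the content block
//   else                    res.end(fullPageHtml); // the whole page

console.log(isXhr({ 'x-requested-with': 'XMLHttpRequest' })); // true
console.log(isXhr({}));                                       // false
```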

But loading a whole page via XHR when you need only one or two blocks from it is counterproductive. You transfer extra content instead of trimming the response size, which is what would actually speed up loading.

Please note that this technique does load the entire page; jQuery then parses it and returns just the part matching the selector. So I’d call this trick a “quick’n’dirty” solution that definitely wastes bandwidth and browser resources ;)

Ah ha, so nothing to do with functionality but everything to do with crawlers and search engines.

Hmmm, crawlers generally use the href attribute of links to index sites. So technically, using a hash for each link might cause more problems, right?

Having the page that just loaded show up in the URL bar doesn’t necessarily mean better SEO, as most crawlers I have encountered look for the href attribute in the HTML and follow that.

The only real reason I can see in using a hash for the URL would be for the end user.

The reason the hash is there is actually not SEO; it’s to maintain the use of the back button. Rule #1 when Ajaxing a site is not to break the browser’s functionality, which your code would do by making the back button load the wrong page.

Say I was at Google and clicked through to a site using your code, then clicked some dynamic content links. If I then wanted to go back to some previous content on your site and hit my browser’s back button, I’d be back at Google.

By making a.click() trigger a hash change, we update the browser’s history with that hash. Then we listen for the hashchange event on the page, and when the hash changes, whether from a clicked link or from back/forward navigation, the site acts in a consistent manner.
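That pattern can be sketched as below; the hash-to-URL mapping and element names are hypothetical, and the browser wiring is left as comments since it needs a DOM:

```javascript
// Hypothetical mapping from a location.hash value to the page to fetch.
function pageForHash(hash) {
  const name = hash.replace(/^#\/?/, '') || 'index';
  return name + '.html';
}

// Browser wiring (sketch): listening for hashchange means both link
// clicks and back/forward navigation go through the same code path.
//   $(window).on('hashchange', function () {
//     $('#content').load(pageForHash(location.hash) + ' #content > *');
//   });

console.log(pageForHash('#about')); // "about.html"
console.log(pageForHash(''));       // "index.html"
```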

I sent this trick to Chris because of the video Jan-Marten mentioned – you should definitely check that out. The difference between this method and Chris’ method from that video is that his requires a #guts div to contain the content you’re trying to load. This method does not, and it lets you write purely semantic HTML without adding superfluous elements for the sake of the JavaScript.

Once again, very timely. I’m building a website and have been using the Dynamic Page/Replacing Content (http://css-tricks.com/dynamic-page-replacing-content/) item to have a site that loads the background, navigation, etc only once and then loads the content of the rest of the site independently.

But I’ve run into trouble when the content I’m loading has other jQuery goodies like tabs or a content slider in it. Somehow it all turned into a train wreck and stopped working. Maybe this technique will solve that…

I came across this as a quick solution while working with jQuery, and I think it’s quite a useful trick, because when you get into a situation where nothing else works, this can be a real savior. It’s something you pick up with experience.

“With jQuery, you can load not just the contents of a URL, but a specific CSS selector from within that URL”…

Although, let’s not forget that AJAX rocks because it can make your pages work better and faster for the end user. If you load an entire page using AJAX only to take one div from that massive page, you are wasting everybody’s bandwidth and time!

Not to say this isn’t a great article though as it is a really handy trick – but please don’t do anything crazy with this knowledge.
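For reference, the quoted trick is jQuery’s .load() called with a “url selector” string, which jQuery splits at the first space. A rough sketch of that split (the behavior is documented; this helper is not jQuery’s actual source):

```javascript
// How .load("page.html #container") is interpreted: the argument is
// split at the first space into a URL and a selector.
function splitLoadTarget(arg) {
  const i = arg.indexOf(' ');
  if (i < 0) return { url: arg, selector: null };
  return { url: arg.slice(0, i), selector: arg.slice(i + 1) };
}

// In the browser you would simply write:
//   $('#result').load('page.html #container');

console.log(splitLoadTarget('page.html #container'));
// { url: 'page.html', selector: '#container' }
```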

Chris, thanks for this. I’m working on a project with some pretty vanilla Ajax (pretty newb at Ajax), and this is just what I was looking for to make it better. I’m encountering issues when I want to run JavaScript from the Ajaxed page – could your idea here be part of a solution?

Everyone keeps saying to avoid loading the whole page if you only need a small part. The question I have is:

What is the best way to do this? Post a variable to test for in PHP? Separate the area you need into a new file that is included in the original file (server side) and send the Ajax request there? Or none of the above…

I have a page/application that relies much more heavily on this jQuery function than I would like. If I can fix it up to use less bandwidth, that would be great; I just don’t know the best way to get it done. Thanks in advance for any advice.

Pretty cool, though I just found out the hard way that it doesn’t actually “grab all the contents of #area”; instead it selects every DOM node matched by the “> *” expression, which means element children only. So, for instance, if your markup is “Lorem Ipsum <span>Content</span>”, only “Content” will be selected, and the bare text nodes are dropped.
Pretty logical when you think about it, but it took me a while to figure this out.
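The distinction above can be illustrated without a browser by filtering on DOM nodeType codes (1 = element, 3 = text, per the DOM spec); the plain objects below are stand-ins for real DOM nodes:

```javascript
// Why "#area > *" drops bare text: the selector matches element
// children only, never text nodes.
const ELEMENT_NODE = 1, TEXT_NODE = 3;

function elementChildrenOnly(childNodes) {
  return childNodes.filter(n => n.nodeType === ELEMENT_NODE);
}

// Imagine #area containing: Lorem Ipsum <span>Content</span>
const childNodes = [
  { nodeType: TEXT_NODE, value: 'Lorem Ipsum ' },
  { nodeType: ELEMENT_NODE, value: 'Content' },
];

console.log(elementChildrenOnly(childNodes).map(n => n.value)); // [ 'Content' ]
```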