How To Use ImportXML in Google Docs

Put me in front of a Mac and it’s almost as if I never learned to use a computer. Put me in front of Google Spreadsheets and all of the time I’ve spent working with Excel feels a little like time wasted, and not in a good way. I’m just not very used to a spreadsheet that isn’t Excel.

Unafraid of a challenge, I recently decided to give Google’s (exceptional) importXML, importFEED and importHTML functions a try – they let you fetch information from the web and pull the data you need straight into a spreadsheet. Mostly to make an interesting blog post, but partly out of envy that Excel doesn’t have these functions.

It’s frustrating trying to get XML data into Microsoft Excel – unless you’ve got the time and patience to build some basic macros or VBScript for your requirements. With Google Docs, it’s really easy.

A few resources

If you want to use Google Docs to extract data from the web, it would be a good idea to learn a little XPath. “XPath is used to navigate through elements and attributes in an XML document” – or, in simple terms, you can use XPath to fetch little bits of data contained in structured elements like <span>, <div>, links, or pretty much anything, really.
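To make that concrete – this is just an illustrative formula of my own, not one from the example spreadsheet, and example.com is a placeholder – an ImportXML call that fetches every link URL from a page looks like this:

=ImportXML("http://example.com", "//a/@href")

The second argument is the XPath: it says “find every <a> element, anywhere in the document, and return its href attribute”.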

Also, there are a few people who have been doing this a while, and probably have sample spreadsheets that blow some of the examples below away – but you have to start somewhere, right? If you’re already an importXML / Google Docs Ninja, maybe go and find something else to do instead of reading this post.

If you’re interested, I made a Google Docs Spreadsheet with all of the examples below: http://bit.ly/9Fs7aF

Does anyone know?

“Does anyone know” is such an interesting search on Twitter – combine that query with a keyword like “restaurant” and a location, and you’ve found everyone on Twitter looking for a very specific thing. Great if you happen to be trading in that thing.

Try a query like this to pull through results from the Twitter search RSS feed:
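The exact formula isn’t preserved in this version of the post, but it was along these lines – Twitter’s old search service exposed results as an RSS/Atom feed, so treat the URL below purely as an illustration of the pattern rather than a working endpoint:

=ImportFeed("http://search.twitter.com/search.atom?q=%22does+anyone+know%22+restaurant+london", "items title", FALSE, 20)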

A second formula in the example spreadsheet gives you the number of followers on a Twitter profile. I added together the total followers that my SEO team have (that’s the three of us) for kicks. I wonder how long it will be before someone totals up the follower counts for all UK agencies? And I wonder if there’s a correlation between that data and turnover…
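That follower-count formula isn’t reproduced above either; a purely hypothetical stand-in (Twitter’s profile markup has changed many times since, so both the handle and the XPath below are guesses, not the original) would be something like:

=ImportXML("http://twitter.com/yourhandle", "//span[@id='follower_count']")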

Pull price data from the web

I think that, after some mild haranguing, Will might have purchased himself a pair of Etymotic headphones. Perhaps my pitch would have gone a little more smoothly with some XPath and Google Product Search:
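The original formula is missing from this version of the post; a hedged sketch of the idea – the product search URL and the price element’s class are both assumptions on my part, not taken from the original – would look something like:

=ImportXML("http://www.google.com/products?q=etymotic+er6i", "//b[@class='price']")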

For something like this, a way smarter approach to get pricing data from Amazon would be to use their API – but you get the point with this brief example.

Get all of your (competitors’) URLs from their sitemap

I mentioned doing this with Excel to find orphaned pages, but you can have a lot more fun with importXML. For one, theoretically you could go off and fetch all keywords contained in the <title> tag of each of the URLs – an instant keyword strategy!
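A sketch of the sitemap trick (example.com is a placeholder; the local-name() workaround is there because sitemap files declare an XML namespace that trips up a plain //loc query):

=ImportXML("http://www.example.com/sitemap.xml", "//*[local-name()='loc']")

With the URLs sitting in column A, a second formula such as =ImportXML(A2, "//title") would then fetch each page’s <title> – bearing in mind the 50-query limit one of the commenters mentions below.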

Pull link data from Blekko

With a query like this: =ImportXML("http://blekko.com/ws/http://builtvisible.com/+/links+/rss", "//link")

Blekko is everyone’s favourite new SEO tool, and fair enough, it is quite cool. As Blekko are happy to push their data out via RSS, we’re able to pull it into our spreadsheets with ImportXML (to be fair, this is really easy with Excel too, unless you’d like to create multiple columns with different domain queries).

More Blekko – link data tables

Blekko have a feature that allows for a pretty insightful breakdown of their SEO data on your domain. If you want to pull some of that through into Google Docs, no problem:
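The formula itself isn’t included in this version of the post; a hedged guess at its shape – the URL, the /seo slashtag path and the table index are all assumptions – would be an ImportHtml call that grabs the first table from Blekko’s SEO view of a domain:

=ImportHtml("http://blekko.com/ws/builtvisible.com+/seo", "table", 1)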

Have fun

This wasn’t a particularly “advanced” post – I did quite enjoy thinking about what to do next with this data, though. Fetch IP addresses, WHOIS details, root domain links or keyword research data with Google Suggest, the Alchemy API, or plain scraping of your competitors’ home pages. If you’re using importXML, I’d really like to hear how.
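To give one hedged sketch of the Google Suggest idea (the toolbar-output suggest endpoint is undocumented and could change or disappear, so treat this as illustrative): it returns a <suggestion> element per suggested query, with the text in a data attribute that ImportXML can pick out with an attribute XPath:

=ImportXML("http://suggestqueries.google.com/complete/search?output=toolbar&hl=en&q=import+xml", "//suggestion/@data")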

Anyway, as I mentioned earlier, please feel free to take the queries from here: http://bit.ly/9Fs7aF – improve them, and let me know what you did.

A little update

I got in touch with my friend Tom from Distilled to see if he wanted to contribute. He’s been out in Vegas, but came back with a tip to solve the problem of Google caching a result for around two hours at a time:

Google Docs will cache a URL for ~2 hours, so if you want to crawl a URL more often than that you need to add a modifier to the URL.

I use int(now()*1000) to generate a unique timestamp and then add that into the URL in a dummy query string. E.g.
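Tom’s original example isn’t reproduced here, but the idea looks something like this (example.com stands in for whatever URL you’re actually fetching, and “cachebust” is just a dummy parameter name – the target server ignores it, but Google Docs sees a different URL each time the sheet recalculates):

=ImportXML("http://www.example.com/page.html?cachebust="&INT(NOW()*1000), "//title")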

Comments

Nice article – pretty straightforward, but still good to get some ideas of what you can do. And you can always export to Excel.

Have you taken a look at the Google Refine product? I have been playing with it, but a lack of memory is causing me issues. It’s quite good at quickly filtering data or looking for trends, and you can pull data into it as well.

Something else to have a look at is DataSift (from the team at TweetMeMe), as that looks to open up a lot of Twitter mashing possibilities.

Hey Matt – definitely. I also think there’s a ton of mileage in Yahoo Pipes (which, unless I’m mistaken, will happily export XML that can be imported into Google Docs). I’ve got a few macros and VBScripts to do these things in Excel, but it’s quite amazing how much easier it is in Google Docs. Horrible software, mind you.

Not that easy – if you want to form multiple columns, concatenating different queries to form varying URLs for the appropriate XML response is still a bit of a pain! You have to create a data file and it’s such a mess around compared to Google Docs. If you have an example though – upload the file and let’s take a look. I’d be delighted to learn!

I coincidentally tried a few of these a few weeks ago. I generated a sitemap for a site and stripped out everything until just the URLs were left in Excel.

I then scraped the URLs for title tag and meta description details, which all worked well…

The only thing is, the site has 5,000+ pages.

I was hoping that I could somehow mass edit some title tags in a marathon manner…

Unfortunately GDocs doesn’t support much pasting into the spreadsheet and only supports 50 =importxml queries…

Is there any way to use GDocs to reference the XPath code and then create a fill-down-like instance that will affect a sequence of, say, 500 cells in a column? Otherwise it’s pointless and I’ll have to learn PHP, RoR and regular expressions – and I don’t want to do that yet. Life is too short!

Whilst I’m here – does anyone find the XPath tools at liquidXML any good for these SEO scraping functions?

Hmmm, does anyone know if GDocs is being flaky for scraping nowadays? Just tried re-running a little GDocs scraping project that I created months ago.

I’m scraping the SERPs using importXML. Now I get the SERP results in GDocs – but when I paste these into Excel it does something weird and encodes everything.

It used to work a treat a few months back. I could paste the cells into Excel exactly as the GDocs spreadsheet displayed them.
Now it seems to concatenate URL results and add weird encoded characters – I’ve tried Paste Special etc. – is GDocs defunct for scraping now?

@Ryan Boots Just had a play with that xpathbuilder – really neat and intuitive – it’s a bit like something I created for Google Docs bulk scraping. Does the Bing search return 100 results?
Re. =importxml("http://www.bing.com/search?q=kiss+my+ass&count=100", "//div[@class='sb_tlst']//h3//a/@href")

Thank you. Embarrassed to say I am a very late starter with XPath and Google Docs, and having found this post, you give some fantastic examples of how to scrape data and use it in a clever way. I have been reading the posts from the guys at Distilled on using Google Docs to perform quick checks for rankings in the SERPs, but it seems I am too late and they have now stopped that function from working. Still, it’s great to gain exposure to this and get ideas on how to use these tools – thank you.