Where it got interesting was when they did a Matt Cutts-style site review.
When we signed up, we were asked to submit a site for which we were the webmaster. The panel from the Google Search Quality team worked through half a dozen or so of these, pointing out areas that could be optimized. I made a few notes about the things I learned:

Use web-sniffer.net to check the HTTP headers of a URL. This will help you determine if your redirects are functioning as expected. The Google Search Quality panel pointed out a few sites which had "soft 404s", i.e. error pages that were not in fact served with a 404 status, such as this one from the Surrey Police, which web-sniffer reveals returns a 200:

"Soft 404" page on the Surrey Police website with a nice customized Sherlock Holmes message: "Apologies, we will send out the search dogs to look for the missing page."
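To make the soft-404 idea concrete, here's a minimal, self-contained sketch of what web-sniffer.net is checking. It spins up a throwaway local server (purely hypothetical, standing in for a misconfigured site) that serves an error *page* with a 200 status, then fetches it and reads the status code:

```python
# Sketch: a "soft 404" is an error page that returns HTTP 200 instead of 404.
# This local toy server stands in for a misconfigured site; nothing here is
# the Surrey Police site or web-sniffer itself.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Soft404Handler(BaseHTTPRequestHandler):
    """Serves an error page body, but with a 200 status -- a classic soft 404."""
    def do_GET(self):
        self.send_response(200)          # should be 404!
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Sorry, we can't find that page</h1>")

    def log_message(self, *args):        # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Soft404Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/no-such-page"
status = urllib.request.urlopen(url).status
print(status)  # 200, even though the page is an error page -> soft 404
server.shutdown()
```

Checking the status code rather than the visible page is the whole trick: to a user both pages look like errors, but to a crawler the 200 says "this page exists, index it."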

In Chrome you can type cache:URL into the address bar to see the cached version of a page in Google's index. For example, cache:http://www.domeheid.com shows the cache of this blog. This is a good way to see how recently your site has been indexed. It's also a good way to review the site structure if you view the text-only version, which gives you the opportunity to check if you have a proper h1, h2...h6 hierarchy with your headings.

Use a 301 redirect for www and non-www canonicalization. Some of the sites we looked at weren't redirecting from the non-www version of their page (e.g. example.com) to the www version (e.g. www.example.com). I knew this already, but it reminded me that this should be one of the first things you check in an SEO audit, as it splits any link juice between the two versions of the domain. I need to follow this up with a few of my clients...
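For reference, one common way to set this up on Apache is a couple of mod_rewrite lines in .htaccess. This is just a sketch using the placeholder example.com; adapt the domain (and test it with web-sniffer or curl -I to confirm the 301):

```apache
# Sketch: redirect non-www to www with a permanent (301) redirect.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The R=301 flag is what matters for SEO: a permanent redirect tells Google to consolidate the link equity onto the www version rather than splitting it.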

There's no penalty for duplicate content. The Google team refuted the SEO myth that Google imposes any penalties on duplicate content from the same domain. Google merely hides duplicate content in its SERPs so that users aren't served with multiple versions of the same page (see the screenshot below). If, however, you are a content farm scraping (i.e. plagiarizing) content from other domains and nesting it amongst affiliate links and ads on your own domain, that's another matter. (Read more about the Google Panda / "Farmer" update, which was announced on the Google Webmaster Central Blog on 24 February.)

The duplicate content message displayed in Google SERPs that hides repetitive results from users (with the option to display them if desired).

For cross-posting use rel="canonical". If you are cross-posting content on multiple sites, use a rel="canonical" link element (it goes in the head of the page, not in a meta tag) to identify the original version of the post. For example, one member of the audience was republishing blog articles on his site that were originally published on other blogs, with the authors' permission - a type of syndication. I'm not entirely sure everyone understood each other on this question, so be wary about taking this advice as gospel.
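In case it helps, this is what the tag looks like in the head of the syndicated copy. The URL here is a made-up placeholder for wherever the original post lives:

```html
<!-- Sketch: on the republished copy of a post, point back to the original.
     The href is a hypothetical placeholder, not a real address. -->
<link rel="canonical" href="http://original-blog.example.com/post-title" />
```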

Use the source attribution meta tag for syndicated news content. This came up during the discussion of the previous point about syndicating blog content. At the moment the source attribution meta tag is aimed at Google News publishers.
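As I understood it at the time, the source attribution tags announced for Google News looked like the snippet below. I haven't verified these names against current documentation, so treat this as a sketch and check Google's own help pages before using it:

```html
<!-- Sketch of the Google News source attribution meta tags (placeholder URLs).
     syndication-source: the preferred URL of a syndicated article.
     original-source: the URL of the first article to report a story. -->
<meta name="syndication-source" content="http://example.com/original-article" />
<meta name="original-source" content="http://example.com/first-report" />
```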

Google Base. Someone in the audience who was interested in e-commerce mentioned this. I noted it down just because I hadn't heard of it before. It seems to be a way for merchants to add their product feeds to Google Product Search.

It will be interesting to see how quickly the webmasters of these sites that were named, flamed, and shamed go off to fix these issues. Maybe by the time you read this post, these SEO errors will have been rectified.

The overall impression you get from listening to Google employees is that SEO is pretty easy, common-sense stuff. There are no secrets if you follow their guidelines. Just produce good content, think about your user experience, and work your way through their SEO starter guide to make sure you're not doing anything silly. Don't try to game the system.

My main SEO checklist will be as follows:

Title tags

Description meta tags

www and non-www canonicalization with 301 redirects (also check for duplicate variants of the homepage e.g. /home.html, /index.html, /index.php, and so on).

Another thing I noted was that most of the Google team's analysis of a site started with checking the HTTP headers and running a site: search, e.g. site:http://www.domeheid.com. This is a good way to review your title tags and snippets. Looking at my own site: search, it makes me wish Blogger improved its SEO tools so that I could optimize my title tags and meta descriptions without doing it all manually in the HTML.

Incidentally, it was great that the wi-fi password was posted around on the walls at TechHub, along with the Twitter hashtag. But that's to be expected nowadays when you've got a techy audience.