October 15, 2009

I stumbled across this article on SEO the other day. While I broadly agree with him, I’d like to distance myself from Powazek’s vitriol; I don’t think everyone working in SEO is a bloodsucking vampire, systematically destroying the web for their own diabolical ends… I do, however, want to weigh in on the subject, as it’s something that we get asked about quite a bit.

Search Engine Optimisation is something that people get hung up on far too much, mostly because there are so many half-truths, outdated ideas and so much outright misinformation floating around about the subject. The current state of affairs has been brought about by spammers trying to ruin everything for the rest of us, and search providers – mainly Google – desperately trying to keep it all in check.

The problem with many SEO experts is that they engage in practices that have negligible effect on actual search rankings, and in some cases can actually get you punished by the big G.

From my run-ins with various self-proclaimed SEO experts (and they are all self-proclaimed) here is a (non-exhaustive) list of the methods they employ and why I think they’re bunk:

Meta keywords – this is worthy of a post in its own right.
I’ve lost count of the number of people I’ve talked to who are convinced that meta keywords mean a damn thing. Thankfully, Google has finally put the idea to rest. Stuffing the meta tag with keywords will most likely drop your site down the rankings, as Google will think you’re trying to trick it.
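For reference, here’s a sketch of the sort of tag we’re talking about (the keywords are invented for illustration):

```html
<!-- The meta keywords tag: Google now ignores it entirely. -->
<!-- Stuffing it like this does nothing at best, and looks spammy at worst. -->
<meta name="keywords" content="widgets, cheap widgets, best widgets,
  buy widgets online, widget shop, widgets UK, discount widgets">
```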

Stuff your content full of keywords you put in the meta tag
While it does make sense to have some keywords in your text, many SEO companies will advise you to stuff it full of them, until there are barely any words left in between to hold the sentences together. Remember, it’s your customers who buy your products, not the search engine. Text that is jammed with jargon and keywords isn’t pleasant to read and will most likely send people elsewhere.

Change your content regularly
This is usually the kicker of the whole shebang. Now, it’s true that Google likes sites that generate new content, because they’re contributing to keeping the web fresh; unscrupulous SEO experts, however, go only so far as to change a few sentences a month on pages that already exist. This isn’t too bad from a technical point of view (it’s not breaking the way the web should work), but it’s not uncommon to hear of people paying a £200+ monthly retainer for it.

Traffic shaping
The above methods range from pointless to reasonably harmless; this is where we start getting into the dark underbelly of SEO practices. Traffic shaping is done by adding rel="nofollow" to links that the optimiser doesn’t want a search engine’s spider to follow. This is an attempt to increase certain pages’ rank within their own site, and while the "nofollow" attribute does have some good uses, the practice is generally frowned upon as you’re artificially inflating the apparent usefulness of the pages in question.
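As a rough sketch (the page names here are made up), traffic shaping looks like this in the markup:

```html
<!-- The spider is invited to follow this link and pass rank on to it: -->
<a href="/products.html">Our products</a>

<!-- ...and told not to bother with this one: -->
<a href="/terms.html" rel="nofollow">Terms and conditions</a>
```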

Link Bombing
Now we’re well and truly in the dark recesses of the SEO world. This is the practice of spreading links all over the web with specific keywords attached, the idea being that search engines will eventually associate those keywords with the link. It usually involves employing third-world labour, paid pennies per link created.

Spam bots
Similar to link bombing, though these usually operate by posting comments on blogs and in online forums. These are the cases where the nofollow attribute is genuinely useful, as it effectively renders the link pointless.

So, that’s some of the nefarious practices employed by SEO agents, what about good SEO?

Powazek writes:

“The problem with SEO is that the good advice is obvious…”

Thing is, it’s not really. Good SEO practices are all wrapped up in conversations about semantics, page order and code execution.

Here’s another non-exhaustive list of what I consider to be good practice.

Header tag order
Headers should always flow down the page in order of importance: the website’s title should always use the <h1> tag, titles for posts (for example) should use <h2>, and so on. If you need more than the six levels provided by HTML, you’re doing something wrong.
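A minimal sketch of what I mean (the titles are placeholders):

```html
<h1>My Website</h1>            <!-- one h1: the site title -->
<h2>A Blog Post</h2>           <!-- each post title one level down -->
<h3>A Section of the Post</h3> <!-- and so on, never skipping levels -->
<h2>Another Blog Post</h2>
```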

Content at the top
It’s important to get the content of the document as close to the top of the HTML code as possible for two reasons.
1) It makes life easier for blind users: if you have to listen to a screen reader walk you through a bunch of unrelated garbage before you get to the content, it’s just unfair.
2) It ensures that when a search engine takes a cache of your page it actually captures the important bit first.

Menus at the bottom (of the code)
This is related to point 2: the navigation bar should always go at the bottom of the code. It saves blind users having to listen to the options over and over like an infernal telephone menu stuck on repeat, and it also means Google doesn’t return your menu in its search results. So long as your web designer knows what they’re doing (and I do), the menu will still appear at the top (or wherever you want) of the actual page.
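Here’s a stripped-down sketch of the idea, assuming a simple absolutely-positioned menu (your designer will have their own preferred technique):

```html
<!-- Content comes first in the source... -->
<div id="content">
  <h1>My Website</h1>
  <p>The important stuff, right at the top of the code.</p>
</div>

<!-- ...and the menu comes last, hoisted to the top of the page with CSS: -->
<ul id="menu" style="position: absolute; top: 0;">
  <li><a href="/">Home</a></li>
  <li><a href="/about.html">About</a></li>
</ul>
```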

Use semantically correct tags
This is a point that’s going to become increasingly important over the next few years as the web moves towards a true relationship-based network of pages. It means using elements for what they’re intended: a list element for a list, rather than faking one with carriage returns.
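For example, the wrong way and the right way to mark up the same list:

```html
<!-- Not really a list, just text with forced line breaks: -->
Apples<br>
Oranges<br>
Pears<br>

<!-- The same content as an actual list element: -->
<ul>
  <li>Apples</li>
  <li>Oranges</li>
  <li>Pears</li>
</ul>
```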

Write good content
Which is hopefully what I’ve been doing here. What’s the point of having a beautiful, semantic, optimised website, when there’s nothing worth looking at when you get there?

So there you have it. I don’t completely disagree with Powazek’s position; I just think that instead of bleating on about how ‘obvious’ it all is, he could at least have touched on why he thought it was all so obvious.