Author: Webcraft Support

Meta tags were originally intended as a proxy for information about a website’s content. Several of the basic meta tags are listed below, along with a description of their use.

Meta Robots

The Meta Robots tag can be used to control search engine crawler activity (for all of the major engines) on a per-page level. There are several ways to use Meta Robots to control how search engines treat a page:

index/noindex tells the engines whether the page should be crawled and kept in the engines’ index for retrieval. If you opt to use “noindex,” the page will be excluded from the index. By default, search engines assume they can index all pages, so using the “index” value is generally unnecessary.

follow/nofollow tells the engines whether links on the page should be crawled. If you elect to employ “nofollow,” the engines will disregard the links on the page for both discovery and ranking purposes. By default, all pages are assumed to have the “follow” attribute.
Example: <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">

noarchive is used to restrict search engines from saving a cached copy of the page. By default, the engines will maintain visible copies of all pages they have indexed, accessible to searchers through the cached link in the search results.

nosnippet informs the engines that they should refrain from displaying a descriptive block of text next to the page’s title and URL in the search results.
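
These directives can be combined in a single tag in the same way as the noindex/nofollow example above; the particular combination shown here is only an illustration, not a recommended default.
Example: <META NAME="ROBOTS" CONTENT="NOARCHIVE, NOSNIPPET">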

noodp/noydir are specialized tags telling the engines not to grab a descriptive snippet about a page from the Open Directory Project (DMOZ) or the Yahoo! Directory for display in the search results.

The X-Robots-Tag HTTP header directive also accomplishes these same objectives. This technique works especially well for content within non-HTML files, like images.
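
For instance, on an Apache server with mod_headers enabled, the X-Robots-Tag header can be attached to non-HTML files from an .htaccess file. The file extensions and directive values below are purely illustrative:

<FilesMatch "\.(pdf|png|jpg|jpeg)$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>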

Employ empathy

Place yourself in the mind of a user and look at your URL. If you can easily and accurately predict the content you’d expect to find on the page, your URL is appropriately descriptive. You don’t need to spell out every last detail in the URL, but a rough idea is a good starting point.

Shorter is better

While a descriptive URL is important, minimizing length and trailing slashes will make your URLs easier to copy and paste (into emails, blog posts, text messages, etc.) and more likely to be displayed in full in the search results.

Keyword use is important (but overuse is dangerous)

If your page is targeting a specific term or phrase, make sure to include it in the URL. However, don’t go overboard by trying to stuff in multiple keywords for SEO purposes; overuse will result in less usable URLs and can trip spam filters.

Go static

The best URLs are human-readable and free of excessive parameters, numbers, and symbols. Using technologies like mod_rewrite for Apache and ISAPI_Rewrite for Microsoft IIS, you can easily transform a dynamic URL like this: moz.com/blog?id=123 into a more readable static version like this: moz.com/blog/google-fresh-factor. Even single dynamic parameters in a URL can result in lower overall ranking and indexing.
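
As a rough sketch of the mod_rewrite approach (the script name blog.php and the slug parameter are hypothetical; your application would still need to look the post up by its slug), an .htaccess rule could look like this:

RewriteEngine On
RewriteRule ^blog/([a-z0-9-]+)/?$ /blog.php?slug=$1 [L,QSA]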

Use hyphens to separate words

Not all web applications accurately interpret separators like underscores (_), plus signs (+), or spaces (%20), so instead use the hyphen character (-) to separate words in a URL, as in the “google-fresh-factor” URL example above.

Where do we get all of this knowledge about keyword demand and keyword referrals? From research sources like these:

Moz Keyword Explorer

Google AdWords Keyword Planner Tool

Google Trends

Microsoft Bing Ads Intelligence

Wordtracker’s Free Basic Keyword Demand

We at Webcraft built the Keyword Explorer tool from the ground up to help streamline and improve how you discover and prioritize keywords. Keyword Explorer provides accurate monthly search volume data, an idea of how difficult it will be to rank for your keyword, estimated click-through rate, and a score representing your potential to rank. It also suggests related keywords for you to research. Because it cuts out a great deal of manual work and is free to try, we recommend starting there.

Google’s AdWords Keyword Planner tool is another common starting point for SEO keyword research. It not only suggests keywords and provides estimated search volume, but also predicts the cost of running paid campaigns for these terms. To determine volume for a particular keyword, be sure to set the Match Type to [Exact] and look under Local Monthly Searches. Remember that these represent total searches. Depending on your ranking and click-through rate, the actual number of visitors you achieve for these keywords will usually be much lower.
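
As a purely illustrative calculation (the click-through figure here is an assumption, not a published benchmark): a keyword showing 10,000 exact-match local monthly searches, paired with a mid-page-one ranking that earns perhaps 10% of clicks, would translate into roughly 1,000 visits per month rather than 10,000.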

Other sources for keyword information exist, as do tools with more advanced data. The Webcraft blog category on Keyword Research is an excellent place to start. If you’re looking for more hands-on instruction, you can also check out Webcraft’s premium Keyword Research Workshop.

Websites that have earned trusted status are often treated differently from those that have not.

SEOs have commented on the double standards that exist for judging big-brand, high-importance sites compared to newer, independent sites. For the search engines, trust most likely has to do with the links your domain has earned. If you publish low-quality, duplicate content on your personal blog, then buy several links from spammy directories, you’re likely to encounter considerable ranking problems. However, if you were to post that same content on Wikipedia, even with the same spammy links pointing to the URL, it would likely still rank tremendously well. Such is the power of domain trust and authority.

Trust can also be established through inbound links. A little duplicate content and a few suspicious links are far more likely to be overlooked if your site has earned hundreds of links from high-quality, editorial sources like CNN.com or Cornell.edu.