Website Validator

What are the most common errors on your website? Summarize the HTML errors across an entire domain with the Website Validator.

Domain:

Parameters to Crawl
Some URL parameters can change page content. Which parameters should the spider pay attention to when crawling?



Directories and URLs to Exclude
Excluding pages can reduce the load on the crawler and keep you from reaching the URL cap, so you can analyze more of your site. Enter the full path, or a substring of the URLs you wish to exclude.




About the Website Validator

What are the most common errors on your website? The Website Validator crawls a website, runs the contents through a W3C HTML validator, and summarizes the results for you. It breaks down errors and warnings by type, and gives you the overall percentage of error-free pages. In the Error Summary tab you'll find a detailed description of all the errors encountered on your site. Click over to the Pages With Errors tab to start tackling those problem spots. Keep in mind that not all errors in HTML markup are equally problematic; many have little to no impact on how the page is displayed to the user.
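The per-type breakdown described above can be sketched in a few lines. This is an illustrative example, not Datayze's actual code; it assumes a report in the JSON shape the Nu HTML Checker emits (a top-level `"messages"` list whose entries carry a `"type"` field):

```python
from collections import Counter

def summarize_messages(report):
    """Tally validator messages by type (e.g. "error", "info").

    `report` is assumed to follow the Nu HTML Checker's JSON output:
    {"messages": [{"type": "error", "message": "..."}, ...]}
    """
    counts = Counter(m.get("type", "unknown") for m in report.get("messages", []))
    return dict(counts)

# A hand-written sample report, standing in for a real validator response.
sample = {"messages": [
    {"type": "error", "message": "Element not allowed here."},
    {"type": "error", "message": "Duplicate ID."},
    {"type": "info", "subType": "warning", "message": "Section lacks heading."},
]}
print(summarize_messages(sample))  # {'error': 2, 'info': 1}
```

Running the same tally across every crawled page, then dividing the count of pages with zero errors by the total, gives the error-free percentage the tool reports.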

This tool uses the web services of the W3C Markup Validation Service and the Nu HTML Checker. We use multiple validators to reduce the load on any one of them. These two services may be running different Nu HTML Checker instances, and thus may report different errors even when validating the same page.

We ask that you be kind to these third-party services. After making changes to a URL, use the "See Validation" link on the Pages With Errors tab to recheck just that page rather than rerunning the Website Validator over your entire site. When making changes to a template shared between pages, we recommend validating a single page that uses the template to verify it is error-free before rerunning the Website Validator. Our spider has a 1,000-page daily cap; these services may have their own caps.

About the Spider
DatayzeBot, the datayze spider, now respects the robots exclusion standard. To specifically allow (or disallow) the crawler to access a page or directory, create a set of rules for "DatayzeBot" in your robots.txt file. DatayzeBot will follow the longest matching rule for a given page, rather than the first matching rule. If no matching rule is found, DatayzeBot assumes it is allowed to crawl the page. Not sure whether a page is excluded by your robots.txt file? The Index/No Index app will parse HTTP headers, meta tags, and robots.txt, and summarize the results for you.
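A minimal robots.txt ruleset for this crawler might look like the following (the paths are illustrative, not paths on any real site):

```
# Rules read by DatayzeBot; other crawlers fall through to User-agent: *
User-agent: DatayzeBot
Disallow: /private/
Allow: /private/overview.html
```

Under longest-match resolution, `/private/overview.html` is crawled because the `Allow` rule matching it is longer than the `Disallow: /private/` rule, even though `Disallow` appears first.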

Our spider crawls at a leisurely rate of one page every 1.5 seconds. While the spider doesn't keep track of the contents of the pages it crawls, it does track the number of requests issued by each visitor. Currently the crawler is limited to 1,000 pages per user per day. Since DatayzeBot does not index or cache any pages it crawls, rerunning the Website Validator will count against your daily allowance of page crawls. You can work around the cap by pausing the crawler and resuming it another day.

Interested in web development? Try our other tools, like the Site Navigability Analyzer, which lets you see what a spider sees: it can analyze your anchor text diversity and find the shortest path to any page. The Thin Content Checker can analyze your site's content, report the percentage of unique phrases per page, and generate a histogram of page content lengths. A common need among web developers is to know which of their pages are being indexed, and which are not; for that, we created the Sitemap Index Analyzer.