Robots.txt and Invisible Characters – How One Hidden Character Could Cause SEO Problems

If you’ve read some of my blog posts in the past, then you know I perform a lot of SEO technical audits. As one of the checks during SEO audits, I always analyze a client’s robots.txt file to ensure it’s not blocking important directories or files. If you’re not familiar with robots.txt, it’s a text file that sits in the root directory of your website and is used to inform the search engine bots which directories or files they should not crawl. You can also add autodiscovery for your XML sitemaps (which is a smart directive to include in a robots.txt file).

Anyway, I came across an interesting situation recently that I wanted to share. My hope is that this post can help some companies avoid a potentially serious SEO issue that was not readily apparent. Actually, the problem could not be detected by the naked eye. And when a problem impacts your robots.txt file, the bots won’t follow your instructions. And when the bots don’t follow instructions, they can potentially be unleashed into content that should never get crawled. Let’s explore this situation in greater detail.

A sample robots.txt file:
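A minimal example, with placeholder directory names and sitemap URL:

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /private/

Sitemap: https://www.example.com/sitemap.xml
```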

Technical SEO – Cloaked Danger in a Robots.txt File
During my first check of the robots.txt file, everything looked fine. A number of directories were being blocked for all search engines, and autodiscovery was in place, which was great. Then I checked Google Webmaster Tools to perform some manual checks on various files and directories (using Google’s “Blocked URLs” functionality). Unfortunately, a number of errors were showing within the analysis section.

The first error message flagged the User-agent line (the first line in the file). Googlebot was choking on that line for some reason, even though it looked completely fine. And as you can guess, none of the directives listed in the file were being adhered to. That meant potentially thousands of files would be crawled that shouldn’t be, all because of a problem hiding below the surface… literally.

Blocked URLs reporting in Google Webmaster Tools:

Word Processors and Hidden Characters
So I started running the file through several robots.txt checkers. Again, the file looked completely fine to me. The first few tools returned errors but wouldn’t explain exactly what was wrong. Then I came across one that revealed more information: there was an extra, hidden character at the very beginning of the robots.txt file. That hidden character was throwing off the format of the file, and the bots were choking on it. With the syntax broken, the bots wouldn’t follow the instructions. Not good.
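The post above doesn’t identify exactly which hidden character was involved, but a UTF-8 byte order mark (BOM) is a common culprit when a file has passed through a word processor. A quick sketch of how you could check for one yourself, by reading robots.txt as raw bytes:

```python
# Report any hidden bytes sitting in front of the first directive of a
# robots.txt file. A UTF-8 BOM is assumed to be the most likely cause;
# the original post doesn't specify which character it actually was.

def find_hidden_prefix(data: bytes):
    """Return a description of hidden bytes at the start of the file, or None."""
    boms = {
        b"\xef\xbb\xbf": "UTF-8 BOM",
        b"\xff\xfe": "UTF-16 LE BOM",
        b"\xfe\xff": "UTF-16 BE BOM",
    }
    for bom, name in boms.items():
        if data.startswith(bom):
            return name
    if data[:1].isspace():
        return "leading whitespace"
    return None

good = b"User-agent: *\nDisallow: /private/\n"
bad = b"\xef\xbb\xbf" + good  # the same file saved with a BOM

print(find_hidden_prefix(good))  # None
print(find_hidden_prefix(bad))   # UTF-8 BOM
```

In most editors the two files look identical, which is exactly why this class of problem is invisible to the naked eye.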

I immediately sent this off to my client, and their dev team tracked down the hidden character and created a new robots.txt file. The new file was uploaded quickly (within a few hours), and all checks are clean now. The bots are adhering to the directives included in robots.txt.
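The fix itself is simple once the character is found. A minimal sketch, assuming the hidden character was a UTF-8 BOM (the filename and content here are placeholders):

```python
# Strip a UTF-8 BOM from the start of robots.txt content, if present,
# so the file begins cleanly at the first directive.
BOM = b"\xef\xbb\xbf"

def strip_bom(data: bytes) -> bytes:
    return data[len(BOM):] if data.startswith(BOM) else data

raw = BOM + b"User-agent: *\nDisallow: /private/\n"
clean = strip_bom(raw)

print(clean.startswith(b"User-agent"))  # True
```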

The SEO Problems This Scenario Raises
I think this simple example underscores the fact that there’s not a lot of room for error with technical SEO… it must be precise. In this case, one hidden character in a robots.txt file unleashed the bots on a lot of content that should never be crawled. Sure, there are other mechanisms to make sure content doesn’t get indexed, like the proper use of the meta robots tag, but that’s for another post. For my client, a robots.txt file was created that looked completely fine, yet one character was off (and it was hidden). And that one character forced the bots to choke on the file.

How To Avoid Robots.txt Formatting Issues
I think one person at my client’s company summed up this situation perfectly when she said, “It seems you have little room for error. SEO seems so delicate.” Yes, she’s right (with technical SEO, at least). Below, I’m going to list some simple things you can do to avoid this scenario. If you follow these steps, you can avoid faulty robots.txt files that look accurate to the naked eye.

1. Text Editors
Always use a text editor when creating your robots.txt file. Don’t use a word processing application like Microsoft Word. A text editor is meant to create raw text files and won’t throw extra characters into your file by accident.

2. Double and Triple Check Your robots.txt Directives
Make sure each directive does exactly what you think it will do. If you aren’t 100% sure, ask for help. Don’t upload a robots.txt file that could block a bunch of important content (or, conversely, leave content open to crawling that shouldn’t be).

3. Test Your robots.txt File in Google Webmaster Tools and Via Third-Party Tools
Make sure the syntax of your robots.txt file is correct and that it’s blocking the directories and files you want it to. Note that Google Webmaster Tools lets you copy and paste a new robots.txt file into a form and test it. I highly recommend doing this BEFORE uploading a new file to your site.
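Beyond web-based checkers, you can also sanity-check directives locally with Python’s built-in urllib.robotparser (the domain and paths below are hypothetical):

```python
# Parse a robots.txt file and verify which URLs it blocks, using the
# standard library's robotparser. The rules and URLs are examples only.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
Sitemap: https://www.example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "https://www.example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://www.example.com/public/page.html"))   # True
```

A few assertions like this against your most important directories make a cheap pre-upload check.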

4. Monitor Google Webmaster Tools “Blocked URLs” Reporting
The Blocked URLs functionality will reveal problems associated with your robots.txt file under the “analysis” section. Remember, this is where I picked up the problem covered in this post.

Extra Characters in Robots.txt – Cloaked in Danger
There you have it. One hidden character bombed a robots.txt file. The problem was invisible to the naked eye, but the bots were choking on it. And depending on your specific site, that one character could have led to thousands of pages getting crawled that shouldn’t be. I hope this post helped you understand that your robots.txt format and syntax are extremely important, that you should double and triple check your file, and that you can test and monitor that file over time. If the wrong file is uploaded to your website, bad things can happen. Avoid this scenario.