== Brief Summary ==

This section describes how to test the robots.txt file, which is intended to direct how web spiders/robots/crawlers traverse the content of the application under test.


== Description of the Issue ==


Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1]. However, they may accidentally or intentionally retrieve web content that was not intended to be stored or published.
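As a rough illustration of this behavior, the following is a minimal, hypothetical spider sketch in Python (standard library only); the seed URL and page limit are illustrative, and a well-behaved spider would additionally honor the robots.txt rules described below:

<pre>
# Minimal, hypothetical spider sketch: fetch a page, extract its
# hyperlinks, and queue them for further retrieval (breadth-first).
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed, max_pages=5):
    queue, seen = [seed], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = urlopen(url).read().decode("utf-8", errors="replace")
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute.startswith("http"):  # skip mailto:, javascript:, etc.
                queue.append(absolute)
    return seen

print(crawl("http://www.example.com/"))
</pre>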


As an example, the robots.txt file from http://www.google.com/robots.txt taken on 24 August 2008 is quoted below:

<pre>
User-agent: *
Allow: /searchhistory/
Disallow: /news?output=xhtml&
Allow: /news?output=xhtml
Disallow: /search
Disallow: /groups
Disallow: /images
...
</pre>


The ''User-Agent'' directive refers to the specific web spider/robot/crawler. For example, ''User-Agent: Googlebot'' refers to the ''GoogleBot'' crawler, while the ''User-Agent: *'' directive in the example above applies to all web spiders/robots/crawlers [2], as quoted below:

<pre>
User-agent: *
</pre>


The ''Disallow'' directive specifies which resources spiders/robots/crawlers are not permitted to retrieve. In the example above, directories such as the following are disallowed:

<pre>
...
Disallow: /search
Disallow: /groups
Disallow: /images
...
</pre>

Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.
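A compliant client evaluates the ''User-Agent'' and ''Disallow''/''Allow'' directives before fetching a resource, but only voluntarily. The sketch below uses Python's standard-library ''urllib.robotparser'' module against the rules quoted above; the URLs are illustrative, and nothing prevents a non-compliant client from simply requesting a disallowed path:

<pre>
# Sketch: evaluating the example rules with Python's standard-library
# robots.txt parser. Compliance is voluntary; this check does not
# *prevent* a client from fetching a disallowed path.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Feed the rules quoted above directly instead of fetching them.
rp.parse([
    "User-agent: *",
    "Allow: /searchhistory/",
    "Disallow: /search",
    "Disallow: /groups",
    "Disallow: /images",
])

# The wildcard group applies to any crawler without a group of its own.
print(rp.can_fetch("Googlebot", "http://www.google.com/search"))  # False
print(rp.can_fetch("*", "http://www.google.com/groups"))          # False
# An Allow line can open a path underneath a disallowed prefix.
print(rp.can_fetch("*", "http://www.google.com/searchhistory/"))  # True
</pre>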


== Black Box testing and example ==


'''wget'''<br>


The robots.txt file is retrieved from the web root directory of the web server.


For example, to retrieve the robots.txt from www.google.com using ''wget'':
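<pre>
wget http://www.google.com/robots.txt
</pre>

The retrieved file can then be reviewed for ''Disallow'' entries that point to content the site owner may not have intended to publish.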

'''Analyze robots.txt using Google Webmaster Tools'''<br>
Google provides an "Analyze robots.txt" function as part of its "Google Webmaster Tools", which can assist with testing [4]. The procedure is as follows:

# Sign into Google Webmaster Tools with your Google Account.
# On the Dashboard, click the URL for the site you want.
# Click Tools, and then click Analyze robots.txt.