NAME
WWW::RobotRules::Parser - Just Parse robots.txt
SYNOPSIS
  use WWW::RobotRules::Parser;

  my $p = WWW::RobotRules::Parser->new;
  $p->parse($robots_txt_uri, $text);
  $p->parse_uri($robots_txt_uri);
DESCRIPTION
WWW::RobotRules::Parser allows you to simply parse robots.txt files as
described in http://www.robotstxt.org/wc/norobots.html. Unlike
WWW::RobotRules (which is very cool), this module does not take into
consideration your user agent name when parsing. It just parses the
structure and returns a hash containing the whole set of rules, which you
can then use however you like.
I mainly wrote this to store away the parsed data structure elsewhere for
later use, without having to specify a user agent.
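For example, a minimal sketch of fetching a robots.txt and stashing the
parsed structure with Storable might look like the following. The URL and
file name are placeholders, and scalar context is assumed to yield the
hashref described under parse() below:

  use WWW::RobotRules::Parser;
  use Storable qw(nstore);

  my $p = WWW::RobotRules::Parser->new;

  # The URL is a placeholder; this assumes scalar context yields the
  # hashref described under parse() below.
  my $rules = $p->parse_uri('http://www.example.com/robots.txt');

  # Stash the plain data structure for later use; no user agent needed.
  nstore($rules, 'robots_rules.sto');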
METHODS
new
Creates a new instance of WWW::RobotRules::Parser
parse($uri, $text)
Given the URI of the robots.txt file and its contents, parses the
content and returns a data structure that looks like the following:
  {
    '*' => [ '/private', '/also_private' ],
    'Another UserAgent' => [ '/dont_look' ]
  }
Where the key is the user agent name, and the value is an arrayref of all
paths that are prohibited for that user agent.
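A short sketch of parsing a robots.txt body supplied as text and walking the
result. The URI and rules below are made up to match the structure above,
and a hashref return value is assumed:

  use WWW::RobotRules::Parser;

  # Hypothetical robots.txt body matching the structure shown above.
  my $text = join "\n",
      'User-agent: *',
      'Disallow: /private',
      'Disallow: /also_private',
      '',
      'User-agent: Another UserAgent',
      'Disallow: /dont_look',
      '';

  my $p = WWW::RobotRules::Parser->new;

  # The URI is a placeholder; this assumes scalar context yields the
  # hashref shown above.
  my $rules = $p->parse('http://www.example.com/robots.txt', $text);

  # Walk the parsed structure: user agent name => arrayref of paths.
  while (my ($agent, $paths) = each %$rules) {
      print "$agent: @$paths\n";
  }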
parse_uri($uri)
Given the URI of the robots.txt file, retrieves and parses the file.
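A brief usage sketch; the URL is a placeholder, and a hashref return value
is assumed as above:

  use WWW::RobotRules::Parser;

  my $p = WWW::RobotRules::Parser->new;
  my $rules = $p->parse_uri('http://www.example.com/robots.txt');

  # Paths prohibited for all user agents, if any were listed.
  my @default_disallow = @{ $rules->{'*'} || [] };
  print "Disallowed for '*': @default_disallow\n";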
AUTHOR
Copyright (c) 2006 Daisuke Maki. All rights reserved.