Spiders are classes which define how a certain site (or a group of sites) will be
scraped, including how to perform the crawl (i.e. follow links) and how to
extract structured data from their pages (i.e. scraping items). In other words,
Spiders are the place where you define the custom behaviour for crawling and
parsing pages for a particular site (or, in some cases, a group of sites).

For spiders, the scraping cycle goes through something like this:

You start by generating the initial Requests to crawl the first URLs, and
specify a callback function to be called with the response downloaded from
those requests.

The first requests to perform are obtained by calling the
start_requests() method, which (by default)
generates a Request for each URL specified in
start_urls, with the
parse method as the callback function for those
Requests.

In the callback function, you parse the response (web page) and return either
dicts with extracted data, Item objects,
Request objects, or an iterable of these objects.
Those Requests will also contain a callback (maybe
the same) and will then be downloaded by Scrapy, with their
responses handled by the specified callback.

In callback functions, you parse the page contents, typically using
Selectors (but you can also use BeautifulSoup, lxml or whatever
mechanism you prefer) and generate items with the parsed data.

Finally, the items returned from the spider will be typically persisted to a
database (in some Item Pipeline) or written to
a file using Feed exports.

Even though this cycle applies (more or less) to any kind of spider, there are
different kinds of default spiders bundled into Scrapy for different purposes.
We will talk about those types here.

scrapy.Spider is the simplest spider, and the one from which every other spider
must inherit (including spiders that come bundled with Scrapy, as well as spiders
that you write yourself). It doesn’t provide any special functionality. It just
provides a default start_requests() implementation which sends requests from
the start_urls spider attribute and calls the spider’s parse method
for each of the resulting responses.

The name attribute is a string which defines the name for this spider. The spider
name is how the spider is located (and instantiated) by Scrapy, so it must be
unique. However, nothing prevents you from instantiating more than one
instance of the same spider. This is the most important spider attribute
and it’s required.

If the spider scrapes a single domain, a common practice is to name the
spider after the domain, with or without the TLD. So, for example, a
spider that crawls mywebsite.com would often be called
mywebsite.

The allowed_domains attribute is an optional list of strings containing the
domains that this spider is allowed to crawl. Requests for URLs not belonging
to the domain names specified in this list (or their subdomains) won’t be
followed if OffsiteMiddleware is enabled.

Let’s say your target URL is https://www.example.com/1.html;
then add 'example.com' to the list.

The start_urls attribute is a list of URLs where the spider will begin to crawl
from, when no particular URLs are specified. So, the first pages downloaded will
be those listed here. The subsequent URLs will be generated successively from
data contained in the start URLs.

The custom_settings attribute is a dictionary of settings that will be overridden
from the project-wide configuration when running this spider. It must be defined
as a class attribute since the settings are updated before instantiation.
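
For example, a minimal sketch (the particular settings shown are illustrative; any built-in setting can be overridden this way):

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    # these values override the project-wide settings for this spider only
    custom_settings = {
        'DOWNLOAD_DELAY': 1.0,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
    }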

This method must return an iterable with the first Requests to crawl for
this spider. It is called by Scrapy when the spider is opened for
scraping. Scrapy calls it only once, so it is safe to implement
start_requests() as a generator.

The default implementation generates Request(url, dont_filter=True)
for each url in start_urls.

If you want to change the Requests used to start scraping a domain, this is
the method to override. For example, if you need to start by logging in using
a POST request, you could do:

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        return [scrapy.FormRequest("http://www.example.com/login",
                                   formdata={'user': 'john', 'pass': 'secret'},
                                   callback=self.logged_in)]

    def logged_in(self, response):
        # here you would extract links to follow and return Requests for
        # each of them, with another callback
        pass

The closed(reason) method is called when the spider closes. This method provides
a shortcut to signals.connect() for the spider_closed signal.
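
A minimal sketch of using it (the log message is illustrative):

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['http://www.example.com']

    def parse(self, response):
        pass

    def closed(self, reason):
        # called automatically when the spider finishes;
        # 'reason' is e.g. 'finished' or 'shutdown'
        self.logger.info('Spider closed (%s)', reason)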

Let’s see an example:

import scrapy

class MySpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
        'http://www.example.com/3.html',
    ]

    def parse(self, response):
        self.logger.info('A response from %s just arrived!', response.url)

Spiders can receive arguments that modify their behaviour. Some common uses for
spider arguments are to define the start URLs or to restrict the crawl to
certain sections of the site, but they can be used to configure any
functionality of the spider.

Spider arguments are passed through the crawl command using the
-a option. For example:
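
A minimal sketch, assuming a hypothetical category argument used to build the start URL:

scrapy crawl myspider -a category=electronics

Arguments are passed to the spider’s __init__ method (and, by default, also set as spider attributes), so the spider can pick them up like this:

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def __init__(self, category=None, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        # build the start URL from the argument passed with -a category=...
        self.start_urls = ['http://www.example.com/categories/%s' % category]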

Keep in mind that spider arguments are only strings.
The spider will not do any parsing on its own.
If you were to set the start_urls attribute from the command line,
you would have to parse it on your own into a list
using something like
ast.literal_eval
or json.loads
and then set it as an attribute.
Otherwise, you would cause iteration over a start_urls string
(a very common Python pitfall),
resulting in each character being seen as a separate URL.
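
A minimal sketch of that approach, assuming the URLs are passed as a JSON-encoded list under a hypothetical urls argument:

import json

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def __init__(self, urls='[]', *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        # e.g. scrapy crawl myspider -a urls='["http://example.com/1.html", "http://example.com/2.html"]'
        # json.loads turns the single string argument into a proper list of URLs
        self.start_urls = json.loads(urls)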

Scrapy comes with some useful generic spiders that you can use to subclass
your spiders from. Their aim is to provide convenient functionality for a few
common scraping cases, like following all links on a site based on certain
rules, crawling from Sitemaps, or parsing an XML/CSV feed.

For the examples used in the following spiders, we’ll assume you have a project
with a TestItem declared in a myproject.items module:
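
import scrapy

class TestItem(scrapy.Item):
    # a minimal item definition matching the fields used in the examples below
    id = scrapy.Field()
    name = scrapy.Field()
    description = scrapy.Field()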

CrawlSpider is the most commonly used spider for crawling regular websites, as it
provides a convenient mechanism for following links by defining a set of rules.
It may not be the best suited for your particular website or project, but
it’s generic enough for several cases, so you can start from it and override it
as needed for more custom functionality, or just implement your own spider.

Apart from the attributes inherited from Spider (that you must
specify), this class supports a new attribute:

This new attribute is rules, a list of one (or more) Rule objects. Each Rule
defines a certain behaviour for crawling the site. Rule objects are
described below. If multiple rules match the same link, the first one
will be used, according to the order they’re defined in this attribute.

link_extractor is a Link Extractor object which
defines how links will be extracted from each crawled page.

callback is a callable or a string (in which case a method from the spider
object with that name will be used) to be called for each link extracted with
the specified link_extractor. This callback receives a response as its first
argument and must return a list containing Item and/or
Request objects (or any subclass of them).

Warning

When writing crawl spider rules, avoid using parse as
callback, since the CrawlSpider uses the parse method
itself to implement its logic. So if you override the parse method,
the crawl spider will no longer work.

cb_kwargs is a dict containing the keyword arguments to be passed to the
callback function.

follow is a boolean which specifies if links should be followed from each
response extracted with this rule. If callback is None, follow defaults
to True; otherwise, it defaults to False.

process_links is a callable, or a string (in which case a method from the
spider object with that name will be used) which will be called for each list
of links extracted from each response using the specified link_extractor.
This is mainly used for filtering purposes.

process_request is a callable, or a string (in which case a method from
the spider object with that name will be used) which will be called with
every request extracted by this rule, and must return a request or None (to
filter out the request).
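
For instance, here is a sketch of a rule using both hooks (the method names and the filtering criteria are illustrative):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class FilteringSpider(CrawlSpider):
    name = 'filtering_example'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        Rule(LinkExtractor(allow=(r'item\.php',)),
             callback='parse_item',
             process_links='drop_print_links',   # filters the list of extracted links
             process_request='tag_request'),     # adjusts (or drops) each request
    )

    def drop_print_links(self, links):
        # hypothetical filter: skip printer-friendly versions of item pages
        return [link for link in links if 'print=' not in link.url]

    def tag_request(self, request):
        # returning None here would drop the request entirely
        request.meta['from_rule'] = True
        return request

    def parse_item(self, response):
        self.logger.info('Item page: %s', response.url)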

Let’s now take a look at an example CrawlSpider with rules:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from myproject.items import TestItem

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(LinkExtractor(allow=(r'category\.php',), deny=(r'subsection\.php',))),

        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(LinkExtractor(allow=(r'item\.php',)), callback='parse_item'),
    )

    def parse_item(self, response):
        self.logger.info('Hi, this is an item page! %s', response.url)
        item = TestItem()
        item['id'] = response.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = response.xpath('//td[@id="item_name"]/text()').extract()
        item['description'] = response.xpath('//td[@id="item_description"]/text()').extract()
        return item

This spider would start crawling example.com’s home page, collecting category
links and item links, parsing the latter with the parse_item method. For
each item response, some data will be extracted from the HTML using XPath, and
an Item will be filled with it.

XMLFeedSpider is designed for parsing XML feeds by iterating through them by a
certain node name. The iterator can be chosen from: iternodes, xml,
and html. It’s recommended to use the iternodes iterator for
performance reasons, since the xml and html iterators generate the
whole DOM at once in order to parse it. However, using html as the
iterator may be useful when parsing XML with bad markup.

To set the iterator and the tag name, you must define the iterator and
itertag class attributes (both are shown in the example further below).

The namespaces attribute is a list of (prefix, uri) tuples which define the
namespaces available in that document that will be processed with this spider.
The prefix and uri will be used to automatically register
namespaces using the
register_namespace() method.
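
For example, here is a sketch assuming a sitemap-style feed (the prefix n, the namespace URI, and the element names are illustrative):

from scrapy.spiders import XMLFeedSpider

class SitemapFeedSpider(XMLFeedSpider):
    name = 'sitemap_feed'
    start_urls = ['http://www.example.com/sitemap.xml']
    iterator = 'xml'
    # register the sitemap namespace under the prefix 'n' and use it in itertag
    namespaces = [('n', 'http://www.sitemaps.org/schemas/sitemap/0.9')]
    itertag = 'n:url'

    def parse_node(self, response, node):
        # the registered prefix is also available in XPath expressions on each node
        return {'loc': node.xpath('n:loc/text()').extract_first()}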

adapt_response() is a method that receives the response as soon as it arrives
from the spider middleware, before the spider starts parsing it. It can be used
to modify the response body before parsing. This method receives a response and
also returns a response (which could be the same or another one).
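
As a sketch, assuming the feed sometimes arrives with a leading byte-order mark that trips the XML parser, you could strip it before parsing:

from scrapy.spiders import XMLFeedSpider

class CleanFeedSpider(XMLFeedSpider):
    name = 'clean_feed'
    start_urls = ['http://www.example.com/feed.xml']
    itertag = 'item'

    def adapt_response(self, response):
        # hypothetical clean-up: remove a UTF-8 BOM before the body is parsed
        body = response.body.lstrip(b'\xef\xbb\xbf')
        return response.replace(body=body)

    def parse_node(self, response, node):
        return {'name': node.xpath('name/text()').extract_first()}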

parse_node() is called for the nodes matching the provided tag name
(itertag). It receives the response and a
Selector for each node. Overriding this
method is mandatory; otherwise, your spider won’t work. This method
must return either an Item object, a
Request object, or an iterable containing any of
them.

process_results() is called for each result (item or request) returned by the
spider, and it’s intended to perform any final processing required
before returning the results to the framework core, for example setting the
item IDs. It receives a list of results and the response which originated
those results. It must return a list of results (Items or Requests).
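
A minimal sketch, assuming you want each returned item stamped with the URL it was scraped from (the source_url key is illustrative):

from scrapy.spiders import XMLFeedSpider

class StampingFeedSpider(XMLFeedSpider):
    name = 'stamping_feed'
    start_urls = ['http://www.example.com/feed.xml']
    itertag = 'item'

    def parse_node(self, response, node):
        # return plain dicts so process_results can annotate them easily
        return {'name': node.xpath('name/text()').extract_first()}

    def process_results(self, response, results):
        # runs once per response, after parse_node, before results reach the engine
        processed = []
        for result in results:
            if isinstance(result, dict):
                result.setdefault('source_url', response.url)
            processed.append(result)
        return processed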

These spiders are pretty easy to use; let’s have a look at one example:

from scrapy.spiders import XMLFeedSpider
from myproject.items import TestItem

class MySpider(XMLFeedSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/feed.xml']
    iterator = 'iternodes'  # This is actually unnecessary, since it's the default value
    itertag = 'item'

    def parse_node(self, response, node):
        self.logger.info('Hi, this is a <%s> node!: %s', self.itertag, ''.join(node.extract()))

        item = TestItem()
        item['id'] = node.xpath('@id').extract()
        item['name'] = node.xpath('name').extract()
        item['description'] = node.xpath('description').extract()
        return item

Basically, what we did above was create a spider that downloads a feed from
the given start_urls, iterates through each of its item tags,
logs them, and stores some data in an Item.

The parse_row() method receives a response and a dict (representing each row)
with a key for each provided (or detected) header of the CSV file. This spider
also gives you the opportunity to override the adapt_response and
process_results methods for pre- and post-processing purposes.

Let’s see an example similar to the previous one, but using a
CSVFeedSpider:

from scrapy.spiders import CSVFeedSpider
from myproject.items import TestItem

class MySpider(CSVFeedSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/feed.csv']
    delimiter = ';'
    quotechar = "'"
    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        self.logger.info('Hi, this is a row!: %r', row)

        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item