Extensions use the Scrapy settings to manage their
settings, just like any other Scrapy code.

It is customary for extensions to prefix their settings with their own name, to
avoid collision with existing (and future) extensions. For example, a
hypothetical extension to handle Google Sitemaps would use settings like
GOOGLESITEMAP_ENABLED, GOOGLESITEMAP_DEPTH, and so on.

Extensions are loaded and activated at startup by instantiating a single
instance of the extension class. Therefore, all the extension initialization
code must be performed in the class constructor (__init__ method).

To make an extension available, add it to the EXTENSIONS setting in
your Scrapy settings. In EXTENSIONS, each extension is represented
by a string: the full Python path to the extension’s class name. For example:
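(A sketch: the module paths below are illustrative, and the exact paths of built-in extensions depend on your Scrapy version and project layout.)

    EXTENSIONS = {
        'scrapy.contrib.corestats.CoreStats': 500,
        'myproject.extensions.SpiderOpenCloseLogging': 500,
    }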

As you can see, the EXTENSIONS setting is a dict where the keys are
the extension paths and the values are their orders, which define the
extension loading order. Extension orders are not as important as middleware
orders, though, and they are typically irrelevant, i.e. it doesn't matter in
which order the extensions are loaded because they don't depend on each other
[1].

However, this feature can be exploited if you need to add an extension which
depends on other extensions already loaded.

[1] This is why the EXTENSIONS_BASE setting in Scrapy (which
contains all built-in extensions enabled by default) defines all the extensions
with the same order (500).

Not all available extensions will be enabled. Some of them usually depend on a
particular setting. For example, the HTTP Cache extension is available by default
but disabled unless the HTTPCACHE_ENABLED setting is set.
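For instance, enabling it in your project settings is typically just a one-line change (a sketch; further HTTPCACHE_* settings control where and how responses are cached):

    # settings.py
    HTTPCACHE_ENABLED = True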

Writing your own extension is easy. Each extension is a single Python class
which doesn’t need to implement any particular method.

All extension initialization code must be performed in the class constructor
(__init__ method). If that method raises the
NotConfigured exception, the extension will be
disabled. Otherwise, the extension will be enabled.
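For example, here is a minimal sketch of that enable/disable pattern, assuming a hypothetical MYEXT_ENABLED setting and the global settings object of older Scrapy versions (the way settings are accessed varies between versions):

    from scrapy.conf import settings             # assumption: global settings object (older Scrapy)
    from scrapy.exceptions import NotConfigured  # import path may differ in older Scrapy versions

    class MyExtension(object):

        def __init__(self):
            # MYEXT_ENABLED is a hypothetical setting used only for illustration
            if not settings.getbool('MYEXT_ENABLED'):
                raise NotConfigured  # Scrapy will disable the extension
            # ... real initialization code goes here ...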

Let’s take a look at the following example extension which just logs a message
every time a domain/spider is opened and closed:
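A sketch of such an extension, assuming the pydispatch-based signals API shipped with older Scrapy versions (the signal names, the dispatcher import and the logging helper all vary between versions):

    from scrapy import log, signals
    from scrapy.xlib.pydispatch import dispatcher  # assumption: dispatcher bundled with older Scrapy

    class SpiderOpenCloseLogging(object):

        def __init__(self):
            # connect our callbacks to the open/close signals at startup
            dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
            dispatcher.connect(self.spider_closed, signal=signals.spider_closed)

        def spider_opened(self, spider):
            log.msg("opened spider %s" % spider.name)

        def spider_closed(self, spider):
            log.msg("closed spider %s" % spider.name)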

The Extension Manager is responsible for loading and keeping track of installed
extensions, and it is configured through the EXTENSIONS setting, which
contains a dictionary of all available extensions and their orders, similar to
how you configure the downloader middlewares.

Load the available extensions configured in the EXTENSIONS
setting. On a standard run, this method is usually called by the Execution
Manager, but you may need to call it explicitly if you’re dealing with
code outside Scrapy.

1. send a notification e-mail when it exceeds a certain value
2. terminate the Scrapy process when it exceeds a certain value

The notification e-mails can be triggered when a certain warning value is
reached (MEMUSAGE_WARNING_MB) and when the maximum value is reached
(MEMUSAGE_LIMIT_MB), which will also cause the Scrapy process to be
terminated.

This extension is enabled by the MEMUSAGE_ENABLED setting and
can be configured with the following settings:
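A sketch of a typical configuration in settings.py (the threshold values are arbitrary; MEMUSAGE_NOTIFY_MAIL is assumed to be the setting listing the notification recipients):

    # settings.py
    MEMUSAGE_ENABLED = True
    MEMUSAGE_WARNING_MB = 256                    # send a warning e-mail above this value
    MEMUSAGE_LIMIT_MB = 512                      # terminate the Scrapy process above this value
    MEMUSAGE_NOTIFY_MAIL = ['devs@example.com']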

An integer which specifies a number of seconds. If the spider remains open for
more than that number of seconds, it will be automatically closed with the
reason closespider_timeout. If zero (or not set), spiders won't be closed by
timeout.

An integer which specifies a number of items. If the spider scrapes more than
that number of items and those items are passed by the item pipeline, the
spider will be closed with the reason closespider_itemcount. If zero (or
not set), spiders won't be closed by number of passed items.

An integer which specifies the maximum number of responses to crawl. If the spider
crawls more than that, it will be closed with the reason
closespider_pagecount. If zero (or not set), spiders won't be closed by
number of crawled responses.

An integer which specifies the maximum number of errors to receive before
closing the spider. If the spider generates more than that number of errors,
it will be closed with the reason closespider_errorcount. If zero (or not
set), spiders won't be closed by number of errors.
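A sketch combining these thresholds in settings.py (the values are arbitrary, and the setting names are assumed to follow the usual CLOSESPIDER_* convention matching the close reasons above):

    # settings.py
    CLOSESPIDER_TIMEOUT = 3600      # close after one hour of activity
    CLOSESPIDER_ITEMCOUNT = 1000    # ... or after 1000 items have passed the pipeline
    CLOSESPIDER_PAGECOUNT = 5000    # ... or after 5000 responses have been crawled
    CLOSESPIDER_ERRORCOUNT = 10     # ... or after 10 errors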

This simple extension can be used to send a notification e-mail every time a
domain has finished scraping, including the Scrapy stats collected. The email
will be sent to all recipients specified in the STATSMAILER_RCPTS
setting.
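A sketch of the relevant settings.py entry, assuming the e-mail delivery itself is handled by Scrapy's MAIL_* settings:

    # settings.py
    STATSMAILER_RCPTS = ['scrapy-stats@example.com']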

Dumps the stack trace and Scrapy engine status of a running process when a
SIGQUIT or SIGUSR2 signal is received. After the stack trace and engine
status are dumped, the Scrapy process continues running normally.

The dump is sent to standard output.

This extension only works on POSIX-compliant platforms (i.e. not Windows).
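For example, here is a minimal sketch of triggering the dump from another process, assuming you already know the PID of the running Scrapy process:

    import os
    import signal

    scrapy_pid = 12345                   # hypothetical PID; look it up with e.g. ps or pgrep
    os.kill(scrapy_pid, signal.SIGQUIT)  # the stack trace and engine status are dumped to its stdout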