The DOWNLOADER_MIDDLEWARES setting is merged with the
DOWNLOADER_MIDDLEWARES_BASE setting defined in Scrapy (and not meant
to be overridden) and then sorted by order to get the final sorted list of
enabled middlewares: the first middleware is the one closest to the engine and
the last is the one closest to the downloader. In other words,
the process_request()
method of each middleware will be invoked in increasing
middleware order (100, 200, 300, …) and the process_response() method
of each middleware will be invoked in decreasing order.

To decide which order to assign to your middleware see the
DOWNLOADER_MIDDLEWARES_BASE setting and pick a value according to
where you want to insert the middleware. The order does matter because each
middleware performs a different action and your middleware could depend on some
previous (or subsequent) middleware being applied.

If you want to disable a built-in middleware (the ones defined in
DOWNLOADER_MIDDLEWARES_BASE and enabled by default) you must define it
in your project’s DOWNLOADER_MIDDLEWARES setting and assign None
as its value. For example, if you want to disable the user-agent middleware:
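    DOWNLOADER_MIDDLEWARES = {
        'myproject.middlewares.CustomDownloaderMiddleware': 543,
        'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    }

(The myproject.middlewares.CustomDownloaderMiddleware entry is a placeholder for a
middleware of your own; assigning None to the built-in middleware is what disables it.)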

If process_request() returns None, Scrapy will continue processing this request,
executing all other middlewares until, finally, the appropriate downloader handler
is called, the request is performed, and its response is downloaded.

If it returns a Request object, Scrapy will stop calling
process_request methods and reschedule the returned request. Once the newly returned
request is performed, the appropriate middleware chain will be called on
the downloaded response.

If it raises an IgnoreRequest exception, the
process_exception() methods of installed downloader middleware will be called.
If none of them handle the exception, the errback function of the request
(Request.errback) is called. If no code handles the raised exception, it is
ignored and not logged (unlike other exceptions).

If process_response() returns a Response (it could be the same response given,
or a brand-new one), that response will continue to be processed
with the process_response() of the next middleware in the chain.

If it returns a Request object, the middleware chain is
halted and the returned request is rescheduled to be downloaded in the future.
This is the same behavior as if a request is returned from process_request()
(a short sketch follows the parameter list below).

If it raises an IgnoreRequest exception, the errback
function of the request (Request.errback) is called. If no code handles the raised
exception, it is ignored and not logged (unlike other exceptions).

Parameters:

request (Request object) – the request that originated the response

response (Response object) – the response being processed

spider (Spider object) – the spider for which this response is intended
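As a rough sketch (the middleware name and the marker string are invented for
illustration), a process_response() that retries pages carrying a transient-error
message could look like this:

    class SoftErrorRetryMiddleware:
        def process_response(self, request, response, spider):
            if b'temporarily unavailable' in response.body:
                # Returning a Request halts the chain and reschedules
                # the download.
                return request.replace(dont_filter=True)
            # Returning the response hands it to the next middleware's
            # process_response().
            return response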

If process_exception() returns None, Scrapy will continue processing this exception,
executing any other process_exception() methods of installed middleware,
until no middleware is left and the default exception handling kicks in.

If it returns a Request object, the returned request is
rescheduled to be downloaded in the future. This stops the execution of
the middleware's process_exception() methods, just as returning a
response would (a short sketch follows the parameter list below).

Parameters:

request (Request object) – the request that generated the exception

exception (Exception object) – the raised exception

spider (Spider object) – the spider for which this request is intended
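As an illustration (a hypothetical middleware; download timeouts typically surface
as twisted.internet.error.TimeoutError), a process_exception() that retries
timed-out requests might look like this:

    from twisted.internet.error import TimeoutError

    class TimeoutRetryMiddleware:
        def process_exception(self, request, exception, spider):
            if isinstance(exception, TimeoutError):
                # Returning a Request reschedules the download and stops
                # further process_exception() processing.
                return request.replace(dont_filter=True)
            # Falling through returns None, so other middlewares (and
            # eventually the default handling) deal with the exception.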

If present, this classmethod is called to create a middleware instance
from a Crawler. It must return a new instance
of the middleware. The Crawler object provides access to all Scrapy core
components, such as settings and signals; it is a way for the middleware to
access them and hook its functionality into Scrapy.
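For example, a minimal sketch (the middleware, the MY_MIDDLEWARE_ENABLED setting,
and the log message are all illustrative):

    from scrapy import signals

    class MyDownloaderMiddleware:
        def __init__(self, settings):
            # MY_MIDDLEWARE_ENABLED is a hypothetical setting name.
            self.enabled = settings.getbool('MY_MIDDLEWARE_ENABLED')

        @classmethod
        def from_crawler(cls, crawler):
            # The crawler exposes core components such as settings and signals.
            mw = cls(crawler.settings)
            crawler.signals.connect(mw.spider_opened, signal=signals.spider_opened)
            return mw

        def spider_opened(self, spider):
            spider.logger.info('MyDownloaderMiddleware enabled for %s', spider.name)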

This page describes all downloader middleware components that come with
Scrapy. For information on how to use them and how to write your own downloader
middleware, see the downloader middleware usage guide.

This middleware enables working with sites that require cookies, such as
those that use sessions. It keeps track of cookies sent by web servers, and
sends them back on subsequent requests (from that spider), just like web
browsers do.

The following settings can be used to configure the cookie middleware:

There is support for keeping multiple cookie sessions per spider by using the
cookiejar Request meta key. By default it uses a single cookie jar
(session), but you can pass an identifier to use different ones.
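For example, in a spider callback (urls and parse_page are assumed to be defined):

    for i, url in enumerate(urls):
        yield scrapy.Request(url, meta={'cookiejar': i},
                             callback=self.parse_page)

Keep in mind that the cookiejar meta key is not “sticky”: you need to keep
passing it along on subsequent requests for them to share the same jar.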

Whether to enable the cookies middleware. If disabled, no cookies will be sent
to web servers.

Notice that if the Request has meta['dont_merge_cookies'] evaluated to True,
then regardless of the value of COOKIES_ENABLED, cookies will not be sent to
web servers and cookies received in the Response will not be merged with the
existing cookies.
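For instance, to fetch a page without reading from or writing to the cookie jar
(inside a spider callback, with url assumed defined):

    yield scrapy.Request(url, meta={'dont_merge_cookies': True})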

This policy has no awareness of any HTTP Cache-Control directives.
Every request and its corresponding response are cached. When the same
request is seen again, the response is returned without transferring
anything from the Internet.

The Dummy policy is useful for testing spiders faster (without having
to wait for downloads every time) and for trying your spider offline,
when an Internet connection is not available. The goal is to be able to
“replay” a spider run exactly as it ran before.

This policy provides an RFC2616-compliant HTTP cache, i.e. with HTTP
Cache-Control awareness. It is aimed at production and used in continuous
runs to avoid downloading unmodified data (to save bandwidth and speed up crawls).

What is implemented:

Do not attempt to store responses/requests with no-store cache-control directive set

Do not serve responses from cache if no-cache cache-control directive is set even for fresh responses

pickled_meta - the same metadata in meta but pickled for more
efficient deserialization

The directory name is made from the request fingerprint (see
scrapy.utils.request.fingerprint), and one level of subdirectories is
used to avoid creating too many files in the same directory (which is
inefficient in many file systems). An example directory could be:
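    /path/to/cache/dir/example.com/72/72811f648e718090f041317756c03adb0ada46c7

(Illustrative: example.com is the spider name and 72 is the first two characters
of the request fingerprint.)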

A LevelDB storage backend is also available for the HTTP cache middleware.

This backend is not recommended for development because only one process can
access LevelDB databases at the same time, so you can’t run a crawl and open
the scrapy shell in parallel for the same spider.

The directory to use for storing the (low-level) HTTP cache. If empty, the HTTP
cache will be disabled. If a relative path is given, it is taken relative to the
project data dir. For more info see: Default structure of Scrapy projects.
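A minimal sketch for settings.py (assuming the conventional 'httpcache'
directory name):

    HTTPCACHE_DIR = 'httpcache'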

A spider may wish to have all responses available in the cache, for
future use with Cache-Control: max-stale, for instance. The
DummyPolicy caches all responses but never revalidates them, and
sometimes a more nuanced policy is desirable.

This setting still respects Cache-Control: no-store directives in responses.
If you don’t want that, filter no-store out of the Cache-Control headers in
responses you feed to the cache middleware.

Sites often set “no-store”, “no-cache”, “must-revalidate”, etc., but get
upset at the traffic a spider can generate if it respects those
directives. This setting allows you to selectively ignore Cache-Control
directives that are known to be unimportant for the sites being crawled.
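For example, using the HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS setting
(the directives listed here are illustrative):

    HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = ['no-cache', 'no-store']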

We assume that the spider will not issue Cache-Control directives
in requests unless it actually needs them, so directives in requests are
not filtered.

This middleware sets the HTTP proxy to use for requests, by setting the
proxy meta value for Request objects.

Like the Python standard library modules urllib and urllib2, it obeys
the following environment variables:

http_proxy

https_proxy

no_proxy

You can also set the meta key proxy per-request, to a value like
http://some_proxy_server:port or http://username:password@some_proxy_server:port.
Keep in mind that this value takes precedence over the http_proxy/https_proxy
environment variables, and that it also ignores the no_proxy environment variable.
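For example, in a spider callback (the proxy address is a placeholder):

    yield scrapy.Request(url, meta={'proxy': 'http://some_proxy_server:port'})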

If Request.meta has dont_redirect
key set to True, the request will be ignored by this middleware.

If you want to handle some redirect status codes in your spider, you can
specify these in the handle_httpstatus_list spider attribute.

For example, if you want the redirect middleware to ignore 301 and 302
responses (and pass them through to your spider) you can do this:

    class MySpider(CrawlSpider):
        handle_httpstatus_list = [301, 302]

The handle_httpstatus_list key of Request.meta can also be used to specify which response codes to
allow on a per-request basis. You can also set the meta key
handle_httpstatus_all to True if you want to allow any response code
for a request.
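For example, to let one request’s 404 response reach the spider (the status
code is chosen for illustration):

    yield scrapy.Request(url, meta={'handle_httpstatus_list': [404]})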

A middleware to retry failed requests that are potentially caused by
temporary problems such as a connection timeout or HTTP 500 error.

Failed pages are collected during the scraping process and rescheduled at the
end, once the spider has finished crawling all regular (non-failed) pages.
Once there are no more failed pages to retry, this middleware sends a signal
(retry_complete), so other extensions could connect to that signal.

The RetryMiddleware can be configured through the following
settings (see the settings documentation for more info):
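The settings in question are RETRY_ENABLED, RETRY_TIMES and RETRY_HTTP_CODES.
A sketch for settings.py (values illustrative, not necessarily the defaults):

    RETRY_ENABLED = True
    RETRY_TIMES = 2  # maximum number of retries per failed page
    RETRY_HTTP_CODES = [500, 502, 503, 504, 408]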

Scrapy finds ‘AJAX crawlable’ pages for URLs like
'http://example.com/#!foo=bar' even without this middleware.
AjaxCrawlMiddleware is necessary when the URL doesn’t contain '#!'.
This is often the case for ‘index’ or ‘main’ website pages.