Scrapy uses signals extensively to notify when certain events occur. You can
catch some of those signals in your Scrapy project (using an extension, for example) to perform additional tasks or extend Scrapy
to add functionality not provided out of the box.

Even though signals provide several arguments, the handlers that catch them
don’t need to accept all of them: the signal dispatching mechanism only
delivers the arguments that the handler’s signature accepts.
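To make that selective delivery concrete, here is a simplified, pure-Python sketch of the idea; it is not Scrapy's actual implementation (Scrapy's dispatching is built on pydispatcher), and the handler names are illustrative:

```python
import inspect


def send_signal(handlers, **kwargs):
    """Call each handler with only the keyword arguments it accepts.

    A simplified illustration of selective dispatch, not Scrapy's
    real mechanism.
    """
    for handler in handlers:
        params = inspect.signature(handler).parameters
        # A handler taking **kwargs gets everything passed through.
        if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
            handler(**kwargs)
        else:
            handler(**{k: v for k, v in kwargs.items() if k in params})


# One handler wants both arguments, the other only one; both work.
def on_closed(spider, reason):
    print(f"{spider} closed: {reason}")


def on_closed_short(spider):
    print(f"{spider} closed")


send_signal([on_closed, on_closed_short], spider="dmoz", reason="finished")
# prints "dmoz closed: finished" then "dmoz closed"
```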

You can connect to signals (or send your own) through the
Signals API.

Here is a simple example showing how you can catch signals and perform some action:

reason (str) – a string describing why the spider was closed. If the
spider completed scraping, the reason is 'finished'. If the spider was
manually closed by calling the close_spider engine method, the reason
is the one passed in the reason argument of that method (which
defaults to 'cancelled'). If the engine was shut down (for example, by
hitting Ctrl-C to stop it), the reason is 'shutdown'.
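A handler can branch on the reason value it receives. A minimal sketch, with the handler name and return strings purely illustrative:

```python
def spider_closed(spider, reason):
    # reason is one of the strings described above.
    if reason == "finished":
        return "completed scraping"
    if reason == "shutdown":
        return "engine was shut down (e.g. Ctrl-C)"
    # "cancelled" (the close_spider default) or any custom reason string
    return f"closed early: {reason}"
```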

Scheduling some requests in your spider_idle handler does not
guarantee that the spider is kept open, although it sometimes can.
That’s because the spider may still become idle again if all the
scheduled requests are rejected by the scheduler (e.g. filtered as
duplicates).