The first line will print the currently active project, if you’re inside a
Scrapy project. In this example, it was run from outside a project. If run from
inside a project it would have printed something like this:

You use the scrapy tool from inside your projects to control and manage
them.

For example, to create a new spider:

scrapy genspider mydomain mydomain.com

Some Scrapy commands (like crawl) must be run from inside a Scrapy
project. See the commands reference below for more
information on which commands must be run from inside projects, and which ones need not be.

Also keep in mind that some commands may have slightly different behaviours
when run from inside a project. For example, the fetch command will use
spider-overridden behaviours (such as the user_agent attribute to override
the user agent) if the URL being fetched is associated with some specific
spider. This is intentional, as the fetch command is meant to be used to
check how spiders are downloading pages.

This section contains a list of the available built-in commands with a
description and some usage examples. Remember you can always get more info
about each command by running:

scrapy <command> -h

And you can see all available commands with:

scrapy -h

There are two kinds of commands: those that only work from inside a Scrapy
project (Project-specific commands) and those that also work without an active
Scrapy project (Global commands), though they may behave slightly differently
when run from inside a project (as they would use the project-overridden
settings).

This is just a convenient shortcut command for creating spiders based on
pre-defined templates, but certainly not the only way to create spiders. You
can just create the spider source code files yourself, instead of using this
command.

Downloads the given URL using the Scrapy downloader and writes the contents to
standard output.

The interesting thing about this command is that it fetches the page the way
the spider would download it. For example, if the spider has a USER_AGENT
attribute which overrides the User Agent, it will use that one.

So this command can be used to “see” how your spider would fetch a certain page.

If used outside a project, no particular per-spider behaviour is applied
and it will just use the default Scrapy downloader settings.

Opens the given URL in a browser, as your Scrapy spider would “see” it.
Sometimes spiders see pages differently from regular users, so this can be used
to check what the spider “sees” and confirm it’s what you expect.