Item Loaders provide a convenient mechanism for populating scraped Items. Even though Items can be populated using their own
dictionary-like API, Item Loaders provide a much more convenient API for
populating them from a scraping process, by automating some common tasks like
parsing the raw extracted data before assigning it.

In other words, Items provide the container of
scraped data, while Item Loaders provide the mechanism for populating that
container.

Item Loaders are designed to provide a flexible, efficient and easy mechanism
for extending and overriding different field parsing rules, either by spider,
or by source format (HTML, XML, etc) without becoming a nightmare to maintain.

To use an Item Loader, you must first instantiate it. You can either
instantiate it with a dict-like object (e.g. Item or dict) or without one, in
which case an Item is automatically instantiated in the Item Loader constructor
using the Item class specified in the ItemLoader.default_item_class
attribute.

Then, you start collecting values into the Item Loader, typically using
Selectors. You can add more than one value to
the same item field; the Item Loader will know how to “join” those values later
using a proper processing function.

from scrapy.loader import ItemLoader
from myproject.items import Product

def parse(self, response):
    l = ItemLoader(item=Product(), response=response)
    l.add_xpath('name', '//div[@class="product_name"]')
    l.add_xpath('name', '//div[@class="product_title"]')
    l.add_xpath('price', '//p[@id="price"]')
    l.add_css('stock', 'p#stock')
    l.add_value('last_updated', 'today')  # you can also use literal values
    return l.load_item()

By quickly looking at that code, we can see the name field is being
extracted from two different XPath locations in the page:

//div[@class="product_name"]

//div[@class="product_title"]

In other words, data is being collected by extracting it from two XPath
locations, using the add_xpath() method. This is the
data that will be assigned to the name field later.

Afterwards, similar calls are used for price and stock fields
(the latter using a CSS selector with the add_css() method),
and finally the last_updated field is populated directly with a literal value
(today) using a different method: add_value().

An Item Loader contains one input processor and one output processor for each
(item) field. The input processor processes the extracted data as soon as it’s
received (through the add_xpath(), add_css() or
add_value() methods) and the result of the input processor is
collected and kept inside the ItemLoader. After collecting all data, the
ItemLoader.load_item() method is called to populate and get the populated
Item object. That’s when the output processor is
called with the data previously collected (and processed using the input
processor). The result of the output processor is the final value that gets
assigned to the item.

Let’s see an example to illustrate how the input and output processors are
called for a particular field (the same applies for any other field):

1. Data from xpath1 is extracted, and passed through the input processor of
the name field. The result of the input processor is collected and kept in
the Item Loader (but not yet assigned to the item).

2. Data from xpath2 is extracted, and passed through the same input
processor used in (1). The result of the input processor is appended to the
data collected in (1) (if any).

3. This case is similar to the previous ones, except that the data is extracted
from the css CSS selector, and passed through the same input
processor used in (1) and (2). The result of the input processor is appended to the
data collected in (1) and (2) (if any).

4. This case is also similar to the previous ones, except that the value to be
collected is assigned directly, instead of being extracted from an XPath
expression or a CSS selector.
However, the value is still passed through the input processors. In this
case, since the value is not iterable it is converted to an iterable of a
single element before passing it to the input processor, because input
processors always receive iterables.

5. The data collected in steps (1), (2), (3) and (4) is passed through
the output processor of the name field.
The result of the output processor is the value assigned to the name
field in the item.
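The five steps above can be simulated in plain Python. This is a simplified sketch of the collect-then-output flow, not Scrapy's actual ItemLoader implementation; the processors and values are made up for illustration:

```python
# Simplified sketch of how an Item Loader collects values for one field.
# input_processor/output_processor are made-up examples; this is not
# Scrapy's actual ItemLoader implementation.

def input_processor(values):
    # runs on every add_* call; receives an iterable of raw values
    return [v.strip() for v in values]

def output_processor(values):
    # runs once at load_item() time, over everything collected so far
    return ' '.join(values)

collected = []  # the loader's internal per-field storage

# (1) and (2): two add_xpath() calls for the same field append their results
collected.extend(input_processor([' Plasma ']))
collected.extend(input_processor([' TV ']))

# (3): an add_css() call appends in exactly the same way
collected.extend(input_processor([' 42" ']))

# (4): add_value() with a non-iterable wraps it in a one-element list first
collected.extend(input_processor(['(refurbished)']))

# (5): load_item() passes all collected data through the output processor
item_value = output_processor(collected)
print(item_value)  # -> 'Plasma TV 42" (refurbished)'
```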

It’s worth noticing that processors are just callable objects, which are called
with the data to be parsed, and return a parsed value. So you can use any
function as input or output processor. The only requirement is that they must
accept one (and only one) positional argument, which will be an iterable.

Note

Both input and output processors must receive an iterable as their
first argument. The output of those functions can be anything. The result of
input processors will be appended to an internal list (in the Loader)
containing the collected values (for that field). The result of the output
processors is the value that will be finally assigned to the item.

The other thing you need to keep in mind is that the values returned by input
processors are collected internally (in lists) and then passed to output
processors to populate the fields.

As seen in the previous section, input and output processors can be declared in
the Item Loader definition, and it’s very common to declare input processors
this way. However, there is one more place where you can specify the input and
output processors to use: in the Item Field
metadata. Here is an example:
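To keep the sketch below runnable without Scrapy installed, it uses a minimal stand-in Field class and hand-written processors; in a real project you would use scrapy.Field together with processors such as TakeFirst, MapCompose or Join from itemloaders.processors:

```python
# Stand-ins so the declaration pattern is runnable without Scrapy installed.
# In a real project you would use scrapy.Field and processors from
# itemloaders.processors instead of these hand-written equivalents.

class Field(dict):
    """Minimal stand-in for scrapy.Field: a dict of field metadata."""

def strip_all(values):
    # example input processor: strip whitespace from each extracted value
    return [v.strip() for v in values]

def take_first(values):
    # stand-in for the TakeFirst output processor
    for v in values:
        if v is not None and v != '':
            return v

class Product:
    # processors declared in the field metadata
    name = Field(input_processor=strip_all, output_processor=take_first)
    price = Field(input_processor=strip_all, output_processor=take_first)

# An Item Loader looks the processors up in the field metadata:
values = Product.name['input_processor'](['  Plasma TV  ', '  Plasma Display  '])
first = Product.name['output_processor'](values)
print(first)  # -> 'Plasma TV'
```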

The Item Loader Context is a dict of arbitrary key/values which is shared among
all input and output processors in the Item Loader. It can be passed when
declaring, instantiating or using an Item Loader, and it is used to modify the
behaviour of the input/output processors.

For example, suppose you have a function parse_length which receives a text
value and extracts a length from it:
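A possible sketch of such a function (the parsing and unit-conversion logic here is hypothetical; the relevant part for this section is the loader_context argument):

```python
# Hypothetical parse_length: the parsing logic is illustrative only.
# The 'loader_context' argument is what signals to the Item Loader that
# this function wants to receive the active Loader context.

def parse_length(text, loader_context):
    unit = loader_context.get('unit', 'm')
    number, _, text_unit = text.strip().partition(' ')
    value = float(number)
    # convert centimetres to metres when the target unit is 'm' (assumption)
    if text_unit == 'cm' and unit == 'm':
        value /= 100
    return value

print(parse_length('100 cm', {'unit': 'm'}))   # -> 1.0
print(parse_length('100 cm', {'unit': 'cm'}))  # -> 100.0
```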

By accepting a loader_context argument the function is explicitly telling
the Item Loader that it’s able to receive an Item Loader context, so the Item
Loader passes the currently active context when calling it, and the processor
function (parse_length in this case) can thus use it.

There are several ways to modify Item Loader context values:

By modifying the currently active Item Loader context
(context attribute):

loader = ItemLoader(product)
loader.context['unit'] = 'cm'

On Item Loader instantiation (the keyword arguments of Item Loader
constructor are stored in the Item Loader context):

loader = ItemLoader(product, unit='cm')

On Item Loader declaration, for those input/output processors that support
instantiating them with an Item Loader context. MapCompose is one of
them:

The value is first passed through get_value() with the given
processors and kwargs, and then passed through the
field input processor; its result is
appended to the data collected for that field. If the field already
contains collected data, the new data is added.

The given field_name can be None, in which case values for
multiple fields may be added; the processed value should then be a dict
with field names mapped to values.

The class used to construct the selector of this
ItemLoader, if only a response is given in the constructor.
If a selector is given in the constructor this attribute is ignored.
This attribute is sometimes overridden in subclasses.

The Selector object to extract data from.
It’s either the selector given in the constructor or one created from
the response given in the constructor using the
default_selector_class. This attribute is meant to be
read-only.

Instead, you can create a nested loader with the footer selector and add values
relative to the footer. The functionality is the same but you avoid repeating
the footer selector.

Example:

loader = ItemLoader(item=Item())
# load stuff not in the footer
footer_loader = loader.nested_xpath('//footer')
footer_loader.add_xpath('social', 'a[@class = "social"]/@href')
footer_loader.add_xpath('email', 'a[@class = "email"]/@href')
# no need to call footer_loader.load_item()
loader.load_item()

You can nest loaders arbitrarily and they work with either xpath or css selectors.
As a general guideline, use nested loaders when they make your code simpler but do
not go overboard with nesting or your parser can become difficult to read.

As your project grows bigger and acquires more and more spiders, maintenance
becomes a fundamental problem, especially when each spider has many different
parsing rules with lots of exceptions, but you also want to reuse the common
processors.

Item Loaders are designed to ease the maintenance burden of parsing rules,
without losing flexibility and, at the same time, providing a convenient
mechanism for extending and overriding them. For this reason Item Loaders
support traditional Python class inheritance for dealing with differences of
specific spiders (or groups of spiders).

Suppose, for example, that some particular site encloses their product names in
three dashes (e.g. ---PlasmaTV---) and you don’t want to end up scraping
those dashes in the final product names.

Here’s how you can remove those dashes by reusing and extending the default
Product Item Loader (ProductLoader):
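A minimal sketch of the dash-stripping function (the ProductLoader subclass and MapCompose wiring shown in the comment follow the extension pattern described in this section; ProductLoader itself is assumed to exist elsewhere in the project):

```python
def strip_dashes(x):
    # remove leading/trailing dashes from a single extracted value
    return x.strip('-')

# In Scrapy you would typically plug it into a loader subclass, e.g.:
#   class SiteSpecificLoader(ProductLoader):
#       name_in = MapCompose(strip_dashes, ProductLoader.default_input_processor)
# (ProductLoader and default_input_processor are assumed project names)

print(strip_dashes('---PlasmaTV---'))  # -> 'PlasmaTV'
```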

Another case where extending Item Loaders can be very helpful is when you have
multiple source formats, for example XML and HTML. In the XML version you may
want to remove CDATA occurrences. Here’s an example of how to do it:
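A possible remove_cdata helper, sketched with a regular expression (the function name and exact behaviour are assumptions based on the description above; it unwraps CDATA sections and leaves other text untouched):

```python
import re

# Assumed helper: unwrap <![CDATA[ ... ]]> sections in extracted text.
CDATA_RE = re.compile(r'<!\[CDATA\[(.*?)\]\]>', re.DOTALL)

def remove_cdata(text):
    # replace each CDATA wrapper with its inner content
    return CDATA_RE.sub(r'\1', text)

print(remove_cdata('<![CDATA[Plasma TV]]>'))  # -> 'Plasma TV'
print(remove_cdata('no cdata here'))          # -> 'no cdata here'
```

In a loader subclass it would be chained in front of the parent's processor, e.g. `name_in = MapCompose(remove_cdata, ProductLoader.default_input_processor)`.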

As for output processors, it is more common to declare them in the field metadata,
as they usually depend only on the field and not on each specific site parsing
rule (as input processors do). See also:
Declaring Input and Output Processors.

There are many other possible ways to extend, inherit and override your Item
Loaders, and different Item Loaders hierarchies may fit better for different
projects. Scrapy only provides the mechanism; it doesn’t impose any specific
organization of your Loaders collection - that’s up to you and your project’s
needs.

Even though you can use any callable function as input and output processors,
Scrapy provides some commonly used processors, which are described below. Some
of them, like MapCompose (which is typically used as an input
processor), compose the output of several functions executed in order to
produce the final parsed value.

Returns the first non-null/non-empty value from the values received,
so it’s typically used as an output processor to single-valued fields.
It doesn’t receive any constructor arguments, nor does it accept Loader contexts.
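Its documented behaviour can be sketched like this (a re-implementation for illustration, not the actual itemloaders code):

```python
# Sketch of TakeFirst's documented behaviour: return the first value that
# is neither None nor an empty string. Not the real itemloaders code.

class TakeFirst:
    def __call__(self, values):
        for value in values:
            if value is not None and value != '':
                return value

proc = TakeFirst()
print(proc(['', 'one', 'two', 'three']))  # -> 'one'
```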

A processor which is constructed from the composition of the given
functions. This means that each input value of this processor is passed to
the first function, and the result of that function is passed to the second
function, and so on, until the last function returns the output value of
this processor.

By default, processing stops on None values. This behaviour can be changed by
passing the keyword argument stop_on_none=False.

Each function can optionally receive a loader_context parameter. For
those which do, this processor will pass the currently active Loader
context through that parameter.

The keyword arguments passed in the constructor are used as the default
Loader context values passed to each function call. However, the final
Loader context values passed to functions are overridden with the currently
active Loader context accessible through the ItemLoader.context
attribute.
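A simplified sketch of this chaining behaviour (loader context handling omitted; not the actual itemloaders implementation):

```python
# Sketch of Compose: the whole input value flows through each function in
# turn. Simplified (no loader_context support); not the real itemloaders code.

class Compose:
    def __init__(self, *functions, stop_on_none=True):
        self.functions = functions
        self.stop_on_none = stop_on_none

    def __call__(self, value):
        for func in self.functions:
            if value is None and self.stop_on_none:
                break  # stop processing on None, per the default behaviour
            value = func(value)
        return value

proc = Compose(lambda v: v[0], str.upper)
print(proc(['hello', 'world']))  # -> 'HELLO'
```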

A processor which is constructed from the composition of the given
functions, similar to the Compose processor. The difference with
this processor is the way internal results are passed among functions,
which is as follows:

The input value of this processor is iterated and the first function is
applied to each element. The results of these function calls (one for each element)
are concatenated to construct a new iterable, which is then used to apply the
second function, and so on, until the last function is applied to each
value of the list of values collected so far. The output values of the last
function are concatenated together to produce the output of this processor.

Each particular function can return a value or a list of values, which is
flattened with the list of values returned by the same function applied to
the other input values. The functions can also return None in which
case the output of that function is ignored for further processing over the
chain.

This processor provides a convenient way to compose functions that only
work with single values (instead of iterables). For this reason the
MapCompose processor is typically used as input processor, since
data is often extracted using the
extract() method of selectors, which returns a list of unicode strings.
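A simplified sketch of this map-and-flatten behaviour (loader context handling omitted; not the actual itemloaders implementation):

```python
# Sketch of MapCompose: each function is applied per element; None results
# are dropped and returned lists are flattened. Simplified (no
# loader_context support); not the real itemloaders code.

class MapCompose:
    def __init__(self, *functions):
        self.functions = functions

    def __call__(self, values):
        for func in self.functions:
            next_values = []
            for v in values:
                result = func(v)
                if result is None:
                    continue  # None removes the value from the chain
                if isinstance(result, list):
                    next_values.extend(result)  # flatten returned lists
                else:
                    next_values.append(result)
            values = next_values
        return list(values)

def filter_world(x):
    # drop the value 'world' by returning None
    return None if x == 'world' else x

proc = MapCompose(filter_world, str.upper)
print(proc(['hello', 'this', 'is', 'scrapy', 'world']))
# -> ['HELLO', 'THIS', 'IS', 'SCRAPY']
```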

Queries the value using the json path provided to the constructor and returns the output.
Requires jmespath (https://github.com/jmespath/jmespath.py) to run.
This processor takes only one input at a time.