
The SQLAlchemy Object Relational Mapper presents a method of associating
user-defined Python classes with database tables, and instances of those
classes (objects) with rows in their corresponding tables. It includes a
system that transparently synchronizes all changes in state between objects
and their related rows, called a unit of work, as well as a system
for expressing database queries in terms of the user defined classes and their
defined relationships between each other.

The ORM is in contrast to the SQLAlchemy Expression Language, upon which the
ORM is constructed. Whereas the SQL Expression Language, introduced in
SQL Expression Language Tutorial, presents a system of representing the primitive
constructs of the relational database directly without opinion, the ORM
presents a high level and abstracted pattern of usage, which itself is an
example of applied usage of the Expression Language.

While there is overlap among the usage patterns of the ORM and the Expression
Language, the similarities are more superficial than they may at first appear.
One approaches the structure and content of data from the perspective of a
user-defined domain model which is transparently
persisted and refreshed from its underlying storage model. The other
approaches it from the perspective of literal schema and SQL expression
representations which are explicitly composed into messages consumed
individually by the database.

A successful application may be constructed using the Object Relational Mapper
exclusively. In advanced situations, an application constructed with the ORM
may make occasional usage of the Expression Language directly in certain areas
where specific database interactions are required.

The following tutorial is in doctest format, meaning each >>> line
represents something you can type at a Python command prompt, and the
following text represents the expected return value.

The echo flag is a shortcut to setting up SQLAlchemy logging, which is
accomplished via Python’s standard logging module. With it enabled, we’ll
see all the generated SQL produced. If you are working through this tutorial
and want less output generated, set it to False. This tutorial will format
the SQL behind a popup window so it doesn’t get in our way; just click the
“SQL” links to see what’s being generated.

The return value of create_engine() is an instance of
Engine, and it represents the core interface to the
database, adapted through a dialect that handles the details
of the database and DBAPI in use. In this case the SQLite
dialect will interpret instructions to the Python built-in sqlite3
module.

The Engine has not actually tried to connect to the database yet; that happens
only the first time it is asked to perform a task against the database. We can illustrate
this by asking it to perform a simple SELECT statement:

As the Engine.execute() method is called, the Engine establishes a connection to the
SQLite database, which is then used to emit the SQL. The connection is then returned to an internal
connection pool where it will be reused on subsequent statement executions. While we illustrate direct usage of the
Engine here, this isn’t typically necessary when using the ORM, where the Engine,
once created, is used behind the scenes by the ORM as we’ll see shortly.
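The lazy-connecting behavior can be sketched as follows; a minimal example assuming an in-memory SQLite database, and using the modern connect()/text() calling style in place of the Engine.execute() method shown in this tutorial:

```python
from sqlalchemy import create_engine, text

# No connection is attempted here; the Engine connects lazily.
engine = create_engine('sqlite://')

# The first statement checks out a connection from the pool,
# emits the SQL, and returns the connection to the pool when done.
with engine.connect() as conn:
    rows = conn.execute(text("select 'hello world'")).fetchall()

print(rows)   # [('hello world',)]
```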

When using the ORM, the configurational process starts by describing the database
tables we’ll be dealing with, and then by defining our own classes which will
be mapped to those tables. In modern SQLAlchemy,
these two tasks are usually performed together,
using a system known as Declarative, which allows us to create
classes that include directives to describe the actual database table they will
be mapped to.

Classes mapped using the Declarative system are defined in terms of a base class which
maintains a catalog of classes and
tables relative to that base - this is known as the declarative base class. Our
application will usually have just one instance of this base in a commonly
imported module. We create the base class using the declarative_base()
function, as follows:

Now that we have a “base”, we can define any number of mapped classes in terms
of it. We will start with just a single table called users, which will store
records for the end-users using our application.
A new class called User will be the class to which we map this table. The
imports we’ll need to accomplish this include objects that represent the components
of our table, including the Column class which represents a database column,
as well as the Integer and String classes that
represent basic datatypes used in columns:

The above User class establishes details about the table being mapped, including the name of the table denoted
by the __tablename__ attribute, a set of columns id, name, fullname and password,
where the id column will also be the primary key of the table. While it’s certainly possible
for a database table to lack a primary key column (as is also the case with views, which can
also be mapped), the ORM requires at least one column to be denoted as a primary key
column in order to map a class to a particular table; multiple-column, i.e. composite, primary
keys are of course entirely feasible as well.

We define a constructor via __init__() and also a __repr__() method - both are optional. The
class of course can have any number of other methods and attributes as required by the application,
as it’s basically just a plain Python class. Inheriting from Base is also only a requirement
of the declarative configurational system, which itself is optional and relatively open ended; at its
core, the SQLAlchemy ORM only requires that a class be a so-called “new style class”, that is, it inherits
from object in Python 2, in order to be mapped. All classes in Python 3 are “new style” classes.
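Assembled as one runnable sketch (the modern sqlalchemy.orm location for declarative_base() is assumed), the mapping described above looks like:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

# the declarative base class; typically one instance per application
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    # at least one column must be denoted as a primary key
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

    def __repr__(self):
        return "<User('%s','%s','%s')>" % (
            self.name, self.fullname, self.password)

ed = User('ed', 'Ed Jones', 'edspassword')
print(ed)       # <User('ed','Ed Jones','edspassword')>
print(ed.id)    # None - not yet persisted
```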

The Non Opinionated Philosophy

In our User mapping example, it was required that we identify the name of the table
in use, as well as the names and characteristics of all columns which we care about,
including which column or columns
represent the primary key, as well as some basic information about the types in use.
SQLAlchemy never makes assumptions about these decisions - the developer must
always be explicit about specific conventions in use. However, that doesn’t mean the
task can’t be automated. While this tutorial will keep things explicit, developers are
encouraged to make use of helper functions as well as “Declarative Mixins” to
automate their tasks in large scale applications. The section Mixin and Custom Base Classes
introduces many of these techniques.

With our User class constructed via the Declarative system, we have defined information about
our table, known as table metadata, as well as a user-defined class which is linked to this
table, known as a mapped class. Declarative has provided for us a shorthand system for what in SQLAlchemy is
called a “Classical Mapping”, which specifies these two units separately and is discussed
in Classical Mappings. The table
is actually represented by a datastructure known as Table, and the mapping represented
by a Mapper object generated by a function called mapper(). Declarative performs both of
these steps for us, making available the
Table it has created via the __table__ attribute:

and while rarely needed, making available the Mapper object via the __mapper__ attribute:

>>> User.__mapper__
<Mapper at 0x...; User>

The Declarative base class also contains a catalog of all the Table objects
that have been defined called MetaData, available via the .metadata
attribute. In this example, we are defining
new tables that have yet to be created in our SQLite database, so one helpful feature
the MetaData object offers is the ability to issue CREATE TABLE statements
to the database for all tables that don’t yet exist. We illustrate this
by calling the MetaData.create_all() method, passing in our Engine
as a source of database connectivity. We will see that special commands are
first emitted to check for the presence of the users table, and following that
the actual CREATE TABLE statement:
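A minimal sketch of issuing CREATE TABLE via MetaData.create_all(), assuming an in-memory SQLite database:

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')

# checks for each table's presence first, then emits CREATE TABLE
# for those tables that don't yet exist
Base.metadata.create_all(engine)

print(inspect(engine).get_table_names())   # ['users']
```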

Users familiar with the syntax of CREATE TABLE may notice that the
VARCHAR columns were generated without a length; on SQLite and Postgresql,
this is a valid datatype, but on others, it’s not allowed. So if running
this tutorial on one of those databases, and you wish to use SQLAlchemy to
issue CREATE TABLE, a “length” may be provided to the String type as
below:

Column(String(50))

The length field on String, as well as similar precision/scale fields
available on Integer, Numeric, etc. are not referenced by
SQLAlchemy other than when creating tables.

Additionally, Firebird and Oracle require sequences to generate new
primary key identifiers, and SQLAlchemy doesn’t generate or assume these
without being instructed. For that, you use the Sequence construct:
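A sketch of a table definition carrying both String lengths and a Sequence, suitable for backends such as Oracle or Firebird with more stringent requirements (the sequence name user_id_seq is illustrative):

```python
from sqlalchemy import Column, Integer, Sequence, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    # backends that require a sequence will use it; others ignore it
    id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
    name = Column(String(50))
    fullname = Column(String(50))
    password = Column(String(12))
```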

We include this more verbose table definition separately
to highlight the difference between a minimal construct geared primarily
towards in-Python usage only, versus one that will be used to emit CREATE
TABLE statements on a particular set of backends with more stringent
requirements.

The id attribute, though not assigned by our __init__() method,
exists with a value of None on our User instance due to the id
column we declared in our mapping. By
default, the ORM creates class attributes for all columns present
in the table being mapped. These class attributes exist as
descriptors, and
define instrumentation for the mapped class. The
functionality of this instrumentation includes the ability to fire on change
events, track modifications, and to automatically load new data from the database when
needed.

Since we have not yet told SQLAlchemy to persist Ed Jones within the
database, its id is None. When we persist the object later, this attribute
will be populated with a newly generated value.

The default __init__() method

Note that in our User example we supplied an __init__() method,
which receives name, fullname and password as positional arguments.
The Declarative system supplies for us a default constructor if one is
not already present, which accepts keyword arguments of the same name
as that of the mapped attributes. Below we define User without
specifying a constructor:

We’re now ready to start talking to the database. The ORM’s “handle” to the
database is the Session. When we first set up
the application, at the same level as our create_engine()
statement, we define a Session class which
will serve as a factory for new Session
objects:

This custom-made Session class will create
new Session objects which are bound to our
database. Other transactional characteristics may be defined when calling
sessionmaker() as well; these are described in a later
chapter. Then, whenever you need to have a conversation with the database, you
instantiate a Session:

>>> session = Session()

The above Session is associated with our
SQLite-enabled Engine, but it hasn’t opened any connections yet. When it’s first
used, it retrieves a connection from a pool of connections maintained by the
Engine, and holds onto it until we commit all changes and/or close the
session object.
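A sketch of this setup; the Engine and the Session factory are typically module-level, configured once at application startup:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')

# factory configured once, at the same level as create_engine()
Session = sessionmaker(bind=engine)

# instantiate a Session whenever a conversation with the database begins
session = Session()
```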

Session Creational Patterns

The business of acquiring a Session has a good deal of variety based
on the variety of types of applications and frameworks out there.
Keep in mind the Session is just a workspace for your objects,
local to a particular database connection - if you think of
an application thread as a guest at a dinner party, the Session
is the guest’s plate and the objects it holds are the food
(and the database...the kitchen?)! Hints on
how Session is integrated into an application are at
Session Frequently Asked Questions.

At this point, we say that the instance is pending; no SQL has yet been issued
and the object is not yet represented by a row in the database. The
Session will issue the SQL to persist Ed Jones as soon as is needed, using a process known as a flush. If we
query the database for Ed Jones, all pending information will first be
flushed, and the query is issued immediately thereafter.

For example, below we create a new Query object
which loads instances of User. We “filter by” the name attribute of
ed, and indicate that we’d like only the first result in the full list of
rows. A User instance is returned which is equivalent to that which we’ve
added:

In fact, the Session has identified that the
row returned is the same row as one already represented within its
internal map of objects, so we actually got back the identical instance as
that which we just added:

>>> ed_user is our_user
True

The ORM concept at work here is known as an identity map
and ensures that
all operations upon a particular row within a
Session operate upon the same set of data.
Once an object with a particular primary key is present in the
Session, all SQL queries on that
Session will always return the same Python
object for that particular primary key; it also will raise an error if an
attempt is made to place a second, already-persisted object with the same
primary key within the session.
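A runnable sketch of the pending/flush/identity-map behavior described above, assuming a minimal User mapping against in-memory SQLite (modern import locations assumed):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

ed_user = User(name='ed', fullname='Ed Jones')
session.add(ed_user)            # pending: no SQL emitted yet

# querying autoflushes the pending INSERT, then runs the SELECT
our_user = session.query(User).filter_by(name='ed').first()

# identity map: one Python object per primary key per Session
print(our_user is ed_user)      # True
```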

commit() flushes the remaining changes to the
database, and commits the transaction. The connection resources referenced by
the session are now returned to the connection pool. Subsequent operations
with this session will occur in a new transaction, which will again
re-acquire connection resources when first needed.

If we look at Ed’s id attribute, which earlier was None, it now has a value:

After the Session inserts new rows in the
database, all newly generated identifiers and database-generated defaults
become available on the instance, either immediately or via
load-on-first-access. In this case, the entire row was re-loaded on access
because a new transaction was begun after we issued commit(). SQLAlchemy
by default refreshes data from a previous transaction the first time it’s
accessed within a new transaction, so that the most recent state is available.
The level of reloading is configurable as is described in Using the Session.

Session Object States

As our User object moved from being outside the Session, to
inside the Session without a primary key, to actually being
inserted, it moved through three of the four
available “object states” - transient, pending, and persistent.
Being aware of these states and what they mean is always a good idea -
be sure to read Quickie Intro to Object States for a quick overview.

A Query object is created using the
query() method on
Session. This function takes a variable
number of arguments, which can be any combination of classes and
class-instrumented descriptors. Below, we indicate a
Query which loads User instances. When
evaluated in an iterative context, the list of User objects present is
returned:

SELECT users.id AS users_id,
users.name AS users_name,
users.fullname AS users_fullname,
users.password AS users_password
FROM users ORDER BY users.id
()

ed Ed Jones
wendy Wendy Williams
mary Mary Contrary
fred Fred Flinstone

The Query also accepts ORM-instrumented
descriptors as arguments. Any time multiple class entities or column-based
entities are expressed as arguments to the
query() function, the return result
is expressed as tuples:

SELECT users.name AS users_name,
users.fullname AS users_fullname
FROM users
()

ed Ed Jones
wendy Wendy Williams
mary Mary Contrary
fred Fred Flinstone

The tuples returned by Query are named
tuples, supplied by the KeyedTuple class, and can be treated much like an
ordinary Python object. The names are
the same as the attribute’s name for an attribute, and the class name for a
class:

You can control the names of individual column expressions using the
label() construct, which is available from
any ColumnElement-derived object, as well as any class attribute which
is mapped to one (such as User.name):

The Query object is fully generative, meaning
that most method calls return a new Query
object upon which further criteria may be added. For example, to query for
users named “ed” with a full name of “Ed Jones”, you can call
filter() twice, which joins criteria using
AND:

SELECT users.id AS users_id,
users.name AS users_name,
users.fullname AS users_fullname,
users.password AS users_password
FROM users
WHERE users.name LIKE ? AND users.id = ? ORDER BY users.id
('%ed', 99)
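The generative, chained filter() style can be sketched as follows, assuming a minimal User mapping with a few rows loaded:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    User(name='ed', fullname='Ed Jones'),
    User(name='wendy', fullname='Wendy Williams'),
])
session.commit()

# each filter() call returns a new Query; criteria are joined with AND
names = [u.name for u in
         session.query(User).
         filter(User.name == 'ed').
         filter(User.fullname == 'Ed Jones')]
print(names)    # ['ed']
```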

SELECT users.id AS users_id,
users.name AS users_name,
users.fullname AS users_fullname,
users.password AS users_password
FROM users
WHERE id<? and name=? ORDER BY users.id
(224, 'fred')

<User('fred','Fred Flinstone','blah')>

To use an entirely string-based statement, use
from_statement(); just ensure that the
columns clause of the statement contains the column names normally used by the
mapper (below illustrated using an asterisk):

Query is constructed like the rest of SQLAlchemy, in that it tries
to always allow “falling back” to a less automated, lower level approach to things.
Accepting strings for all SQL fragments is a big part of that, so that
you can bypass the need to organize SQL constructs if you know specifically
what string output you’d like.
But when using literal strings, the Query no longer knows anything about
that part of the SQL construct being emitted, and has no ability to
transform it to adapt to new contexts.

For example, suppose we selected User objects and ordered by the name
column, using a string to indicate name:

SELECT users.id AS users_id, users.name AS users_name
FROM users ORDER BY name
()

[(1,u'ed'),(4,u'fred'),(3,u'mary'),(2,u'wendy')]

Perfectly fine. But suppose, before we got a hold of the Query,
some sophisticated transformations were applied to it, such as below
where we use from_self(), a particularly advanced
method, to retrieve pairs of user names with
different numbers of characters:

The Query now represents a select from a subquery, where
User is represented twice both inside and outside of the subquery.
Telling the Query to order by “name” doesn’t really give
us much guarantee which “name” it’s going to order on. In this
case it assumes “name” is against the outer “aliased” User construct:

SELECT anon_1.users_id AS anon_1_users_id,
anon_1.users_name AS anon_1_users_name,
users_1.name AS users_1_name
FROM (SELECT users.id AS users_id, users.name AS users_name
FROM users) AS anon_1, users AS users_1
WHERE anon_1.users_name < users_1.name
AND length(users_1.name) != length(anon_1.users_name)
ORDER BY name
()

Only if we use the SQL element directly, in this case User.name
or ua.name, do we give Query enough information to know
for sure which “name” we’d like to order on, where we can see we get different results
for each:

SELECT anon_1.users_id AS anon_1_users_id,
anon_1.users_name AS anon_1_users_name,
users_1.name AS users_1_name
FROM (SELECT users.id AS users_id, users.name AS users_name
FROM users) AS anon_1, users AS users_1
WHERE anon_1.users_name < users_1.name
AND length(users_1.name) != length(anon_1.users_name)
ORDER BY users_1.name
()

SELECT anon_1.users_id AS anon_1_users_id,
anon_1.users_name AS anon_1_users_name,
users_1.name AS users_1_name
FROM (SELECT users.id AS users_id, users.name AS users_name
FROM users) AS anon_1, users AS users_1
WHERE anon_1.users_name < users_1.name
AND length(users_1.name) != length(anon_1.users_name)
ORDER BY anon_1.users_name
()

SELECT count(*) AS count_1
FROM (SELECT users.id AS users_id,
users.name AS users_name,
users.fullname AS users_fullname,
users.password AS users_password
FROM users
WHERE users.name LIKE ?) AS anon_1
('%ed',)

2

The count() method is used to determine
how many rows the SQL statement would return. As the
generated SQL above shows, SQLAlchemy always places whatever it is we are
querying into a subquery, then counts the rows from that. In some cases
this can be reduced to a simpler SELECT count(*) FROM table; however,
modern versions of SQLAlchemy don’t try to guess when this is appropriate,
as the exact SQL can be emitted using more explicit means.

For situations where the “thing to be counted” needs
to be indicated specifically, we can specify the “count” function
directly using the expression func.count(), available from the
func construct. Below we
use it to return the count of each distinct user name:
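A sketch of counting each distinct user name with func.count(), grouped by name (a minimal mapping and sample data are assumed):

```python
from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([User(name='ed'), User(name='ed'), User(name='wendy')])
session.commit()

# SELECT count(users.name), users.name FROM users GROUP BY users.name
counts = session.query(func.count(User.name), User.name).\
    group_by(User.name).all()
print(counts)
```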

Let’s consider how a second table, related to User, can be mapped and
queried. Users in our system
can store any number of email addresses associated with their username. This
implies a basic one to many association from the users to a new
table which stores email addresses, which we will call addresses. Using
declarative, we define this table along with its mapped class, Address:

The above class introduces the ForeignKey construct, which is a
directive applied to Column that indicates that values in this
column should be constrained to be values present in the named remote
column. This is a core feature of relational databases, and is the “glue” that
transforms an otherwise unconnected collection of tables to have rich
overlapping relationships. The ForeignKey above expresses that
values in the addresses.user_id column should be constrained to
those values in the users.id column, i.e. its primary key.

A second directive, known as relationship(),
tells the ORM that the Address class itself should be linked
to the User class, using the attribute Address.user.
relationship() uses the foreign key
relationships between the two tables to determine the nature of
this linkage, determining that Address.user will be many-to-one.
A subdirective of relationship() called backref() is
placed inside of relationship(), providing details about
the relationship as expressed in reverse, that of a collection of Address
objects on User referenced by User.addresses. The reverse
side of a many-to-one relationship is always one-to-many.
A full catalog of available relationship() configurations
is at Basic Relational Patterns.

The two complementing relationships Address.user and User.addresses
are referred to as a bidirectional relationship; this is a key
feature of the SQLAlchemy ORM. The section Linking Relationships with Backref
discusses the “backref” feature in detail.
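A sketch of the bidirectional mapping described above; appending on one side is immediately visible on the other, purely in Python:

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import backref, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String, nullable=False)

    # values in this column are constrained to values present in users.id
    user_id = Column(Integer, ForeignKey('users.id'))

    # many-to-one Address.user, with a one-to-many User.addresses backref
    user = relationship("User", backref=backref('addresses', order_by=id))

jack = User(name='jack')
a1 = Address(email_address='jack@google.com')
jack.addresses.append(a1)

print(a1.user is jack)   # True - no SQL involved
```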

Arguments to relationship() which concern the remote class
can be specified using strings, assuming the Declarative system is in
use. Once all mappings are complete, these strings are evaluated
as Python expressions in order to produce the actual argument, in the
above case the User class. The names which are allowed during
this evaluation include, among other things, the names of all classes
which have been created in terms of the declared base. Below we illustrate creation
of the same “addresses/user” bidirectional relationship in terms of User instead of
Address:

See the docstring for relationship() for more detail on argument style.

Did you know?

a FOREIGN KEY constraint in most (though not all) relational databases can
only link to a primary key column, or a column that has a UNIQUE constraint.

a FOREIGN KEY constraint that refers to a multiple column primary key, and itself
has multiple columns, is known as a “composite foreign key”. It can also
reference a subset of those columns.

FOREIGN KEY columns can automatically update themselves, in response to a change
in the referenced column or row. This is known as the CASCADE referential action,
and is a built in function of the relational database.

FOREIGN KEY can refer to its own table. This is referred to as a “self-referential”
foreign key.

Now when we create a User, a blank addresses collection will be
present. Various collection types, such as sets and dictionaries, are possible
here (see Customizing Collection Access for details), but by
default, the collection is a Python list.

>>> jack = User('jack', 'Jack Bean', 'gjffdd')
>>> jack.addresses
[]

We are free to add Address objects on our User object. In this case we
just assign a full list directly:

When using a bidirectional relationship, elements added in one direction
automatically become visible in the other direction. This behavior occurs
based on attribute on-change events and is evaluated in Python, without
using any SQL:

Let’s add and commit Jack Bean to the database. jack as well
as the two Address members in the corresponding addresses
collection are both added to the session at once, using a process
known as cascading:

SELECT addresses.id AS addresses_id,
addresses.email_address AS
addresses_email_address,
addresses.user_id AS addresses_user_id
FROM addresses
WHERE ? = addresses.user_id ORDER BY addresses.id
(5,)

[<Address('jack@google.com')>,<Address('j25@yahoo.com')>]

When we accessed the addresses collection, SQL was suddenly issued. This
is an example of a lazy loading relationship. The addresses collection
is now loaded and behaves just like an ordinary list. We’ll cover ways
to optimize the loading of this collection in a bit.
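A sketch of the cascading add and the subsequent lazy load, assuming a User/Address mapping like the one above:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (backref, declarative_base, relationship,
                            sessionmaker)

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship("User", backref=backref('addresses', order_by=id))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

jack = User(name='jack')
jack.addresses = [Address(email_address='jack@google.com'),
                  Address(email_address='j25@yahoo.com')]

session.add(jack)        # the two Address objects cascade in with jack
session.commit()

jack = session.query(User).filter_by(name='jack').one()

# first access of .addresses emits the lazy-load SELECT
emails = [a.email_address for a in jack.addresses]
print(emails)   # ['jack@google.com', 'j25@yahoo.com']
```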

Now that we have two tables, we can show some more features of Query,
specifically how to create queries that deal with both tables at the same time.
The Wikipedia page on SQL JOIN offers a good introduction to
join techniques, several of which we’ll illustrate here.

To construct a simple implicit join between User and Address,
we can use Query.filter() to equate their related columns together.
Below we load the User and Address entities at once using this method:

Query.join() knows how to join between User
and Address because there’s only one foreign key between them. If there
were no foreign keys, or several, Query.join()
works better when one of the following forms is used:

query.join(Address, User.id==Address.user_id)  # explicit condition
query.join(User.addresses)                     # specify relationship from left to right
query.join(Address, User.addresses)            # same, with explicit target
query.join('addresses')                        # same, using a string

As you would expect, the same idea is used for “outer” joins, using the
outerjoin() function:

query.outerjoin(User.addresses)  # LEFT OUTER JOIN

The reference documentation for join() contains detailed information
and examples of the calling styles accepted by this method; join()
is an important method at the center of usage for any SQL-fluent application.
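A sketch of joining along the User.addresses relationship and filtering on the joined entity (minimal mapping and data assumed):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address", backref='user')

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(User(name='jack',
                 addresses=[Address(email_address='jack@google.com')]))
session.add(User(name='ed'))
session.commit()

# join along the relationship, then filter on the Address entity
jacks = session.query(User).\
    join(User.addresses).\
    filter(Address.email_address == 'jack@google.com').all()
print([u.name for u in jacks])   # ['jack']
```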

When querying across multiple tables, if the same table needs to be referenced
more than once, SQL typically requires that the table be aliased with
another name, so that it can be distinguished against other occurrences of
that table. The Query supports this most
explicitly using the aliased construct. Below we join to the Address
entity twice, to locate a user who has two distinct email addresses at the
same time:
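A sketch of that query using two aliases of Address; explicit ON conditions are used here so each join is unambiguous (minimal mapping and data assumed):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (aliased, declarative_base, relationship,
                            sessionmaker)

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address")

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(User(name='jack', addresses=[
    Address(email_address='jack@google.com'),
    Address(email_address='j25@yahoo.com')]))
session.commit()

# two distinct aliases of the addresses table, joined explicitly
adalias1 = aliased(Address)
adalias2 = aliased(Address)
username = session.query(User.name).\
    join(adalias1, User.id == adalias1.user_id).\
    join(adalias2, User.id == adalias2.user_id).\
    filter(adalias1.email_address == 'jack@google.com').\
    filter(adalias2.email_address == 'j25@yahoo.com').\
    one().name
print(username)   # 'jack'
```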

The Query is suitable for generating statements
which can be used as subqueries. Suppose we wanted to load User objects
along with a count of how many Address records each user has. The best way
to generate SQL like this is to get the count of addresses grouped by user
ids, and JOIN to the parent. In this case we use a LEFT OUTER JOIN so that we
get rows back for those users who don’t have any addresses, e.g.:

SELECT users.*, adr_count.address_count FROM users LEFT OUTER JOIN
(SELECT user_id, count(*) AS address_count
FROM addresses GROUP BY user_id) AS adr_count
ON users.id=adr_count.user_id

Using the Query, we build a statement like this
from the inside out. The statement accessor returns a SQL expression
representing the statement generated by a particular
Query - this is an instance of a select()
construct, which are described in SQL Expression Language Tutorial:

The func keyword generates SQL functions, and the subquery() method on
Query produces a SQL expression construct
representing a SELECT statement embedded within an alias (it’s actually
shorthand for query.statement.alias()).

Once we have our statement, it behaves like a
Table construct, such as the one we created for
users at the start of this tutorial. The columns on the statement are
accessible through an attribute called c:

SELECT users.id AS users_id,
users.name AS users_name,
users.fullname AS users_fullname,
users.password AS users_password,
anon_1.address_count AS anon_1_address_count
FROM users LEFT OUTER JOIN
(SELECT addresses.user_id AS user_id, count(?) AS address_count
FROM addresses GROUP BY addresses.user_id) AS anon_1
ON users.id = anon_1.user_id
ORDER BY users.id
('*',)

Above, we just selected a result that included a column from a subquery. What
if we wanted our subquery to map to an entity? For this we use aliased()
to associate an “alias” of a mapped class to a subquery:

The EXISTS keyword in SQL is a boolean operator which returns True if the
given expression contains any rows. It may be used in many scenarios in place
of joins, and is also useful for locating rows which do not have a
corresponding row in a related table.

SELECT addresses.id AS addresses_id,
addresses.email_address AS addresses_email_address,
addresses.user_id AS addresses_user_id
FROM addresses
WHERE NOT (EXISTS (SELECT 1
FROM users
WHERE users.id = addresses.user_id AND users.name = ?))
('jack',)
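A NOT EXISTS query of this shape can be expressed with the relationship-aware has() operator; a sketch with minimal data:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String)
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship("User")

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
jack = User(name='jack')
session.add(Address(email_address='jack@google.com', user=jack))
session.commit()

# addresses whose user is NOT named 'jack':
# renders a NOT EXISTS correlated subquery against users
others = session.query(Address).\
    filter(~Address.user.has(User.name == 'jack')).all()
print(others)   # []
```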

Recall earlier that we illustrated a lazy loading operation, when
we accessed the User.addresses collection of a User and SQL
was emitted. If we want to reduce the number of queries (dramatically, in many cases),
we can apply an eager load to the query operation. SQLAlchemy
offers three types of eager loading, two of which are automatic, and a third
which involves custom criterion. All three are usually invoked via functions known
as query options which give additional instructions to the Query on how
we would like various attributes to be loaded, via the Query.options() method.

In this case we’d like to indicate that User.addresses should load eagerly.
A good choice for loading a set of objects as well as their related collections
is the orm.subqueryload() option, which emits a second SELECT statement
that fully loads the collections associated with the results just loaded.
The name “subquery” originates from the fact that the SELECT statement
constructed directly via the Query is re-used, embedded as a subquery
into a SELECT against the related table. This is a little elaborate but
very easy to use:
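A sketch of applying the subqueryload() option (minimal mapping and data assumed); the collection arrives fully loaded in the same query operation:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (declarative_base, relationship,
                            sessionmaker, subqueryload)

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address")

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(User(name='jack', addresses=[
    Address(email_address='jack@google.com'),
    Address(email_address='j25@yahoo.com')]))
session.commit()

# one SELECT for the users, a second SELECT for all related addresses
jack = session.query(User).\
    options(subqueryload(User.addresses)).\
    filter_by(name='jack').one()
print(len(jack.addresses))   # 2
```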

The other automatic eager loading function is more well known and is called
orm.joinedload(). This style of loading emits a JOIN, by default
a LEFT OUTER JOIN, so that the lead object as well as the related object
or collection is loaded in one step. We illustrate loading the same
addresses collection in this way - note that even though the User.addresses
collection on jack is actually populated right now, the query
will emit the extra join regardless:

SELECT users.id AS users_id,
users.name AS users_name,
users.fullname AS users_fullname,
users.password AS users_password,
addresses_1.id AS addresses_1_id,
addresses_1.email_address AS addresses_1_email_address,
addresses_1.user_id AS addresses_1_user_id
FROM users
LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
WHERE users.name = ? ORDER BY addresses_1.id
('jack',)

Note that even though the OUTER JOIN resulted in two rows, we still only got
one instance of User back. This is because Query applies a “uniquing”
strategy, based on object identity, to the returned entities. This is specifically
so that joined eager loading can be applied without affecting the query results.

While joinedload() has been around for a long time, subqueryload()
is a newer form of eager loading. subqueryload() tends to be more appropriate
for loading related collections while joinedload() tends to be better suited
for many-to-one relationships, due to the fact that only one row is loaded
for both the lead and the related object.

joinedload() is not a replacement for join()

The join created by joinedload() is anonymously aliased such that
it does not affect the query results. A Query.order_by()
or Query.filter() call cannot reference these aliased
tables - so-called “user space” joins are constructed using
Query.join(). The rationale for this is that joinedload() is only
applied in order to affect how related objects or collections are loaded
as an optimizing detail - it can be added or removed with no impact
on actual results. See the section The Zen of Eager Loading for
a detailed description of how this is used.

A third style of eager loading is when we are constructing a JOIN explicitly in
order to locate the primary rows, and would like to additionally apply the extra
table to a related object or collection on the primary object. This feature
is supplied via the orm.contains_eager() function, and is most
typically useful for pre-loading the many-to-one object on a query that needs
to filter on that same object. Below we illustrate loading an Address
row as well as the related User object, filtering on the User named
“jack” and using orm.contains_eager() to apply the “user” columns to the Address.user
attribute:
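That pattern might be sketched as follows; the model and data setup here are illustrative assumptions, while the query itself shows the contains_eager() technique the paragraph describes:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (contains_eager, declarative_base, relationship,
                            sessionmaker)

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship('Address', backref='user')

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

jack = User(name='jack')
jack.addresses = [Address(email_address='jack@google.com')]
session.add(jack)
session.commit()
session.expire_all()  # force a fresh load from the database

# the explicit JOIN both filters on User and, via contains_eager(),
# populates Address.user from the same set of columns
addr = (session.query(Address)
        .join(Address.user)
        .filter(User.name == 'jack')
        .options(contains_eager(Address.user))
        .one())
print(addr.user.name)  # jack
```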

SELECT count(*) AS count_1
FROM (SELECT addresses.id AS addresses_id,
addresses.email_address AS addresses_email_address,
addresses.user_id AS addresses_user_id
FROM addresses
WHERE addresses.email_address IN (?, ?)) AS anon_1
('jack@google.com', 'j25@yahoo.com')

2

Uh oh, they’re still there! Analyzing the flush SQL, we can see that the
user_id column of each address was set to NULL, but the rows weren’t
deleted. SQLAlchemy doesn’t assume that deletes cascade, you have to tell it
to do so.
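Telling SQLAlchemy to cascade deletes is done on the relationship() itself. A self-contained sketch of the configuration (the setup code here is an illustrative assumption; the cascade string is the standard setting):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # cascade: deleting a User deletes its Addresses, and removing an
    # Address from the collection deletes it as an "orphan"
    addresses = relationship('Address', backref='user',
                             cascade='all, delete, delete-orphan')

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

jack = User(name='jack')
jack.addresses = [Address(email_address='jack@google.com'),
                  Address(email_address='j25@yahoo.com')]
session.add(jack)
session.commit()

del jack.addresses[1]   # orphaned Address row is deleted on flush
session.delete(jack)    # deleting jack deletes the remaining Address
session.commit()
print(session.query(Address).count())  # 0
```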

We will configure cascade options on the User.addresses relationship
to change the behavior. While SQLAlchemy allows you to add new attributes and
relationships to mappings at any point in time, in this case the existing
relationship needs to be removed, so we need to tear down the mappings
completely and start again - we’ll close the Session:

# only one address remains
>>> session.query(Address).filter(
...     Address.email_address.in_(['jack@google.com', 'j25@yahoo.com'])
... ).count()

DELETE FROM addresses WHERE addresses.id = ?
(2,)
SELECT count(*) AS count_1
FROM (SELECT addresses.id AS addresses_id,
addresses.email_address AS addresses_email_address,
addresses.user_id AS addresses_user_id
FROM addresses
WHERE addresses.email_address IN (?, ?)) AS anon_1
('jack@google.com', 'j25@yahoo.com')

1

Deleting Jack will delete both Jack and the remaining Address associated
with the user:

SELECT count(*) AS count_1
FROM (SELECT addresses.id AS addresses_id,
addresses.email_address AS addresses_email_address,
addresses.user_id AS addresses_user_id
FROM addresses
WHERE addresses.email_address IN (?, ?)) AS anon_1
('jack@google.com', 'j25@yahoo.com')

0

More on Cascades

Further detail on configuration of cascades is at Cascades.
The cascade functionality can also integrate smoothly with
the ON DELETE CASCADE functionality of the relational database.
See Using Passive Deletes for details.

We’re moving into the bonus round here, but let’s show off a many-to-many
relationship. We’ll sneak in some other features too, just to take a tour.
We’ll make our application a blog application, where users can write
BlogPost items, which have Keyword items associated with them.

For a plain many-to-many, we need to create an un-mapped Table construct
to serve as the association table. This looks like the following:
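A sketch of such an association table, with table and column names mirroring the tutorial's blog example (the surrounding Base setup is assumed for illustration):

```python
from sqlalchemy import Column, ForeignKey, Integer, Table
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# plain association table: only the two foreign key columns,
# no mapped class of its own
post_keywords = Table(
    'post_keywords', Base.metadata,
    Column('post_id', Integer, ForeignKey('posts.id'), primary_key=True),
    Column('keyword_id', Integer, ForeignKey('keywords.id'), primary_key=True),
)
```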

Above, we can see declaring a Table directly is a little different
than declaring a mapped class. Table is a constructor function, so
each individual Column argument is separated by a comma. The
Column object is also given its name explicitly, rather than it being
taken from an assigned attribute name.

Next we define BlogPost and Keyword, with a relationship() linked
via the post_keywords table:
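The mapping might be sketched as below; the column choices and setup code are illustrative assumptions, while the secondary= argument is the defining piece:

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, Table, Text,
                        create_engine)
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

post_keywords = Table(
    'post_keywords', Base.metadata,
    Column('post_id', Integer, ForeignKey('posts.id'), primary_key=True),
    Column('keyword_id', Integer, ForeignKey('keywords.id'), primary_key=True))

class BlogPost(Base):
    __tablename__ = 'posts'
    id = Column(Integer, primary_key=True)
    headline = Column(String(255), nullable=False)
    body = Column(Text)
    # many-to-many: secondary= points at the association Table
    keywords = relationship('Keyword', secondary=post_keywords,
                            backref='posts')

class Keyword(Base):
    __tablename__ = 'keywords'
    id = Column(Integer, primary_key=True)
    keyword = Column(String(50), nullable=False, unique=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

post = BlogPost(headline="many to many", body="...")
post.keywords.append(Keyword(keyword='sqlalchemy'))
session.add(post)
session.commit()

loaded = session.query(BlogPost).one()
print([k.keyword for k in loaded.keywords])  # ['sqlalchemy']
```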

Above, the many-to-many relationship is BlogPost.keywords. The defining
feature of a many-to-many relationship is the secondary keyword argument
which references a Table object representing the
association table. This table only contains columns which reference the two
sides of the relationship; if it has any other columns, such as its own
primary key, or foreign keys to other tables, SQLAlchemy requires a different
usage pattern called the “association object”, described at
Association Object.

We would also like our BlogPost class to have an author field. We will
add this as another bidirectional relationship, except one issue we’ll have is
that a single user might have lots of blog posts. When we access
User.posts, we’d like to be able to filter results further so as not to
load the entire collection. For this we use a setting accepted by
relationship() called lazy='dynamic', which
configures an alternate loader strategy on the attribute. To use it on the
“reverse” side of a relationship(), we use the
backref() function:
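A self-contained sketch of that configuration (the simplified model and data here are assumptions for illustration; the backref(..., lazy='dynamic') call is the technique being shown):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (backref, declarative_base, relationship,
                            sessionmaker)

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class BlogPost(Base):
    __tablename__ = 'posts'
    id = Column(Integer, primary_key=True)
    headline = Column(String(255))
    user_id = Column(Integer, ForeignKey('users.id'))
    # the 'dynamic' backref makes User.posts return a Query
    # rather than loading the full collection
    author = relationship(User, backref=backref('posts', lazy='dynamic'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

wendy = User(name='wendy')
session.add(wendy)
session.add_all([BlogPost(headline="Wendy's Blog Post", author=wendy),
                 BlogPost(headline="Another Post", author=wendy)])
session.commit()

# filter the collection further without loading all of it
posts = wendy.posts.filter(BlogPost.headline.contains('Wendy'))
print(posts.count())  # 1
```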