This guide introduces what’s new in SQLAlchemy version 0.8,
and also documents changes which affect users migrating
their applications from the 0.7 series of SQLAlchemy to 0.8.

SQLAlchemy releases are closing in on 1.0, and each new
version since 0.5 features fewer major usage changes. Most
applications that are settled into modern 0.7 patterns
should be movable to 0.8 with no changes. Applications that
use 0.6 and even 0.5 patterns should be directly migratable
to 0.8 as well, though larger applications may want to test
with each interim version.

SQLAlchemy 0.8 will target Python 2.5 and forward;
compatibility for Python 2.4 is being dropped.

The internals will be able to make use of Python ternaries
(that is, x if y else z), which will improve things
versus the usage of y and x or z, which has naturally
been the source of some bugs, as well as context managers
(that is, with:) and perhaps in some cases
try:/except:/else: blocks, which will help with code
readability.

SQLAlchemy will eventually drop 2.5 support as well - when
2.6 is reached as the baseline, SQLAlchemy will move to use
2.6/3.3 in-place compatibility, removing the usage of the
2to3 tool and maintaining a source base that works with
Python 2 and 3 at the same time.

0.8 features a much improved and capable system regarding
how relationship() determines how to join between two
entities. The new system includes these features:

The primaryjoin argument is no longer needed when
constructing a relationship() against a class that
has multiple foreign key paths to the target. Only the
foreign_keys argument is needed to specify those
columns which should be included:
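A minimal sketch of the kind of mapping this makes possible - the self-referential, composite foreign key Folder mapping discussed next (column and constraint names are illustrative). Note that no primaryjoin is present:

```python
from sqlalchemy import Column, ForeignKeyConstraint, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Folder(Base):
    __tablename__ = 'folder'
    __table_args__ = (
        ForeignKeyConstraint(
            ['account_id', 'parent_id'],
            ['folder.account_id', 'folder.folder_id']),
    )

    account_id = Column(Integer, primary_key=True)
    folder_id = Column(Integer, primary_key=True)
    parent_id = Column(Integer)
    name = Column(String)

    # no primaryjoin is needed; the remote_side hint alone
    # disambiguates the self-referential direction
    parent_folder = relationship("Folder",
                        backref="child_folders",
                        remote_side=[account_id, folder_id])
```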

Above, the Folder refers to its parent Folder
joining from account_id to itself, and parent_id
to folder_id. When SQLAlchemy constructs an auto-join,
it can no longer assume all columns on the “remote”
side are aliased and all columns on the “local” side are
not - the account_id column is on both sides. So
the internal relationship mechanics were totally rewritten
to support an entirely different system whereby two copies
of account_id are generated, each containing different
annotations to determine their role within the
statement. Note the join condition within a basic eager
load:

SELECT
folder.account_id AS folder_account_id,
folder.folder_id AS folder_folder_id,
folder.parent_id AS folder_parent_id,
folder.name AS folder_name,
folder_1.account_id AS folder_1_account_id,
folder_1.folder_id AS folder_1_folder_id,
folder_1.parent_id AS folder_1_parent_id,
folder_1.name AS folder_1_name
FROM folder
LEFT OUTER JOIN folder AS folder_1
ON
folder_1.account_id = folder.account_id
AND folder.folder_id = folder_1.parent_id
WHERE folder.folder_id = ? AND folder.account_id = ?

Previously difficult custom join conditions, like those involving
functions and/or CASTing of types, will now function as
expected in most cases:
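For instance, a condition that must CAST one side of the join - a sketch using Postgresql's INET type (the HostEntry mapping here is illustrative):

```python
from sqlalchemy import Column, Integer, String, cast
from sqlalchemy.dialects.postgresql import INET
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class HostEntry(Base):
    __tablename__ = 'host_entry'

    id = Column(Integer, primary_key=True)
    ip_address = Column(INET)
    content = Column(String(50))

    # relationship() using explicit foreign_keys, remote_side;
    # the CAST in the join condition now works as expected
    parent_host = relationship("HostEntry",
                        primaryjoin=ip_address == cast(content, INET),
                        foreign_keys=content,
                        remote_side=ip_address)
```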

The new relationship() mechanics make use of a
SQLAlchemy concept known as annotations. These annotations
are also available to application code explicitly via
the foreign() and remote() functions, either
as a means to improve readability for advanced configurations
or to directly inject an exact configuration, bypassing
the usual join-inspection heuristics:
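A sketch of the same illustrative HostEntry mapping, with the join condition spelled out inline using foreign() and remote() instead of the foreign_keys / remote_side arguments:

```python
from sqlalchemy import Column, Integer, String, cast
from sqlalchemy.dialects.postgresql import INET
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import foreign, relationship, remote

Base = declarative_base()

class HostEntry(Base):
    __tablename__ = 'host_entry'

    id = Column(Integer, primary_key=True)
    ip_address = Column(INET)
    content = Column(String(50))

    # remote() and foreign() annotate the join inline,
    # bypassing the join-inspection heuristics entirely
    parent_host = relationship("HostEntry",
        primaryjoin=remote(ip_address) == cast(foreign(content), INET))
```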

Lots of SQLAlchemy users are writing systems that require
the ability to inspect the attributes of a mapped class,
including being able to get at the primary key columns,
object relationships, plain attributes, and so forth,
typically for the purpose of building data-marshalling
systems, like JSON/XML conversion schemes and of course form
libraries galore.

The Table and Column model were the original
inspection points, and have a well-documented
system. While SQLAlchemy ORM models are also fully
introspectable, this was never a fully stable and
supported feature, and users tended to not have a clear idea
how to get at this information.

0.8 now provides a consistent, stable and fully
documented API for this purpose, including an inspection
system which works on mapped classes, instances, attributes,
and other Core and ORM constructs. The entrypoint to this
system is the core-level inspect() function.
In most cases, the object being inspected
is one already part of SQLAlchemy’s system,
such as Mapper, InstanceState,
Inspector. In some cases, new objects have been
added with the job of providing the inspection API in
certain contexts, such as AliasedInsp and
AttributeState.
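A brief sketch of the entrypoint (the User class below is an illustrative mapping):

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)

# inspect() on the class returns the Mapper
m = inspect(User)
print(m.primary_key)         # the primary key Column objects
print(list(m.column_attrs))  # the plain column-mapped attributes

# inspect() on an instance returns its InstanceState
u1 = User(name='ed')
print(inspect(u1).transient)  # True - not yet in any Session
```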

The Query.with_polymorphic() method allows the user to
specify which tables should be present when querying against
a joined-table entity. Unfortunately the method is awkward:
it only applies to the first entity in the list, and it has
surprising behaviors both in usage as well as
within the internals. A new enhancement to the
aliased() construct has been added called
with_polymorphic() which allows any entity to be
“aliased” into a “polymorphic” version of itself, freely
usable anywhere:
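A sketch of the new function, assuming an illustrative Person/Engineer/Manager joined-inheritance hierarchy:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, or_
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, with_polymorphic

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    type = Column(String(30))
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'person'}

class Engineer(Person):
    __tablename__ = 'engineer'
    id = Column(Integer, ForeignKey('person.id'), primary_key=True)
    language = Column(String)
    __mapper_args__ = {'polymorphic_identity': 'engineer'}

class Manager(Person):
    __tablename__ = 'manager'
    id = Column(Integer, ForeignKey('person.id'), primary_key=True)
    hair = Column(String)
    __mapper_args__ = {'polymorphic_identity': 'manager'}

# a "polymorphic" alias of Person that includes the Engineer
# and Manager tables, usable anywhere a regular alias would be
palias = with_polymorphic(Person, [Engineer, Manager])

session = Session()
q = session.query(palias).filter(
        or_(palias.Engineer.language == 'java',
            palias.Manager.hair == 'pointy'))
print(q)  # renders person LEFT OUTER JOINed to engineer and manager
```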

Mapper and instance events can now be associated with an unmapped
superclass, where those events will be propagated to subclasses
as those subclasses are mapped. The propagate=True flag
should be used. This feature allows events to be associated
with a declarative base class:

from sqlalchemy import event
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

@event.listens_for(Base, "load", propagate=True)
def on_load(target, context):
    print "New instance loaded:", target

# on_load() will be applied to SomeClass
class SomeClass(Base):
    __tablename__ = 'sometable'

    # ...

A key feature of Declarative is the ability to refer
to other mapped classes using their string name. The
registry of class names is now sensitive to the owning
module and package of a given class. The classes
can be referred to via dotted name in expressions:

The “deferred reflection” example has been moved to a
supported feature within Declarative. This feature allows
the construction of declarative mapped classes with only
placeholder Table metadata, until a prepare() step
is called, given an Engine with which to reflect fully
all tables and establish actual mappings. The system
supports overriding of columns, single and joined
inheritance, as well as distinct bases-per-engine. A full
declarative configuration can now be created against an
existing table that is assembled upon engine creation time
in one step:
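A minimal sketch of the pattern (table and column names illustrative), reflecting from an in-memory SQLite database:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, text
from sqlalchemy.ext.declarative import DeferredReflection, declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base(cls=DeferredReflection)

class Foo(Base):
    __tablename__ = 'foo'
    bars = relationship("Bar")

class Bar(Base):
    __tablename__ = 'bar'
    # override the reflected column in order to supply a
    # ForeignKey the database itself may not report
    foo_id = Column(Integer, ForeignKey('foo.id'))

e = create_engine("sqlite://")
with e.begin() as conn:
    conn.execute(text("CREATE TABLE foo (id INTEGER PRIMARY KEY)"))
    conn.execute(text(
        "CREATE TABLE bar (id INTEGER PRIMARY KEY, foo_id INTEGER)"))

# reflect all Table metadata and complete the mappings in one step
Base.prepare(e)
```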

While the SQL expressions used with Query.filter(),
such as User.id==5, have always been compatible for
use with core constructs such as select(), the mapped
class itself would not be recognized when passed to select(),
Select.select_from(), or Select.correlate().
A new SQL registration system allows a mapped class to be
accepted as a FROM clause within the core:

from sqlalchemy import select

stmt = select([User]).where(User.id == 5)

Above, the mapped User class will expand into
the Table to which User is mapped.

In particular, updates to joined-inheritance
entities are supported, provided the target of the UPDATE is local to the
table being filtered on, or if the parent and child tables
are mixed, they are joined explicitly in the query. Below,
given Engineer as a joined subclass of Person:
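A sketch against an illustrative mapping - the criteria join person to the engineer table explicitly, and calling update() on this query then emits the joined UPDATE:

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Engineer(Person):
    __tablename__ = 'engineer'
    id = Column(Integer, ForeignKey('person.id'), primary_key=True)
    engineer_info = Column(String)

session = Session()

# the UPDATE targets the engineer table; the parent person
# table is joined in explicitly via the criteria
q = session.query(Engineer).\
        filter(Engineer.id == Person.id).\
        filter(Person.name == 'dilbert')

# q.update({"engineer_info": "java"}) then emits an UPDATE of
# engineer with person joined into the WHERE criteria
print(q)
```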

A behavioral change that should improve efficiency for those
users using SAVEPOINT via Session.begin_nested() - upon
rollback(), only those objects that were made dirty
since the last flush will be expired, the rest of the
Session remains intact. This is because a ROLLBACK to a
SAVEPOINT does not terminate the containing transaction’s
isolation, so no expiry is needed except for those changes
that were not flushed in the current transaction.

The Core has to date never had any system of adding support
for new SQL operators to Column and other expression
constructs, other than the ColumnOperators.op() method
which is “just enough” to make things work. There has also
never been any system in place for Core which allows the
behavior of existing operators to be overridden. Up until
now, the only way operators could be flexibly redefined was
in the ORM layer, using column_property() given a
comparator_factory argument. Third party libraries
like GeoAlchemy therefore were forced to be ORM-centric and
rely upon an array of hacks to apply new operations as well
as to get them to propagate correctly.

The new operator system in Core adds the one hook that’s
been missing all along, which is to associate new and
overridden operators with types. After all, it's
not really a column, CAST operator, or SQL function that
drives what kinds of operations are present - it's the
type of the expression. The implementation details are
minimal - only a few extra methods are added to the core
ColumnElement type so that it consults its
TypeEngine object for an optional set of operators.
New or revised operations can be associated with any type,
either via subclassing of an existing type, by using
TypeDecorator, or “globally across-the-board” by
attaching a new TypeEngine.Comparator object to an existing type
class.
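For example, to redefine the addition operator on an Integer subclass (the "goofy" operator is made up for illustration):

```python
from sqlalchemy import Column, Integer, MetaData, Table

class MyInt(Integer):
    class comparator_factory(Integer.Comparator):
        def __add__(self, other):
            # redefine "+" to produce a custom "goofy" operator
            return self.op("goofy")(other)

sometable = Table('sometable', MetaData(),
                  Column('data', MyInt))

print(sometable.c.data + 5)  # renders: sometable.data goofy :data_1
```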

New features which have come from this immediately include
support for Postgresql’s HSTORE type, as well as new
operations associated with Postgresql’s ARRAY
type. It also paves the way for existing types to acquire
lots more operators that are specific to those types, such
as more string, integer and date operators.

The Insert.values() method now supports a list of dictionaries,
which will render a multi-VALUES statement such as
VALUES(<row1>),(<row2>),.... This is only relevant to backends which
support this syntax, including Postgresql, SQLite, and MySQL. It is
not the same thing as the usual executemany() style of INSERT which
remains unchanged:

users.insert().values([
    {"name": "some name"},
    {"name": "some other name"},
    {"name": "yet another name"},
])

SQL expressions can now be associated with types. Historically,
TypeEngine has always allowed Python-side functions which
receive both bound parameters as well as result row values, passing
them through a Python side conversion function on the way to/back from
the database. The new feature allows similar
functionality, except on the database side:
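A sketch of the idea: an illustrative String subclass that applies the SQL lower() function on both the bind side and the result-column side:

```python
from sqlalchemy import Column, MetaData, String, Table, func

class LowerString(String):
    def bind_expression(self, bindvalue):
        # wrap bound parameters in lower() on the way in
        return func.lower(bindvalue)

    def column_expression(self, col):
        # wrap the column in lower() when SELECTed
        return func.lower(col)

test_table = Table('test_table', MetaData(),
                   Column('data', LowerString))

stmt = test_table.select().where(test_table.c.data == 'HI')
print(stmt)
# SELECT lower(test_table.data) AS data
# FROM test_table
# WHERE test_table.data = lower(:data_1)
```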

select() now has a method Select.correlate_except()
which specifies “correlate on all FROM clauses except those
specified”. It can be used for mapping scenarios where
a related subquery should correlate normally, except
against a particular target selectable:

Support for Postgresql’s HSTORE type is now available as
postgresql.HSTORE. This type makes great usage
of the new operator system to provide a full range of operators
for HSTORE types, including index access, concatenation,
and containment methods such as
has_key(),
has_any(), and
matrix():
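A short sketch of a table with an HSTORE column and a couple of the operators, rendered against the default dialect:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import HSTORE

data = Table('data_table', MetaData(),
             Column('id', Integer, primary_key=True),
             Column('hstore_data', HSTORE))

# index access renders Postgresql's -> operator
print(data.c.hstore_data['some_key'])

# containment renders the ? operator
print(data.c.hstore_data.has_key('some_key'))
```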

The postgresql.ARRAY type will accept an optional
“dimension” argument, pinning it to a fixed number of
dimensions and greatly improving efficiency when retrieving
results:

# old way, still works since PG supports N-dimensions per row:
Column("my_array", postgresql.ARRAY(Integer))

# new way, will render ARRAY with correct number of [] in DDL,
# will process binds and results more efficiently as we don't need
# to guess how many levels deep to go
Column("my_array", postgresql.ARRAY(Integer, dimensions=2))

The type also introduces new operators, using the new type-specific
operator framework. New operations include indexed access:
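For example (rendered here against the Postgresql dialect; table and column names illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql

mytable = Table('mytable', MetaData(),
                Column('arrayval',
                       postgresql.ARRAY(Integer, dimensions=2)))

# indexed access; each [] subscript reduces the array by
# one dimension (Postgresql arrays are one-based)
expr = mytable.c.arrayval[2][5]
print(expr.compile(dialect=postgresql.dialect()))
```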

SQLite has no built-in DATE, TIME, or DATETIME types, and
instead provides some support for storage of date and time
values either as strings or integers. The date and time
types for SQLite are enhanced in 0.8 to be much more
configurable as to the specific format, including that the
“microseconds” portion is optional, as well as pretty much
everything else.
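For instance (a sketch; the storage format and regexp shown are illustrative):

```python
from sqlalchemy.dialects import sqlite

# store timestamps without the microseconds portion
dt_plain = sqlite.DATETIME(truncate_microseconds=True)

# or take full control of the storage format, along with the
# regexp used to parse values back out of the database
dt_compact = sqlite.DATETIME(
    storage_format="%(year)04d%(month)02d%(day)02d"
                   "%(hour)02d%(minute)02d%(second)02d",
    regexp=r"(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})",
)
```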

“COLLATE” supported across all dialects; in particular MySQL, Postgresql, SQLite

The “collate” keyword, long accepted by the MySQL dialect, is now established
on all String types and will render on any backend, including
when features such as MetaData.create_all() and cast() are used:
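For example (the collation name is illustrative):

```python
from sqlalchemy import Column, MetaData, String, Table, cast
from sqlalchemy.schema import CreateTable

sometable = Table('sometable', MetaData(),
                  Column('somechar', String(20, collation='utf8')))

# the collation renders in the DDL emitted by
# MetaData.create_all() / CreateTable
print(CreateTable(sometable))

# and within a cast()
print(cast(sometable.c.somechar, String(20, collation='utf8')))
```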

The consideration of a “pending” object as an “orphan” has been made more aggressive

This is a late add to the 0.8 series, however it is hoped that the new behavior
is generally more consistent and intuitive in a wider variety of
situations. The ORM has since at least version 0.4 included behavior
such that an object that’s “pending”, meaning that it’s
associated with a Session but hasn’t been inserted into the database
yet, is automatically expunged from the Session when it becomes an “orphan”,
which means it has been de-associated with a parent object that refers to it
with delete-orphan cascade on the configured relationship(). This
behavior is intended to approximately mirror the behavior of a persistent
(that is, already inserted) object, where the ORM will emit a DELETE for such
objects that become orphans based on the interception of detachment events.

The behavioral change comes into play for objects that
are referred to by multiple kinds of parents that each specify delete-orphan; the
typical example is an association object that bridges two other kinds of objects
in a many-to-many pattern. Previously, the behavior was such that the
pending object would be expunged only when de-associated with all of its parents.
With the behavioral change, the pending object
is expunged as soon as it is de-associated from any of the parents that it was
previously associated with. This behavior is intended to more closely
match that of persistent objects, which are deleted as soon
as they are de-associated from any parent.

The rationale for the older behavior dates back
at least to version 0.4, and was basically a defensive decision to try to alleviate
confusion when an object was still being constructed for INSERT. But the reality
is that the object is re-associated with the Session as soon as it is
attached to any new parent in any case.

It’s still possible to flush an object
that is not associated with all of its required parents, if the object was either
not associated with those parents in the first place, or if it was expunged, but then
re-associated with a Session via a subsequent attachment event but still
not fully associated. In this situation, it is expected that the database
would emit an integrity error, as there are likely NOT NULL foreign key columns
that are unpopulated. The ORM makes the decision to let these INSERT attempts
occur, based on the judgment that an object that is only partially associated with
its required parents but has been actively associated with some of them,
is more often than not a user error, rather than an intentional
omission which should be silently skipped - silently skipping the INSERT here would
make user errors of this nature very hard to debug.

The old behavior, for applications that might have been relying upon it, can be re-enabled for
any Mapper by specifying the flag legacy_is_orphan as a mapper
option.

The new behavior allows the following test case to work:

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, backref
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

class UserKeyword(Base):
    __tablename__ = 'user_keyword'
    user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
    keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)

    user = relationship(User,
                backref=backref("user_keywords",
                                cascade="all, delete-orphan"))

    keyword = relationship("Keyword",
                backref=backref("user_keywords",
                                cascade="all, delete-orphan"))

    # uncomment this to enable the old behavior
    # __mapper_args__ = {"legacy_is_orphan": True}

class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    keyword = Column('keyword', String(64))

from sqlalchemy import create_engine
from sqlalchemy.orm import Session

# note we're using Postgresql to ensure that referential integrity
# is enforced, for demonstration purposes.
e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)

Base.metadata.drop_all(e)
Base.metadata.create_all(e)

session = Session(e)

u1 = User(name="u1")
k1 = Keyword(keyword="k1")

session.add_all([u1, k1])

uk1 = UserKeyword(keyword=k1, user=u1)

# previously, if session.flush() were called here,
# this operation would succeed, but if session.flush()
# were not called here, the operation fails with an
# integrity error.
# session.flush()

del u1.user_keywords[0]

session.commit()

Some use cases require that it work this way. However,
other use cases require that the item is not yet part of
the session, such as when a query, intended to load some
state required for an instance, emits autoflush first and
would otherwise prematurely flush the target object. Those
use cases should use the new “before_attach” event:
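A sketch of such a listener; Widget, widget_name and some_necessary_attribute are hypothetical names for illustration:

```python
from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, "before_attach")
def before_attach(session, instance):
    # load state the instance will need before it is attached;
    # an autoflush emitted by this query can no longer
    # prematurely flush the target instance.
    # Widget / widget_name / some_necessary_attribute are
    # hypothetical names used for illustration only.
    instance.some_necessary_attribute = session.query(Widget).\
            filter_by(name=instance.widget_name).\
            first()
```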

To allow a wider variety of correlation scenarios, the behavior of
Select.correlate() and Query.correlate() has changed slightly
such that the SELECT statement will omit the “correlated” target from the
FROM clause only if the statement is actually used in that context. Additionally,
it’s no longer possible for a SELECT statement that’s placed as a FROM
in an enclosing SELECT statement to “correlate” (i.e. omit) a FROM clause.

This change only makes things better as far as rendering SQL, in that it’s no
longer possible to render illegal SQL where there are insufficient FROM
objects relative to what’s being selected:

This change is not expected to impact any existing applications, as
the correlation behavior remains identical for properly constructed
expressions. Only an application that relies, most likely within a
testing scenario, on the invalid string output of a correlated
SELECT used in a non-correlating context would see any change.

The methods MetaData.create_all() and MetaData.drop_all()
will now accept a list of Table objects that is empty,
and will not emit any CREATE or DROP statements. Previously,
an empty list was interpreted the same as passing None
for a collection, and CREATE/DROP would be emitted for all
items unconditionally.

This is a bug fix but some applications may have been relying upon
the previous behavior.

The InstrumentationEvents series of event targets have
documented that the events will only be fired off according to
the actual class passed as a target. Through 0.7, this wasn’t the
case, and any event listener applied to InstrumentationEvents
would be invoked for all classes mapped. In 0.8, additional
logic has been added so that the events will only invoke for those
classes sent in. The propagate flag here is set to True
by default as class instrumentation events are typically used to
intercept classes that aren’t yet created.

SQL Server doesn’t allow an equality comparison to a scalar
SELECT, that is, “x = (SELECT something)”. The MSSQL dialect
would convert this to an IN. The same thing would happen
however upon a comparison like “(SELECT something) = x”, and
overall this level of guessing is outside of SQLAlchemy’s
usual scope so the behavior is removed.

The Session.is_modified() method accepts an argument
passive which basically should not be necessary; the
argument in all cases should be the value True. When
left at its default of False it would have the effect of
hitting the database, and often triggering autoflush which
would itself change the results. In 0.8 the passive
argument will have no effect, and unloaded attributes will
never be checked for history since by definition there can
be no pending state change on an unloaded attribute.

Users of the expression system know that Select.apply_labels()
prepends the table name to each column name, affecting the
names that are available from Select.c:

s = select([table1]).apply_labels()
s.c.table1_col1
s.c.table1_col2

Before 0.8, if the Column had a different Column.key, this
key would be ignored, inconsistently versus when
Select.apply_labels() was not used:

# before 0.8
table1 = Table('t1', metadata,
    Column('col1', Integer, key='column_one')
)
s = select([table1])
s.c.column_one  # would be accessible like this
s.c.col1  # would raise AttributeError

s = select([table1]).apply_labels()
s.c.table1_column_one  # would raise AttributeError
s.c.table1_col1  # would be accessible like this

All other behavior regarding “name” and “key” are the same,
including that the rendered SQL will still use the form
<tablename>_<colname> - the emphasis here was on
preventing the Column.key contents from being rendered into the
SELECT statement so that there are no issues with
special/non-ascii characters used in the Column.key.

A relationship() that is many-to-one or many-to-many and
specifies “cascade=’all, delete-orphan’”, which is an
awkward but nonetheless supported use case (with
restrictions) will now raise an error if the relationship
does not specify the single_parent=True option.
Previously it would only emit a warning, but a failure would
follow almost immediately within the attribute system in any
case.

0.7 added a new event called column_reflect, provided so
that the reflection of columns could be augmented as each
one is reflected. We got this event slightly wrong in
that the event gave no way to get at the current
Inspector and Connection being used for the
reflection, in the case that additional information from the
database is needed. As this is a new event not widely used
yet, we’ll be adding the inspector argument into it
directly:
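The revised event, sketched with a listener that assigns a different attribute key to each reflected column (the key scheme is illustrative):

```python
from sqlalchemy import MetaData, Table, create_engine, event, text

@event.listens_for(Table, "column_reflect")
def listen_for_reflect(inspector, table, column_info):
    # the Inspector in use is now passed as the first argument;
    # here we simply derive a new attribute key for each column
    column_info['key'] = 'attr_%s' % column_info['name'].lower()

e = create_engine("sqlite://")
with e.begin() as conn:
    conn.execute(text("CREATE TABLE mytable (name VARCHAR(30))"))

t = Table('mytable', MetaData(), autoload_with=e)
print(t.c.attr_name)
```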

The MySQL dialect does two calls, one very expensive, to
load all possible collations from the database as well as
information on casing, the first time an Engine
connects. Neither of these collections are used for any
SQLAlchemy functions, so these calls will be changed to no
longer be emitted automatically. Applications that might
have relied on these collections being present on
engine.dialect will need to call upon
_detect_collations() and _detect_casing() directly.

A very old behavior - the column names in RowProxy were
always compared case-insensitively:

>>> row = result.fetchone()
>>> row['foo'] == row['FOO'] == row['Foo']
True

This was for the benefit of a few dialects which in the
early days needed this, like Oracle and Firebird, but in
modern usage we have more accurate ways of dealing with the
case-insensitive behavior of these two platforms.

Going forward, this behavior will be available only
optionally, by passing the flag case_sensitive=False
to create_engine(); otherwise, column names
requested from the row must match with respect to casing.

InstrumentationManager and alternate class instrumentation is now an extension

The sqlalchemy.orm.interfaces.InstrumentationManager
class is moved to
sqlalchemy.ext.instrumentation.InstrumentationManager.
The “alternate instrumentation” system was built for the
benefit of a very small number of installations that needed
to work with existing or unusual class instrumentation
systems, and generally is very seldom used. The complexity
of this system has been moved out into an extension module,
and remains unused until imported, typically when a third
party library imports InstrumentationManager, at which
point it is injected back into sqlalchemy.orm by
replacing the default InstrumentationFactory with
ExtendedInstrumentationRegistry.

SQLSoup is a handy package that presents an alternative
interface on top of the SQLAlchemy ORM. SQLSoup is now
moved into its own project and documented/released
separately; see https://bitbucket.org/zzzeek/sqlsoup.

SQLSoup is a very simple tool that could also benefit from
contributors who are interested in its style of usage.

The older “mutable” system within the SQLAlchemy ORM has
been removed. This refers to the MutableType interface
which was applied to types such as PickleType and
conditionally to TypeDecorator, and since very early
SQLAlchemy versions has provided a way for the ORM to detect
changes in so-called “mutable” data structures such as JSON
structures and pickled objects. However, the
implementation was never reasonable and forced a very
inefficient mode of usage on the unit-of-work which caused
an expensive scan of all objects to take place during flush.
In 0.7, the sqlalchemy.ext.mutable extension was
introduced so that user-defined datatypes can appropriately
send events to the unit of work as changes occur.

Today, usage of MutableType is expected to be low, as
warnings have been in place for some years now regarding its
inefficiency.

We had left in an alias sqlalchemy.exceptions to attempt
to make it slightly easier for some very old libraries that
hadn’t yet been upgraded to use sqlalchemy.exc. Some
users were still being confused by it, however, so in 0.8 we're
taking it out entirely to eliminate that confusion.