The database abstraction layer

Dependencies

web2py comes with a Database Abstraction Layer (DAL), an API that maps Python objects into database objects such as queries, tables, and records. The DAL dynamically generates the SQL in real time using the specified dialect for the database back end, so that you do not have to write SQL code or learn different SQL dialects (the term SQL is used generically), and the application will be portable among different types of databases. At the time of this writing, the supported databases are SQLite (which comes with Python and thus web2py), PostgreSQL, MySQL, Oracle, MSSQL, FireBird, DB2, Informix, Ingres, MongoDB, and the Google App Engine (SQL and NoSQL). Experimentally we support more databases. Please check on the web2py web site and mailing list for more recent adapters. Google NoSQL is treated as a particular case in Chapter 13.

The Windows binary distribution works out of the box with SQLite and MySQL. The Mac binary distribution works out of the box with SQLite. To use any other database back-end, run from the source distribution and install the appropriate driver for the required back end.

Once the proper driver is installed, start web2py from source, and it will find the driver.

Connection strings

A connection with the database is established by creating an instance of the DAL object:

>>> db = DAL('sqlite://storage.db', pool_size=0)

db is not a keyword; it is a local variable that stores the DAL connection object. You are free to give it a different name. The DAL constructor requires a single argument, the connection string. The connection string is the only web2py code that depends on a specific back-end database. Here are examples of connection strings for specific types of supported back-end databases (in all cases, we assume the database is running on localhost on its default port and is named "test"):

SQLite        sqlite://storage.db
MySQL         mysql://username:password@localhost/test
PostgreSQL    postgres://username:password@localhost/test
MSSQL         mssql://username:password@localhost/test
FireBird      firebird://username:password@localhost/test
Oracle        oracle://username/password@test
DB2           db2://username:password@test
Ingres        ingres://username:password@localhost/test
Sybase        sybase://username:password@localhost/test
Informix      informix://username:password@test
Teradata      teradata://DSN=dsn;UID=user;PWD=pass;DATABASE=name
Cubrid        cubrid://username:password@localhost/test
SAPDB         sapdb://username:password@localhost/test
IMAP          imap://user:password@server:port
MongoDB       mongodb://username:password@localhost/test
Google/SQL    google:sql
Google/NoSQL  google:datastore

Notice that in SQLite the database consists of a single file. If it does not exist, it is created. This file is locked every time it is accessed. In the case of MySQL, PostgreSQL, MSSQL, FireBird, Oracle, DB2, Ingres and Informix the database "test" must be created outside web2py. Once the connection is established, web2py will create, alter, and drop tables appropriately.

It is also possible to set the connection string to None. In this case DAL will not connect to any back-end database, but the API can still be accessed for testing. Examples of this will be discussed in Chapter 7.

Sometimes you may need to generate SQL as if you had a connection, but without actually connecting to the database. This can be done with:

db = DAL('...', do_connect=False)

In this case you will be able to call _select, _insert, _update, and _delete to generate SQL, but not call select, insert, update, and delete. In most cases you can use do_connect=False even without having the required database drivers installed.

Notice that by default web2py uses the utf8 character encoding for databases. If you work with existing databases that behave differently, you have to change it with the optional parameter db_codec, like

db = DAL('...', db_codec='latin1')

otherwise you'll get tickets with UnicodeDecodeError errors.

Connection pooling

The second argument of the DAL constructor is the pool_size; it defaults to zero.

As it is rather slow to establish a new database connection for each request, web2py implements a mechanism for connection pooling. Once a connection is established, the page has been served, and the transaction completed, the connection is not closed but goes into a pool. When the next HTTP request arrives, web2py tries to obtain a connection from the pool and use it for the new transaction. If there are no available connections in the pool, a new connection is established.

The pool_size parameter is ignored by SQLite and Google App Engine.

Connections in the pools are shared sequentially among threads, in the sense that they may be used by two different but not simultaneous threads. There is only one pool for each web2py process.

When web2py starts, the pool is always empty. The pool grows up to the minimum of the value of pool_size and the maximum number of concurrent requests. This means that if pool_size=10 but our server never receives more than 5 concurrent requests, then the actual pool size will only grow to 5. If pool_size=0 then connection pooling is not used.

Connection pooling is ignored for SQLite, since it would not yield any benefit.

Connection failures

If web2py fails to connect to the database, it waits 1 second and tries again, up to 5 times, before declaring a failure. In the case of connection pooling it is possible that a pooled connection that stays open but unused for some time is closed by the database end. Thanks to the retry feature, web2py tries to re-establish these dropped connections.

When using connection pooling, a connection is used, put back in the pool, and then recycled. It is possible that while the connection is idle in the pool it is closed by the database server, because of a malfunction or a timeout. When this happens web2py detects it and re-establishes the connection.

Replicated databases

The first argument of DAL(...) can be a list of URIs. In this case web2py tries to connect to each of them. The main purpose of this is to deal with multiple database servers and distribute the workload among them. Here is a typical use case:

db = DAL(['mysql://...1', 'mysql://...2', 'mysql://...3'])

In this case the DAL tries to connect to the first and, on failure, it will try the second and the third. This can also be used to distribute load in a database master-slave configuration. We will talk more about this in Chapter 13 in the context of scalability.

Reserved keywords

There is also another argument that can be passed to the DAL constructor to check table names and column names against reserved SQL keywords in target back-end databases.

This argument is check_reserved and it defaults to None.

This is a list of strings that contain the database back-end adapter names.

The adapter name is the same as used in the DAL connection string. So if you want to check against PostgreSQL and MSSQL, then your DAL call would look as follows:

db = DAL('sqlite://storage.db', check_reserved=['postgres', 'mssql'])

The DAL will scan the keywords in the same order as the list.

There are two extra options "all" and "common". If you specify all, it will check against all known SQL keywords. If you specify common, it will only check against common SQL keywords such as SELECT, INSERT, UPDATE, etc.

For supported back-ends you may also specify if you would like to check against the non-reserved SQL keywords as well. In this case you would append _nonreserved to the name. For example:

check_reserved=['postgres', 'postgres_nonreserved']

The following database backends support reserved words checking.

PostgreSQL   postgres(_nonreserved)
MySQL        mysql
FireBird     firebird(_nonreserved)
MSSQL        mssql
Oracle       oracle

DAL, Table, Field

The best way to understand the DAL API is to try each function yourself. This can be done interactively via the web2py shell, although ultimately, DAL code goes in the models and controllers.

Start by creating a connection. For the sake of example, you can use SQLite. Nothing in this discussion changes when you change the back-end engine.

>>> db = DAL('sqlite://storage.db')

The database is now connected and the connection is stored in the global variable db.

At any time you can retrieve the connection string.

>>> print db._uri
sqlite://storage.db

and the database name

>>> print db._dbname
sqlite

The connection string is called a _uri because it is an instance of a Uniform Resource Identifier.

The DAL allows multiple connections with the same database or with different databases, even databases of different types. For now, we will assume the presence of a single database since this is the most common situation.

The most important method of a DAL is define_table:

>>> db.define_table('person', Field('name'))

It defines, stores and returns a Table object called "person" containing a field (column) "name". This object can also be accessed via db.person, so you do not need to catch the return value.

Do not declare a field called "id", because one is created by web2py anyway. Every table has a field called "id" by default. It is an auto-increment integer field (starting at 1) used for cross-reference and for making every record unique, so "id" is a primary key. (Note: that ids start at 1 is back-end-specific. For example, this does not apply to the Google App Engine NoSQL.)

Optionally you can define a field of type='id' and web2py will use this field as the auto-increment id field. This is not recommended except when accessing legacy database tables. With some limitations, you can also use different primary keys, and this is discussed in the section on "Legacy databases and keyed tables".

Tables can be defined only once but you can force web2py to redefine an existing table:

The redefinition may trigger a migration if field content is different.

Because models in web2py are usually executed before controllers, it is possible that some tables are defined even when not needed. It is therefore possible to speed up the code by making table definitions lazy. This is done by setting the DAL(..., lazy_tables=True) attribute. Tables will actually be created only when accessed.

Record representation

It is optional but recommended to specify a format representation for records:

This sets the db.othertable.person.represent attribute for all fields referencing this table. It means that SQLTABLE will not show references by id, but will use the format representation instead.

Not all of these attributes are relevant for every field. "length" is relevant only for fields of type "string". "uploadfield" and "authorize" are relevant only for fields of type "upload". "ondelete" is relevant only for fields of type "reference" and "upload".

length sets the maximum length of a "string", "password" or "upload" field. If length is not specified a default value is used but the default value is not guaranteed to be backward compatible. To avoid unwanted migrations on upgrades, we recommend that you always specify the length for string, password and upload fields.

default sets the default value for the field. The default value is used when performing an insert if a value is not explicitly specified. It is also used to pre-populate forms built from the table using SQLFORM. Note, rather than being a fixed value, the default can instead be a function (including a lambda function) that returns a value of the appropriate type for the field. In that case, the function is called once for each record inserted, even when multiple records are inserted in a single transaction.

required tells the DAL that no insert should be allowed on this table if a value for this field is not explicitly specified.

requires is a validator or a list of validators. This is not used by the DAL, but it is used by SQLFORM. The default validators for the given types are shown in the following table:

field type              default field validators
string                  IS_LENGTH(length) (default length is 512)
text                    IS_LENGTH(65536)
blob                    None
boolean                 None
integer                 IS_INT_IN_RANGE(-1e100, 1e100)
double                  IS_FLOAT_IN_RANGE(-1e100, 1e100)
decimal(n,m)            IS_DECIMAL_IN_RANGE(-1e100, 1e100)
date                    IS_DATE()
time                    IS_TIME()
datetime                IS_DATETIME()
password                None
upload                  None
reference <table>       IS_IN_DB(db, table.field, format)
list:string             None
list:integer            None
list:reference <table>  IS_IN_DB(db, table.field, format, multiple=True)
json                    IS_JSON()
bigint                  None
big-id                  None
big-reference           None

The decimal type requires and returns values as Decimal objects, as defined in the Python decimal module. SQLite does not handle the decimal type, so internally it is treated as a double. (n,m) are the total number of digits and the number of digits after the decimal point, respectively.

The big-id and big-reference types are only supported by some database engines and are experimental. They are not normally used as field types except for legacy tables; however, the DAL constructor has a bigint_id argument that, when set to True, makes the id fields and reference fields big-id and big-reference respectively.

The list: fields are special because they are designed to take advantage of certain denormalization features on NoSQL (in the case of Google App Engine NoSQL, the field types ListProperty and StringListProperty) and back-port them to all the other supported relational databases. On relational databases, lists are stored as a text field. The items are separated by a |, and each | in a string item is escaped as ||. They are discussed in their own section.

The json field type is mostly self-explanatory. It can store any JSON-serializable object. It was designed to work specifically with MongoDB and was backported to the other database adapters for portability.

Notice that requires=... is enforced at the level of forms, required=True is enforced at the level of the DAL (insert), while notnull, unique and ondelete are enforced at the level of the database. While they sometimes may seem redundant, it is important to maintain the distinction when programming with the DAL.

ondelete

ondelete translates into the "ON DELETE" SQL statement. By default it is set to "CASCADE". This tells the database that when it deletes a record, it should also delete all records that refer to it. To disable this feature, set ondelete to "NO ACTION" or "SET NULL".

notnull=True translates into the "NOT NULL" SQL statement. It prevents the database from inserting null values for the field.

unique=True translates into the "UNIQUE" SQL statement and it makes sure that values of this field are unique within the table. It is enforced at the database level.

uploadfield applies only to fields of type "upload". A field of type "upload" stores the name of a file saved somewhere else, by default on the filesystem under the application "uploads/" folder. If uploadfield is set, then the file is stored in a blob field within the same table and the value of uploadfield is the name of the blob field. This will be discussed in more detail later in the context of SQLFORM.

uploadfolder defaults to the application's "uploads/" folder. If set to a different path, files will be uploaded to that folder instead. For example, uploadfolder=os.path.join(request.folder, 'static/temp') will upload files to the web2py/applications/myapp/static/temp folder.

uploadseparate if set to True will upload files under different subfolders of the uploadfolder folder. This is optimized to avoid too many files under the same folder/subfolder. ATTENTION: You cannot change the value of uploadseparate from True to False without breaking the system. web2py either uses the separate subfolders or it does not. Changing the behavior after files have been uploaded will prevent web2py from being able to retrieve those files. If this happens it is possible to move files and fix the problem but this is not described here.

uploadfs allows you to specify a different file system to which files are uploaded, including Amazon S3 or remote FTP storage. This option requires PyFileSystem to be installed, and uploadfs must point to a PyFileSystem object.

widget must be one of the available widget objects, including custom widgets, for example: SQLFORM.widgets.string.widget. A list of available widgets will be discussed later. Each field type has a default widget.

label is a string (or something that can be serialized to a string) that contains the label to be used for this field in auto-generated forms.

comment is a string (or something that can be serialized to a string) that contains a comment associated with this field, and will be displayed to the right of the input field in the autogenerated forms.

writable if a field is writable, it can be edited in autogenerated create and update forms.

readable if a field is readable, it will be visible in read-only forms. If a field is neither readable nor writable, it will not be displayed in create and update forms.

update contains the default value for this field when the record is updated.

compute is an optional function. When a record is inserted or updated, the compute function is executed and the field is populated with the function result. The record is passed to the compute function as a dict, and the dict does not include the current value of that, or of any other, compute field.

authorize can be used to require access control on the corresponding field, for "upload" fields only. It will be discussed more in detail in the context of Authentication and Authorization.

autodelete determines if the corresponding uploaded file should be deleted when the record referencing the file is deleted. For "upload" fields only.

represent can be None or can point to a function that takes a field value and returns an alternate representation for the field value. Examples:

"blob" fields are also special. By default, binary data is encoded in base64 before being stored into the actual database field, and it is decoded when extracted. This has the negative effect of using 25% more storage space than necessary in blob fields, but has two advantages. On average it reduces the amount of data communicated between web2py and the database server, and it makes the communication independent of back-end-specific escaping conventions.

Most attributes of fields and tables can be modified after they are defined:

A field also has methods. Some of them are used to build queries and we will see them later. A special method of the field object is validate and it calls the validators for the field.

print db.person.name.validate('John')

which returns a tuple (value, error). error is None if the input passes validation.

Migrations

define_table checks whether or not the corresponding table exists. If it does not, it generates the SQL to create it and executes the SQL. If the table does exist but differs from the one being defined, it generates the SQL to alter the table and executes it. If a field has changed type but not name, it will try to convert the data (If you do not want this, you need to redefine the table twice, the first time, letting web2py drop the field by removing it, and the second time adding the newly defined field so that web2py can create it.). If the table exists and matches the current definition, it will leave it alone. In all cases it will create the db.person object that represents the table.

We refer to this behavior as a "migration". web2py logs all migrations and migration attempts in the file "databases/sql.log".

The first argument of define_table is always the table name. The other unnamed arguments are the fields (Field). The function also takes an optional last argument called "migrate" which must be referred to explicitly by name as in:

>>> db.define_table('person', Field('name'), migrate='person.table')

The value of migrate is the filename (in the "databases" folder of the application) where web2py stores internal migration information for this table. These files are very important and should never be removed while the corresponding tables exist. In cases where a table has been dropped and the corresponding file still exists, it can be removed manually. By default, migrate is set to True. This causes web2py to generate the filename from a hash of the connection string. If migrate is set to False, the migration is not performed, and web2py assumes that the table exists in the datastore and that it contains (at least) the fields listed in define_table. The best practice is to give an explicit filename via migrate.

There may not be two tables in the same application with the same migrate filename.

The DAL class also takes a "migrate" argument, which determines the default value of migrate for calls to define_table. For example,

>>> db = DAL('sqlite://storage.db', migrate=False)

will set the default value of migrate to False whenever db.define_table is called without a migrate argument.

Notice that web2py only migrates new columns, removed columns, and changes in column type (except on SQLite). web2py does not migrate changes in attributes such as the values of default, unique, notnull, and ondelete.

Migrations can be disabled for all tables at the moment of connection:

db = DAL(...,migrate_enabled=False)

This is the recommended behavior when two apps share the same database. Only one of the two apps should perform migrations; the other should disable them.

Fixing broken migrations

There are two common problems with migrations and there are ways to recover from them.

One problem is specific with SQLite. SQLite does not enforce column types and cannot drop columns. This means that if you have a column of type string and you remove it, it is not really removed. If you add the column again with a different type (for example datetime) you end up with a datetime column that contains strings (junk for practical purposes). web2py does not complain about this because it does not know what is in the database, until it tries to retrieve records and fails.

If web2py returns an error in the gluon.sql.parse function when selecting records, this is the problem: corrupted data in a column because of the above issue.

The solution consists of updating all records of the table, setting the values of the column in question to None.

The other problem is more generic, but typical of MySQL. MySQL does not allow more than one ALTER TABLE in a transaction. This means that web2py must break complex transactions into smaller ones (one ALTER TABLE at a time) and commit them one piece at a time. It is therefore possible that part of a complex transaction gets committed while another part fails, leaving web2py in a corrupted state. Why would part of a transaction fail? Because, for example, it involves altering a table and converting a string column into a datetime column; web2py tries to convert the data, but the data cannot be converted. What happens to web2py? It gets confused about the table structure actually stored in the database.

The solution consists of disabling migrations for all tables and enabling fake migrations:

db.define_table(...., migrate=True, fake_migrate=True)

This will rebuild web2py metadata about the table according to the table definition. Try multiple table definitions to see which one works (the one before the failed migration and the one after the failed migration). Once successful remove the fake_migrate=True attribute.

Before attempting to fix migration problems it is prudent to make a copy of "applications/yourapp/databases/*.table" files.

Migration problems can also be fixed for all tables at once:

db = DAL(..., fake_migrate_all=True)

Although if this fails, it will not help in narrowing down the problem.

insert

Given a table, you can insert records

>>> db.person.insert(name="Alex")
1
>>> db.person.insert(name="Bob")
2

Insert returns the unique "id" value of each record inserted.

You can truncate the table, i.e., delete all records and reset the counter of the id.

>>> db.person.truncate()

Now, if you insert a record again, the counter starts again at 1 (this is back-end specific and does not apply to Google NoSQL):

>>> db.person.insert(name="Alex")
1

Notice you can pass parameters to truncate; for example, you can tell SQLite to restart the id counter.

web2py also provides a bulk_insert method. It takes a list of dictionaries of fields to be inserted and performs multiple inserts at once. It returns the IDs of the inserted records. On the supported relational databases there is no advantage in using this function as opposed to looping and performing individual inserts, but on Google App Engine NoSQL there is a major speed advantage.

commit and rollback

No create, drop, insert, truncate, delete, or update operation is actually committed until you issue the commit command

>>> db.commit()

To check it let's insert a new record:

>>> db.person.insert(name="Bob")
2

and roll back, i.e., ignore all operations since the last commit:

>>> db.rollback()

If you now insert again, the counter will again be set to 2, since the previous insert was rolled back.

>>> db.person.insert(name="Bob")
2

Code in models, views and controllers is enclosed in web2py code that looks like this:
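In sketch form, the wrapper looks like this (pseudocode; the actual logic lives in web2py's request handling):

```
try:
    execute models, controller function and view
except:
    rollback all connections
    log the traceback
else:
    commit all connections
```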

There is no need to ever call commit or rollback explicitly in web2py unless one needs more granular control.

Raw SQL

Timing queries

All queries are automatically timed by web2py. The variable db._timings is a list of tuples. Each tuple contains the raw SQL query as passed to the database driver and the time it took to execute in seconds. This variable can be displayed in views using the toolbar:

{{=response.toolbar()}}

executesql

executesql lets you issue SQL statements directly to the database. In this case, the return values are not parsed or transformed by the DAL, and the format depends on the specific database driver. This usage with selects is normally not needed, but it is more common with indexes. executesql takes four optional arguments: placeholders, as_dict, fields and colnames. placeholders is an optional sequence of values to be substituted in, or, if supported by the DB driver, a dictionary with keys matching named placeholders in your SQL.

If as_dict is set to True, the results cursor returned by the DB driver will be converted to a sequence of dictionaries keyed with the db field names. Results returned with as_dict=True are the same as those returned when applying .as_list() to a normal select:

[{field1: value1, field2: value2}, {field1: value1b, field2: value2b}]

The fields argument is a list of DAL Field objects that match the fields returned from the DB. The Field objects should be part of one or more Table objects defined on the DAL object. The fields list can include one or more DAL Table objects in addition to or instead of including Field objects, or it can be just a single table (not in a list). In that case, the Field objects will be extracted from the table(s).

Instead of specifying the fields argument, the colnames argument can be specified as a list of field names in tablename.fieldname format. Again, these should represent tables and fields defined on the DAL object.

It is also possible to specify both fields and the associated colnames. In that case, fields can also include DAL Expression objects in addition to Field objects. For Field objects in "fields", the associated colnames must still be in tablename.fieldname format. For Expression objects in fields, the associated colnames can be any arbitrary labels.

Notice, the DAL Table objects referred to by fields or colnames can be dummy tables and do not have to represent any real tables in the database. Also, note that the fields and colnames must be in the same order as the fields in the results cursor returned from the DB.

Whether SQL was executed manually using executesql or was generated by the DAL, you can always find the SQL code in db._lastsql. This is useful for debugging purposes.

web2py never generates queries using the "*" operator. web2py is always explicit when selecting fields.

drop

Finally, you can drop tables and all data will be lost:

>>> db.person.drop()

Indexes

Currently the DAL API does not provide a command to create indexes on tables, but this can be done using the executesql command. This is because the existence of indexes can make migrations complex, and it is better to deal with them explicitly. Indexes may be needed for those fields that are used in recurrent queries.

>>> db = DAL('sqlite://storage.db')
>>> db.define_table('person', Field('name'))
>>> db.executesql('CREATE INDEX IF NOT EXISTS myidx ON person (name);')

Other database dialects have very similar syntaxes but may not support the optional "IF NOT EXISTS" directive.

Legacy databases and keyed tables

web2py can connect to legacy databases under some conditions.

The easiest way is when these conditions are met:

Each table must have a unique auto-increment integer field called "id"

Records must be referenced exclusively using the "id" field.

When accessing an existing table, i.e., a table not created by web2py in the current application, always set migrate=False.

If the legacy table has an auto-increment integer field but it is not called "id", web2py can still access it, but then the table definition must explicitly contain Field('....', 'id'), where ... is the name of the auto-increment integer field.

Finally, if the legacy table uses a primary key that is not an auto-increment id field, it is possible to use a "keyed table", defined with the primarykey argument. For example:

Note that currently this is only available for DB2, MS-SQL, Ingres and Informix, but others can be easily added.

At the time of writing, we cannot guarantee that the primarykey attribute works with every existing legacy table and every supported database backend. For simplicity, we recommend, if possible, creating a database view that has an auto-increment id field.

Distributed transaction

At the time of writing this feature is only supported by PostgreSQL, MySQL and Firebird, since they expose an API for two-phase commits.

Assuming you have two (or more) connections to distinct PostgreSQL databases, for example:

db_a = DAL('postgres://...')
db_b = DAL('postgres://...')

In your models or controllers, you can commit them concurrently with:

DAL.distributed_transaction_commit(db_a, db_b)

On failure, this function rolls back and raises an Exception.

In controllers, when one action returns, if you have two distinct connections and you do not call the above function, web2py commits them separately. This means there is a possibility that one of the commits succeeds and one fails. The distributed transaction prevents this from happening.

More on uploads

Consider the following model:

>>> db.define_table('myfile', Field('image', 'upload', default='path/'))

In the case of an 'upload' field, the default value can optionally be set to a path (an absolute path or a path relative to the current app folder) and the default image will be set to a copy of the file at the path. A new copy is made for each new record that does not specify an image.

Normally an insert is handled automatically via a SQLFORM or a crud form (which is a SQLFORM) but occasionally you already have the file on the filesystem and want to upload it programmatically. This can be done in this way:

It is also possible to insert a file in a simpler way and have the insert method call store automatically:

>>> stream = open(filename, 'rb')
>>> db.myfile.insert(image=stream)

In this case the filename is obtained from the stream object if available.

The store method of the upload field object takes a file stream and a filename. It uses the filename to determine the extension (type) of the file, creates a new temp name for the file (according to web2py upload mechanism) and loads the file content in this new temp file (under the uploads folder unless specified otherwise). It returns the new temp name, which is then stored in the image field of the db.myfile table.

Note, if the file is to be stored in an associated blob field rather than the file system, the store() method will not insert the file in the blob field (because store() is called before the insert), so the file must be explicitly inserted into the blob field:

You can store the table in a variable. For example, with variable person, you could do:

Table

>>> person = db.person

You can also store a field in a variable such as name. For example, you could also do:

Field

>>> name = person.name

You can even build a query (using operators like ==, !=, <, >, <=, >=, like, belongs) and store the query in a variable q such as in:

Query

>>> q = name == 'Alex'

When you call db with a query, you define a set of records. You can store it in a variable s and write:

Set

>>> s = db(q)

Notice that no database query has been performed so far. DAL + Query simply define a set of records in this db that match the query. web2py determines from the query which table (or tables) are involved and, in fact, there is no need to specify that.

select

Given a Set, s, you can fetch the records with the command select:

Rows

select

>>> rows = s.select()

Row

It returns an iterable object of class pydal.objects.Rows whose elements are Row objects. pydal.objects.Row objects act like dictionaries, but their elements can also be accessed as attributes, like gluon.storage.Storage. The former differ from the latter because their values are read-only.
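The dictionary-plus-attribute behavior can be sketched in a few lines of plain Python, as a simplified stand-in for gluon.storage.Storage (without the read-only restriction of Row):

```python
class Storage(dict):
    # a dict whose keys can also be read as attributes
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

row = Storage(id=1, name='Alex')
# both access styles return the same value
same = (row['name'] == row.name)
```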

The Rows object allows looping over the result of the select and printing the selected field values for each row:

>>> for row in rows:
        print row.id, row.name
1 Alex

You can do all the steps in one statement:

>>> for row in db(db.person.name == 'Alex').select():
        print row.name
Alex

ALL

The select command can take arguments. All unnamed arguments are interpreted as the names of the fields that you want to fetch. For example, you can be explicit about fetching field "id" and field "name":

Fetching a Row

Apparently similar to db.mytable[id], the above syntax is more flexible and safer. First of all it checks whether id is an int (or str(id) is an int) and returns None if not (it never raises an exception). It also allows you to specify multiple conditions that the record must meet. If they are not met, it also returns None.

Recursive selects

recursive selects

Consider the previous table person and a new table "thing" referencing a "person":

where ._id is a reference to the primary key of the table. Normally db.thing._id is the same as db.thing.id and we will assume that in most of this book.

_id

For each Row of things it is possible to fetch not just fields from the selected table (thing) but also from linked tables (recursively):

>>> for thing in things:
        print thing.name, thing.owner.name

Here thing.owner.name requires one database select for each thing in things, and it is therefore inefficient. We suggest using joins instead of recursive selects whenever possible; nevertheless, this is convenient and practical when accessing individual records.

You can also do it backwards, by selecting the things referenced by a person:

i.e. the Set of things referenced by the current person. This syntax breaks down if the referencing table has multiple references to the referenced table. In this case one needs to be more explicit and use a full Query.

Serializing Rows in views

Given the following action containing a query

SQLTABLE

def index():
    return dict(rows=db(query).select())

The result of a select can be displayed in a view with the following syntax:

{{extend 'layout.html'}}
<h1>Records</h1>
{{=rows}}

Which is equivalent to:

{{extend 'layout.html'}}
<h1>Records</h1>
{{=SQLTABLE(rows)}}

SQLTABLE converts the rows into an HTML table with a header containing the column names and one row per record. The rows are marked as alternating class "even" and class "odd". Under the hood, Rows is first converted into a SQLTABLE object (not to be confused with Table) and then serialized. The values extracted from the database are also formatted by the validators associated to the field and then escaped.

Yet it is possible and sometimes convenient to call SQLTABLE explicitly.

The SQLTABLE constructor takes the following optional arguments:

linkto the URL or an action to be used to link reference fields (defaults to None)

upload the URL or the download action to allow downloading of uploaded files (defaults to None)

headers a dictionary mapping field names to their labels to be used as headers (defaults to {}). It can also be an instruction; currently headers='fieldname:capitalize' is supported.

truncate the number of characters for truncating long values in the table (default is 16)

columns the list of fieldnames to be shown as columns (in tablename.fieldname format). Those not listed are not displayed (defaults to all).

**attributes generic helper attributes to be passed to the outermost TABLE object.

SQLTABLE is useful but there are times when one needs more. SQLFORM.grid is an extension of SQLTABLE that creates a table with search features and pagination, as well as ability to open detailed records, create, edit and delete records. SQLFORM.smartgrid is a further generalization that allows all of the above but also creates buttons to access referencing records.

Here is an example of usage of SQLFORM.grid:

def index():
    return dict(grid=SQLFORM.grid(query))

and the corresponding view:

{{extend 'layout.html'}}
{{=grid}}

SQLFORM.grid and SQLFORM.smartgrid should be preferred to SQLTABLE because they are more powerful, although, being higher level, they are also more constraining. They will be explained in more detail in chapter 8.

orderby, groupby, limitby, distinct, having

The select command takes five optional arguments: orderby, groupby, limitby, left and cache. Here we discuss the first three.

Notice that query1 filters records to be displayed, query2 filters records to be grouped.

distinct

With the argument distinct=True, you can specify that you only want to select distinct records. This has the same effect as grouping using all specified fields except that it does not require sorting. When using distinct it is important not to select ALL fields, and in particular not to select the "id" field, else all records will always be distinct.

Due to Python restrictions in overloading "and" and "or" operators, these cannot be used in forming queries. The binary operators "&" and "|" must be used instead. Note that these operators (unlike "and" and "or") have higher precedence than comparison operators, so the "extra" parentheses in the above examples are mandatory. Similarly, the unary operator "~" has higher precedence than comparison operators, so ~-negated comparisons must also be parenthesized.
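The precedence issue can be seen with plain Python integers, no DAL involved:

```python
a, b = 5, 3

# intended meaning: (a > 1) and (b < 3)
correct = (a > 1) & (b < 3)        # False: 3 < 3 fails

# without parentheses, '&' binds tighter than the comparisons, so
# Python parses this as the chained comparison a > (1 & b) < 3
surprising = a > 1 & b < 3         # True, not what was intended
```

The same grouping rules apply when the operands are DAL fields instead of integers, which is why the parentheses are mandatory.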

It is also possible to build queries using in-place logical operators:

count, isempty, delete, update

You can count records in a set:

count

isempty

>>> print db(db.person.id > 0).count()
3

Notice that count takes an optional distinct argument which defaults to False, and it works very much like the same argument for select. count also has a cache argument that works very much like the equivalent argument of the select method.

Sometimes you may need to check if a table is empty. A more efficient way than counting is using the isempty method:

>>> print db(db.person.id > 0).isempty()
False

or equivalently:

>>> print db(db.person).isempty()
False

You can delete records in a set:

delete

>>> db(db.person.id > 3).delete()

And you can update all records in a set by passing named arguments corresponding to the fields that need to be updated:

update

>>> db(db.person.id > 3).update(name='Ken')

Expressions

The value assigned in an update statement can be an expression. For example, consider this model:

The update_record method is available only if the table's id field is included in the select, and cacheable is not set to True.

Inserting and updating from a dictionary

A common issue consists of needing to insert or update records in a table where the name of the table, the field to be updated, and the value for the field are all stored in variables. For example: tablename, fieldname, and value.

The insert can be done using the following syntax:

db[tablename].insert(**{fieldname:value})


The update of record with given id can be done with:

_id

db(db[tablename]._id == id).update(**{fieldname:value})

Notice we used table._id instead of table.id. In this way the query works even for tables with a field of type "id" which has a name other than "id".

find, exclude, sort

find

exclude

sort

There are times when one needs to perform two selects and one contains a subset of a previous select. In this case it is pointless to access the database again. The find, exclude and sort methods allow you to manipulate a Rows object and generate another one without accessing the database. More specifically:

find returns a new set of Rows filtered by a condition and leaves the original unchanged.

exclude returns a new set of Rows filtered by a condition and removes them from the original Rows.

sort returns a new set of Rows sorted by a condition and leaves the original unchanged.

All these methods take a single argument, a function that acts on each individual row.
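Their semantics can be illustrated on plain lists of dictionaries (web2py's real methods operate on Rows and Row objects, but the contract is the same):

```python
rows = [{'name': 'Alex'}, {'name': 'Bob'}, {'name': 'Carl'}]

def find(rows, f):
    # new list of matching rows, original left unchanged
    return [r for r in rows if f(r)]

def exclude(rows, f):
    # matching rows are removed from the original and returned
    removed = [r for r in rows if f(r)]
    rows[:] = [r for r in rows if not f(r)]
    return removed

def sort(rows, f):
    # new sorted list, original left unchanged
    return sorted(rows, key=f)

alexes = find(rows, lambda r: r['name'].startswith('A'))
removed = exclude(rows, lambda r: r['name'] == 'Bob')
ordered = sort(rows, lambda r: r['name'])
```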

i.e. if there is a John, his birthplace will be updated; else a new record will be created.

validate_and_insert, validate_and_update

validate_and_insert

validate_and_update

The function

ret = db.mytable.validate_and_insert(field='value')

works very much like

id = db.mytable.insert(field='value')

except that it calls the validators for the fields before performing the insert and bails out if the validation does not pass. If validation does not pass, the errors can be found in ret.errors. If it passes, the id of the new record is in ret.id. Mind that normally validation is done by the form processing logic, so this function is rarely needed.

Similarly

ret = db(query).validate_and_update(field='value')

works very much the same as

num = db(query).update(field='value')

except that it calls the validators for the fields before performing the update. Notice that it only works if the query involves a single table. The number of updated records can be found in ret.updated, and errors will be in ret.errors.

smart_query (experimental)

There are times when you need to parse a query using natural language such as

The first argument must be a list of tables or fields that should be allowed in the search. It raises a RuntimeError if the search string is invalid. This functionality can be used to build RESTful interfaces (see chapter 10) and it is used internally by the SQLFORM.grid and SQLFORM.smartgrid.

In the smartquery search string, a field can be identified by fieldname alone or by tablename.fieldname. Strings may be delimited by double quotes if they contain spaces.

Computed fields

compute

DAL fields may have a compute attribute. This must be a function (or lambda) that takes a Row object and returns a value for the field. When a record is inserted or updated, if a value for the field is not provided, web2py tries to compute it from the other field values using the compute function. Here is an example:
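The mechanism can be sketched in plain Python; the field names (unit_price, quantity, total_price) are illustrative, not part of the API:

```python
# map of computed field name -> compute function over the record
computes = {'total_price': lambda r: r['unit_price'] * r['quantity']}

def insert(record):
    for field, f in computes.items():
        if field not in record:          # only if no value was provided
            record[field] = f(record)    # computed once and stored
    return record

row = insert({'unit_price': 3, 'quantity': 5})
# row['total_price'] is now 15
```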

Notice that the computed value is stored in the db and it is not computed on retrieval, as in the case of virtual fields, described later. Two typical applications of computed fields are:

in wiki applications, to store the processed input wiki text as HTML, to avoid re-processing on every request

for searching, to compute normalized values for a field, to be used for searching.

Virtual fields

virtual fields

Virtual fields are also computed fields (as in the previous subsection) but they differ from those because they are virtual in the sense that they are not stored in the db and they are computed each time records are extracted from the database. They can be used to simplify the user's code without using additional storage but they cannot be used for searching.

New style virtual fields

web2py provides a new and easier way to define virtual fields and lazy virtual fields. This section is marked experimental because the APIs may still change a little from what is described here.

Here we will consider the same example as in the previous subsection. In particular we consider the following model:

In this case row.discounted_total is not a value but a function. The function takes the same arguments as the function passed to the Method constructor except for row which is implicit (think of it as self for rows objects).

The lazy field in the example above allows one to compute the total price for each item:

>>> for row in db(db.item).select(): print row.discounted_total()

And it also allows one to pass an optional discount percentage (15%):

>>> for row in db(db.item).select(): print row.discounted_total(15)

Virtual and Method fields can also be defined in place when a table is defined:

Mind that virtual fields do not have the same attributes as the other fields (default, readable, requires, etc) and they do not appear in the list of db.table.fields and are not visualized by default in tables (TABLE) and grids (SQLFORM.grid, SQLFORM.smartgrid).

Old style virtual fields

In order to define one or more virtual fields, you can also define a container class, instantiate it and link it to a table or to a select. For example, consider the following table:

Notice that each method of the class that takes a single argument (self) is a new virtual field. self refers to each row of the select. Field values are referred to by full path, as in self.item.unit_price. The table is linked to the virtual fields by appending an instance of the class to the table's virtualfields attribute.

Notice how in this case the syntax is different. The virtual field accesses both self.item.unit_price and self.order_item.quantity which belong to the join select. The virtual field is attached to the rows of the table using the setvirtualfields method of the rows object. This method takes an arbitrary number of named arguments and can be used to set multiple virtual fields, defined in multiple classes, and attach them to multiple tables:

Table "thing" has two fields, the name of the thing and the owner of the thing. The "owner" field is a reference field. A reference type can be specified in two equivalent ways:

Field('owner', 'reference person')
Field('owner', db.person)

The latter is always converted to the former. They are equivalent except in the case of lazy tables, self references or other types of cyclic references where the former notation is the only allowed notation.

When a field type is another table, it is intended that the field reference the other table by its id. In fact, you can print the actual type value and get:

Because a thing has a reference to a person, a person can have many things, so a record of table person now acquires a new attribute thing, which is a Set, that defines the things of that person. This allows looping over all persons and fetching their things easily:

Inner joins

Another way to achieve a similar result is by using a join, specifically an INNER JOIN. web2py performs joins automatically and transparently when the query links two or more tables as in the following example:

Observe that web2py did a join, so the rows now contain two records, one from each table, linked together. Because the two records may have fields with conflicting names, you need to specify the table when extracting a field value from a row. This means that while before you could do:

1

row.name

and it was obvious whether this was the name of a person or a thing, in the result of a join you have to be more explicit and say row.person.name or row.thing.name.
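The SQL that the DAL generates for such a query is an inner join. The equivalent in plain sqlite3, with an illustrative person/thing schema matching the running example:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE thing  (id INTEGER PRIMARY KEY, name TEXT,
                     owner INTEGER REFERENCES person);
INSERT INTO person (name) VALUES ('Alex'), ('Bob'), ('Carl');
INSERT INTO thing (name, owner) VALUES ('Boat', 1), ('Chair', 2), ('Shoes', 1);
""")
# a query linking the two tables becomes an implicit inner join
rows = db.execute("""
    SELECT person.name, thing.name
    FROM person, thing
    WHERE person.id = thing.owner
""").fetchall()
# Carl owns nothing, so he does not appear in the result
```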

Left outer join

Notice that Carl did not appear in the list above because he has no things. If you intend to select on persons (whether they have things or not) and their things (if they have any), then you need to perform a LEFT OUTER JOIN. This is done using the argument "left" of the select command. Here is an example:

does the left join query. Here the argument of db.thing.on is the condition required for the join (the same used above for the inner join). In the case of a left join, it is necessary to be explicit about which fields to select.

Multiple left joins can be combined by passing a list or tuple of db.mytable.on(...) to the left attribute.

Grouping and counting

When doing joins, sometimes you want to group rows according to certain criteria and count them. For example, count the number of things owned by every person. web2py allows this as well. First, you need a count operator. Second, you want to join the person table with the thing table by owner. Third, you want to select all rows (person + thing), group them by person, and count them while grouping:

Notice the count operator (which is built-in) is used as a field. The only issue here is in how to retrieve the information. Each row clearly contains a person and the count, but the count is not a field of a person nor is it a table. So where does it go? It goes into the storage object representing the record with a key equal to the query expression itself. The count method of the Field object has an optional distinct argument. When set to True it specifies that only distinct values of the field in question are to be counted.
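In plain SQL the grouped count looks like this (sqlite3 sketch with the same illustrative schema; a LEFT JOIN is used here so that persons with no things get a count of zero):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE thing  (id INTEGER PRIMARY KEY, name TEXT,
                     owner INTEGER REFERENCES person);
INSERT INTO person (name) VALUES ('Alex'), ('Bob'), ('Carl');
INSERT INTO thing (name, owner) VALUES ('Boat', 1), ('Chair', 2), ('Shoes', 1);
""")
# join person with thing by owner, group by person, count per group
counts = dict(db.execute("""
    SELECT person.name, COUNT(thing.id)
    FROM person LEFT JOIN thing ON person.id = thing.owner
    GROUP BY person.id, person.name
""").fetchall())
# counts == {'Alex': 2, 'Bob': 1, 'Carl': 0}
```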

Many to many

many-to-many

In the previous examples, we allowed a thing to have one owner but one person could have many things. What if Boat was owned by Alex and Curt? This requires a many-to-many relation, and it is realized via an intermediate table that links a person to a thing via an ownership relation.

A lighter alternative to Many 2 Many relations is tagging. Tagging is discussed in the context of the IS_IN_DB validator. Tagging works even on database backends that do not support JOINs like the Google App Engine NoSQL.

list:<type>, and contains

list:string

list:integer

list:reference

contains

multiple

tags

web2py provides the following special field types:

list:string
list:integer
list:reference <table>

They can contain lists of strings, of integers and of references respectively.

On Google App Engine NoSQL, list:string is mapped into a StringListProperty; the other two are mapped into ListProperty(int). On relational databases they are all mapped into text fields which contain the list of items separated by |. For example, [1,2,3] is mapped into |1|2|3|.

For lists of strings the items are escaped so that any | in an item is replaced by ||. Anyway, this is an internal representation and it is transparent to the user.

As usual the requirements are enforced at the level of forms, not at the level of insert.

For list:<type> fields the contains(value) operator maps into a non-trivial query that checks for lists containing the value. The contains operator also works for regular string and text fields, where it maps into a LIKE '%value%'.
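The storage and the contains test can be sketched in plain Python. This mimics the internal representation described above; the exact LIKE pattern web2py emits is an implementation detail:

```python
def encode_list(items):
    # items joined by '|', with '|' inside string items escaped as '||'
    return '|%s|' % '|'.join(str(i).replace('|', '||') for i in items)

stored = encode_list([1, 2, 3])          # '|1|2|3|'

# contains(2) on a list field amounts to a LIKE-style test for '|2|'
def contains(stored, value):
    return '|%s|' % value in stored
```

Note how the surrounding | delimiters make the test exact: contains(stored, 2) matches but contains(stored, 23) does not.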

The list:reference and the contains(value) operator are particularly useful to de-normalize many-to-many relations. Here is an example:

Also notice that this field gets a default represent attribute which represents the list of references as a comma-separated list of formatted references. This is used in read forms and SQLTABLEs.

While list:reference has a default validator and a default representation, list:integer and list:string do not. So these two need an IS_IN_SET or an IS_IN_DB validator if you want to use them in forms.

Other operators

web2py has other operators that provide an API to access equivalent SQL operators. Let's define another table "log" to store security events, their event_time and severity, where the severity is an integer number.

As before, insert a few events, a "port scan", an "xss injection" and an "unauthorized login". For the sake of the example, you can log events with the same event_time but with different severities (1, 2, and 3 respectively).

The DAL also allows a nested select as the argument of the belongs operator. The only caveat is that the nested select has to be a _select, not a select, and only one field has to be selected explicitly, the one that defines the set.

In this case lazy is a nested expression that computes the id of person "Jonathan". The two lines result in one single SQL query.

sum, avg, min, max and len

sum

avg

min

max

Previously, you have used the count operator to count records. Similarly, you can use the sum operator to add (sum) the values of a specific field from a group of records. As in the case of count, the result of a sum is retrieved via the storage object:

>>> sum = db.log.severity.sum()
>>> print db().select(sum).first()[sum]
6

You can also use avg, min, and max to retrieve the average, minimum, and maximum value respectively for the selected records. For example:

>>> max = db.log.severity.max()
>>> print db().select(max).first()[max]
3

.len() computes the length of string, text, and boolean fields.

Expressions can be combined to form more complex expressions. For example, here we are computing the sum of the lengths of all the severity strings in the logs, increased by one:

Substrings

One can build an expression to refer to a substring. For example, we can group things whose name starts with the same three characters and select only one from each group:

db(db.thing).select(distinct=db.thing.name[:3])

Default values with coalesce and coalesce_zero

There are times when you need to pull a value from the database but also need a default value if the value for a record is set to NULL. In SQL there is a keyword, COALESCE, for this. web2py has an equivalent coalesce method:
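In plain SQL terms, COALESCE returns its first non-NULL argument. An sqlite3 sketch with an illustrative sysuser table:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE sysuser (username TEXT, fullname TEXT)")
db.executemany("INSERT INTO sysuser VALUES (?, ?)",
               [('max', 'Max Power'), ('tim', None)])
# one value per record: the fullname when present, else the username
names = [r[0] for r in db.execute(
    "SELECT COALESCE(fullname, username) FROM sysuser")]
```

The DAL's coalesce method renders to exactly this kind of SQL.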

Generating raw sql

raw SQL

Sometimes you need to generate the SQL but not execute it. This is easy to do with web2py since every command that performs database IO has an equivalent command that does not, and simply returns the SQL that would have been executed. These commands have the same names and syntax as the functional ones, but they start with an underscore:

When importing, web2py looks for the field names in the CSV header. In this example, it finds two columns: "person.id" and "person.name". It ignores the "person." prefix, and it ignores the "id" field. Then all records are appended and assigned new ids. Both of these operations can be performed via the appadmin web interface.

CSV (all tables at once)

In web2py, you can backup/restore an entire database with two commands:

To export:

>>> db.export_to_csv_file(open('somefile.csv', 'wb'))

To import:

>>> db.import_from_csv_file(open('somefile.csv', 'rb'))

This mechanism can be used even if the importing database is of a different type than the exporting database. The data is stored in "somefile.csv" as a CSV file where each table starts with one line that indicates the tablename, and another line with the fieldnames:

TABLE tablename
field1, field2, field3, ...

Two tables are separated by \r\n\r\n. The file ends with the line

END

The file does not include uploaded files if these are not stored in the database. In any case it is easy enough to zip the "uploads" folder separately.
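The layout can be sketched as follows. This is a simplified writer for illustration, not web2py's export_to_csv_file:

```python
import csv
import io

def export(tables):
    # tables: {tablename: (fieldnames, records)}
    out = io.StringIO()
    for name, (fields, records) in tables.items():
        out.write('TABLE %s\r\n' % name)   # one line with the table name
        w = csv.writer(out)                # then fieldnames and records
        w.writerow(fields)
        w.writerows(records)
        out.write('\r\n')                  # blank line between tables
    out.write('END')                       # closing marker
    return out.getvalue()

dump = export({'person': (['person.id', 'person.name'], [[1, 'Alex']])})
```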

When importing, the new records will be appended to the database if it is not empty. In general the new imported records will not have the same record id as the original (saved) records but web2py will restore references so they are not broken, even if the id values may change.

If a table contains a field called "uuid", this field will be used to identify duplicates. Also, if an imported record has the same "uuid" as an existing record, the previous record will be updated.

Each record is identified by an ID and referenced by that ID. If you have two copies of the database used by distinct web2py installations, the ID is unique only within each database and not across the databases. This is a problem when merging records from different databases.

In order to make a record uniquely identifiable across databases, they must:

have a unique id (UUID),

have an event_time (to figure out which one is more recent if there are multiple copies),

Note, in the above table definitions, the default value for the two 'uuid' fields is set to a lambda function, which returns a UUID (converted to a string). The lambda function is called once for each record inserted, ensuring that each record gets a unique UUID, even if multiple records are inserted in a single transaction.
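Such a default can be written with the standard uuid module; the key point is that it is a callable, evaluated once per insert, rather than a fixed value:

```python
import uuid

# a fixed string would give every record the same uuid;
# a callable yields a fresh value per insert
make_uuid = lambda: str(uuid.uuid4())

a = make_uuid()
b = make_uuid()
# a and b are distinct 36-character identifiers
```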

3. Create a controller action to import a saved copy of the other database and sync records:

def import_and_sync():
    form = FORM(INPUT(_type='file', _name='data'),
                INPUT(_type='submit'))
    if form.process(session=None).accepted:
        db.import_from_csv_file(form.vars.data.file, unique=False)
        # for every table
        for table in db.tables:
            # for every uuid, delete all but the latest
            items = db(db[table]).select(db[table].id,
                                         db[table].uuid,
                                         orderby=db[table].modified_on,
                                         groupby=db[table].uuid)
            for item in items:
                db((db[table].uuid == item.uuid) &
                   (db[table].id != item.id)).delete()
    return dict(form=form)

Notice that session=None disables the CSRF protection since this URL is intended to be accessed from outside.

4. Create an index manually to make the search by uuid faster.

Notice that steps 2 and 3 work for every database model; they are not specific for this example.

XML-RPC

Alternatively, you can use XML-RPC to export/import the file.

If the records reference uploaded files, you also need to export/import the content of the uploads folder. Notice that files therein are already labeled by UUIDs so you do not need to worry about naming conflicts and references.

HTML and XML (one Table at a time)

Rows objects

Rows objects also have an xml method (like helpers) that serializes them to XML/HTML:

For more information consult the official Python documentation [quoteall]

Caching selects

The select method also takes a cache argument, which defaults to None. For caching purposes, it should be set to a tuple where the first element is the cache model (cache.ram, cache.disk, etc.), and the second element is the expiration time in seconds.

In the following example, you see a controller that caches a select on the previously defined db.log table. The actual select fetches data from the back-end database no more frequently than once every 60 seconds and stores the result in cache.ram. If the next call to this controller occurs in less than 60 seconds since the last database IO, it simply fetches the previous data from cache.ram.

The select method has an optional cacheable argument, normally set to False. When cacheable=True, the resulting Rows is serializable, but the Rows lacks the update_record and delete_record methods.

If you do not need these methods you can speed up selects a lot by setting the cacheable attribute:

1

rows=db(query).select(cacheable=True)

The results of a select are normally complex, un-pickleable objects; they cannot be stored in a session and cannot be cached in any other way than the one explained here unless the cache attribute is set or cacheable=True.

When the cache argument is set but cacheable=False (default) only the database results are cached, not the actual Rows object. When the cache argument is used in conjunction with cacheable=True the entire Rows object is cached and this results in much faster caching:

1

rows=db(query).select(cache=(cache.ram,3600),cacheable=True)

Self-Reference and aliases

self reference

alias

It is possible to define tables with fields that refer to themselves, here is an example:

In general db.tablename and "reference tablename" are equivalent field types, but the latter is the only one allowed for self-references.

with_alias

If the table refers to itself, then it is not possible to perform a JOIN to select a person and its parents without use of the SQL "AS" keyword. This is achieved in web2py using with_alias. Here is an example:

The archive_db=db tells web2py to store the archive table in the same database as the stored_item table. The archive_name sets the name for the archive table. The archive table has the same fields as the original table stored_item, except that unique fields are no longer unique (because it needs to store multiple versions) and it has an extra field, whose name is specified by current_record, which is a reference to the current record in the stored_item table.

When records are deleted, they are not really deleted. A deleted record is copied into the stored_item_archive table (as when it is modified) and the is_active field is set to False. By enabling record versioning, web2py sets a custom_filter on this table that hides all records in table stored_item where the is_active field is set to False. The is_active parameter in the _enable_record_versioning method allows you to specify the name of the field used by the custom_filter to determine whether the record was deleted or not.

custom_filters are ignored by the appadmin interface.

Common fields and multi-tenancy

common fields

multi tenancy

db._common_fields is a list of fields that should belong to all the tables. This list can also contain tables, and it is understood as all fields from the table. For example, occasionally you find yourself in need to add a signature to all your tables but the auth tables. In this case, after auth.define_tables() but before defining any other table, insert

db._common_fields.append(auth.signature)

One field is special: "request_tenant". This field does not exist but you can create it and add it to any of your tables (or them all):

For every table with a field called db._request_tenant, all records for all queries are always automatically filtered by:

1

db.table.request_tenant==db.table.request_tenant.default

and for every record insert, this field is set to the default value. In the example above we have chosen

default = request.env.http_host

i.e. we have chosen to ask our app to filter all tables in all queries with

db.table.request_tenant == request.env.http_host

This simple trick allows us to turn any application into a multi-tenant application: even if we run one instance of the app and use one single database, when the app is accessed under two or more domains (in the example the domain name is retrieved from request.env.http_host), the visitors will see different data depending on the domain. Think of running multiple web stores under different domains with one app and one database.

You can turn off multi tenancy filters using:

ignore_common_filters

1

rows=db(query,ignore_common_filters=True).select()

Common filters

A common filter is a generalization of the above multi-tenancy idea. It provides an easy way to prevent repeating of the same query. Consider for example the following table:

It serves both as a way to avoid repeating the "db.blog_post.is_public==True" phrase in each blog post search, and also as a security enhancement that prevents you from forgetting to disallow viewing of non-public posts.

In case you actually do want items left out by the common filter (for example, allowing the admin to see non-public posts), you can either remove the filter:

db.blog_post._common_filter = None

or ignore it:

db(query, ignore_common_filters=True).select(...)

Custom Field types (experimental)

SQLCustomType

It is possible to define new/custom field types. For example, we consider here the example of a field that contains binary data in compressed form:

SQLCustomType is a field type factory. Its type argument must be one of the standard web2py types. It tells web2py how to treat the field values at the web2py level. native is the name of the field as far as the database is concerned. Allowed names depend on the database engine. encoder is an optional transformation function applied when the data is stored and decoder is the optional reversed transformation function.
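For a compressed field, the encoder/decoder pair could be built on the standard zlib module. This is a sketch of the two transformation functions only; wiring them into SQLCustomType follows the description above:

```python
import zlib

# encoder: applied when data is stored into the native column
encoder = lambda x: zlib.compress(x or b'')
# decoder: the reverse transformation, applied on retrieval
decoder = lambda x: zlib.decompress(x)

data = b'some long repetitive payload ' * 40
stored = encoder(data)      # what would actually reach the database
restored = decoder(stored)  # round trip recovers the original bytes
```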

This feature is marked as experimental. In practice it has been in web2py for a long time and it works but it can make the code not portable, for example when the native type is database specific. It does not work on Google App Engine NoSQL.

Using the DAL without defining tables

That is: import DAL and Field, connect, and specify the folder which contains the .table files (the app/databases folder).

To access the data and its attributes, we still have to define all the tables we are going to access with db.define_table(...).

If we just need access to the data but not to the web2py table attributes, we can get away without re-defining the tables by simply asking web2py to read the necessary info from the metadata in the .table files:
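A sketch of such a connection, using auto_import (the connection string and folder path are placeholders for your own application's values):

```python
from gluon import DAL, Field

# point the DAL at the application's databases/ folder so it can read
# the .table metadata files; auto_import=True asks web2py to define the
# tables from that metadata instead of from explicit define_table calls
db = DAL('sqlite://storage.sqlite',
         folder='path/to/app/databases',
         auto_import=True)
```

After this, db.tables lists the imported tables and they can be queried, although validators and other web2py-level attributes are not restored from the metadata.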

PostGIS, SpatiaLite, and MS Geo (experimental)

PostGIS

SpatiaLite

Geo Extensions

geometry

geoPoint

geoLine

geoPolygon

The DAL supports geographical APIs using PostGIS (for PostgreSQL), SpatiaLite (for SQLite), and MSSQL Spatial Extensions. This feature was sponsored by the Sahana project and implemented by Denes Lengyel.

The DAL provides geometry and geography field types and the following functions:

After running the script you can simply switch the connection string in the model and everything should work out of the box. The new data should be there.

This script provides various command line options that allow you to move data from one application to another, move all tables or only some tables, and clear the data in the tables. For more info try:

python scripts/cpdb.py -h

Note on new DAL and adapters

The source code of the Database Abstraction Layer was completely rewritten in 2010. While it stays backward compatible, the rewrite made it more modular and easier to extend. Here we explain the main logic.

Their use has been explained in the previous sections, except for BaseAdapter. When the methods of a Table or Set object need to communicate with the database, they delegate to methods of the adapter the task of generating the SQL and/or the function call.

Gotchas

SQLite does not support dropping and altering columns. That means that web2py migrations will work only up to a point. If you delete a field from a table, the column will remain in the database but be invisible to web2py. If you decide to reinstate the column, web2py will try to re-create it and fail. In this case you must set fake_migrate=True so that the metadata is rebuilt without attempting to add the column again. Also, for the same reason, SQLite is not aware of any change of column type. If you insert a number in a string field, it will be stored as a string. If you later change the model and replace the type "string" with type "integer", SQLite will continue to keep the number as a string, and this may cause problems when you try to extract the data.
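For example, after reinstating a previously deleted column under SQLite, the metadata can be realigned like this (table and field names are illustrative; remove fake_migrate=True once the .table file is back in sync):

```python
# in the model: the column already exists in the SQLite database,
# so rebuild web2py's .table metadata without issuing any ALTER TABLE
db.define_table('person',
    Field('name'),
    Field('nickname'),   # illustrative: the column being reinstated
    fake_migrate=True)
```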

MySQL does not support multiple ALTER TABLE statements within a single transaction. This means that any migration process is broken into multiple commits. If something happens that causes a failure, it is possible to break a migration (the web2py metadata are no longer in sync with the actual table structure in the database). This is unfortunate, but it can be prevented (migrate one table at a time) or it can be fixed a posteriori (revert the web2py model to what corresponds to the table structure in the database, set fake_migrate=True, and after the metadata has been rebuilt, set fake_migrate=False and migrate the table again).

Google SQL has the same problems as MySQL and more. In particular, the table metadata itself must be stored in the database in a table that is not migrated by web2py, because Google App Engine has a read-only file system. Web2py migrations in Google:SQL, combined with the MySQL issue described above, can result in metadata corruption. Again, this can be prevented (by migrating the table at once and then setting migrate=False so that the metadata table is not accessed any more) or it can be fixed a posteriori (by accessing the database using the Google dashboard and deleting any corrupted entry from the table called web2py_filesystem).

limitby

MSSQL does not support the SQL OFFSET keyword, therefore the database cannot do pagination. When doing limitby=(a,b), web2py will fetch the first b rows and discard the first a. This may result in considerable overhead when compared with other database engines.
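The cost of that emulation can be pictured with a plain-Python sketch of the semantics (a toy model of the row transfer, not web2py code):

```python
def mssql_limitby(rows, a, b):
    """Emulate web2py's limitby=(a, b) on MSSQL: the database returns
    the first b rows (no OFFSET support), then web2py discards the
    first a of them client-side."""
    fetched = rows[:b]   # b rows cross the wire
    return fetched[a:]   # only b - a of them are kept

# rows 10..19 of a 100-row table: 20 rows are fetched, 10 are kept
page = mssql_limitby(list(range(100)), 10, 20)
```

So paginating deep into a large table becomes progressively more expensive as a grows.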

Oracle also does not support pagination; it supports neither the OFFSET nor the LIMIT keyword. Web2py achieves pagination by translating a db(...).select(limitby=(a,b)) into a complex three-way nested select (as suggested by official Oracle documentation). This works for simple selects but may break for complex selects involving aliased fields and/or joins.

MSSQL has problems with circular references in tables that have ONDELETE CASCADE. This is an MSSQL bug and you can work around it by setting the ondelete attribute for all reference fields to "NO ACTION". You can also do it once and for all before you define the tables:

MSSQL also has problems with arguments passed to the DISTINCT keyword; therefore, while this works,

db(query).select(distinct=True)

this does not

db(query).select(distinct=db.mytable.myfield)

Google NoSQL (Datastore) does not allow joins, left joins, aggregates, expressions, OR conditions involving more than one table, or the 'like' operator for searching in "text" fields. Transactions are limited and not provided automatically by web2py (you need to use the Google API run_in_transaction, which you can look up in the Google App Engine documentation online). Google also limits the number of records you can retrieve in each query (1000 at the time of writing). On the Google Datastore, record IDs are integers but they are not sequential. While on SQL the "list:string" type is mapped into a "text" type, on the Google Datastore it is mapped into a ListStringProperty. Similarly, "list:integer" and "list:reference" are mapped into a ListProperty. This makes searches for content inside these field types more efficient on Google NoSQL than on SQL databases.