Introduction

pgasync is a Twisted-based PostgreSQL
client library that fully conforms to the DB API 2.0 specification and
provides adbapi compatibility. It's
designed to run quickly and scale well, and it provides connection pooling
and persistence, database types, and safe conversions.

For the uninitiated: If you're into
Twisted,
this project is potentially very interesting to you. If you don't
use Twisted, or have never heard of it, you probably won't care.

Recent Changes

- Errback handling fixes.
- Removed the useless explicit conversion of integers to NUMBER when no special formatting is done on them.
- Exposed __int__, __float__, and friends in NUMBER in case a NUMBER instance is passed to format().
- %03d-style formatting now works in format(), and a trailing semicolon can be included on queries.
- Added boolean support (thanks: Andrea Arcangeli).
- Experimental Unicode support (thanks: Matt Goodall).
- Unix socket support added (thanks: Stephen Early).
- Added a convertBinary() function to get stored binary data back out of the database.

- connection.cursor() is no longer deferred.
- Pre-connection queries are now queued.
- Transaction awareness was implemented to avoid unnecessary BEGINs and ROLLBACKs.
- connection.exFetch was added for convenience.

- execute()'s params argument is now optional.
- Queries are now whitespace-stripped when looking for 'SELECT' to create a cursor.
- Docstrings were corrected to reflect the new param style.
(credit: Matt Goodall - mg)

- Proper deferred error handling was added.
- The ability to manually release() a connection back into the pool was implemented.
- Format arguments can now take a Python datetime object.
- A simple nevow/wiki example is now included in the distribution.

Installation

pgasync uses distutils. To install, simply
execute the following as root in the unpacked distribution directory:

# python setup.py install

This will usually "just work", but there are two places where you may
run into problems. First,
you may encounter a compilation error on cache.c.
If that happens, you can try using
Pyrex
to regenerate cache.c
from the cache.pyx file in
pgasync/. See the README for
more detail.

The other problem that may occur is that compilation of convert.c
fails or gives warnings about "implicit declaration" of the functions
htons and/or htonl.
If this does happen, I'd appreciate an email about what platform/os you're on so I can
track down the proper include file. Linux and FreeBSD should work.

Notes

If you observe these notes,
you'll have a good understanding of the few places
where pgasync differs from standard synchronous DB API or adbapi
libraries, and of the bit of special treatment and thought it needs.

Keep a ConnectionPool object global if you'd like, perhaps
using site.remember in Nevow. Call the run* methods
on this pool, and create Connection objects for transactional
work using pool.connect() when you need to. *Don't* try to keep
a global Connection object.
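The pattern above can be sketched with stand-in classes. This is not pgasync's implementation -- the names ConnectionPool, runQuery, and connect follow the API described in these notes, but the bodies here are illustrative fakes:

```python
# Illustrative stand-ins for the recommended shape: ONE global pool,
# short-lived Connection objects. NOT pgasync itself; bodies are fake.

class Connection:
    """A light, per-transaction handle onto the pool."""
    def __init__(self, pool):
        self.pool = pool

class ConnectionPool:
    def __init__(self, min=0, max=20):
        self.min, self.max = min, max

    def runQuery(self, query):
        # In pgasync this returns a Deferred; here we just echo.
        return "ran: %s" % query

    def connect(self):
        # Cheap: just wraps the pool, no blocking handshake.
        return Connection(self)

# Keep one pool global (e.g. via site.remember in Nevow)...
POOL = ConnectionPool()

# ...use run* methods for one-shot work:
result = POOL.runQuery("SELECT 1")

# ...and a fresh Connection only when you need a transaction:
conn = POOL.connect()
```

The point is that the pool, not a Connection, is the long-lived object; connections are created on demand and thrown away.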

runIteration is there, but its usage is not recommended due
to several limitations. pgasync's runIteration will not call your
passed function in a thread, so don't block! Also, you will need
to add callbacks to any SELECT queries, so existing runIteration
functions that assume a synchronous DB API will not work!
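To make the callback requirement concrete, here is a toy sketch. FakeDeferred and FakeCursor are invented stand-ins (pgasync hands you Twisted's real Deferred); the shape of the interaction function is what matters:

```python
# Stand-in showing why sync-style interaction code breaks: a SELECT
# hands back a Deferred-like object, not rows, so results must be
# consumed via a callback. Minimal fake Deferred for illustration.

class FakeDeferred:
    def __init__(self, result):
        self.result = result
    def addCallback(self, fn):
        self.result = fn(self.result)
        return self

class FakeCursor:
    def execute(self, query):
        return FakeDeferred([("row1",), ("row2",)])

collected = []

def interaction(cur):
    # WRONG (sync style): cur.execute(...); rows = cur.fetchall()
    # RIGHT: attach a callback to the returned Deferred instead:
    d = cur.execute("SELECT name FROM users")
    d.addCallback(collected.extend)

interaction(FakeCursor())
```

An existing interaction function written against a blocking driver has to be restructured this way before it will work.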

pgasync uses query queues. This means that you can
call execute("query") in a tight loop 100 times without
worrying about blocking, or waiting for a callback before
the next query gets executed. The protocol layer will
queue outgoing queries (and callbacks) and process them
when the backend is ready. This means calling
execute("query"); fetchone() without invoking fetchone()
from execute()'s callback is no problem whatsoever. Only
use callbacks when you really care about the timing,
completion, or results of a query.
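A toy model of that queueing behavior, assuming nothing about pgasync's internals beyond what this paragraph describes (QueuedProtocol and backend_ready are invented names):

```python
# Toy model of the protocol layer's query queue: execute() never
# blocks; queries (with optional callbacks) are queued and drained in
# order once the backend is "ready". Not pgasync's real code.

class QueuedProtocol:
    def __init__(self):
        self.queue = []
        self.ready = False
        self.log = []

    def execute(self, query, callback=None):
        self.queue.append((query, callback))
        if self.ready:
            self._drain()

    def backend_ready(self):
        self.ready = True
        self._drain()

    def _drain(self):
        while self.queue:
            query, callback = self.queue.pop(0)
            self.log.append(query)          # "send" to the backend
            if callback:
                callback(query)

proto = QueuedProtocol()
for i in range(100):                        # tight loop: never blocks
    proto.execute("INSERT ... -- %d" % i)
proto.execute("SELECT 1")
proto.backend_ready()                       # all 101 run, in order
```

Ordering is preserved by the queue, which is why execute-then-fetchone works without chaining them through a callback.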

By default, pgasync will close unused connections
that have been idle for a certain number of seconds. It
obeys ConnectionPool's min and max attributes. Values of
zero mean "no minimum" and "no maximum." The default is
no minimum, with a maximum of 20 concurrent backend connections.
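The min/idle-timeout interaction can be sketched like this. IdlePool and reap are invented names for illustration; pgasync's actual reaping logic is not shown in this document:

```python
# Toy idle-reaper illustrating the min semantics: idle connections past
# the timeout are closed, but the pool never shrinks below `min`
# (min == 0 means no floor; max would be enforced at checkout, not here).

class IdlePool:
    def __init__(self, min=0, max=20, idle_timeout=30):
        self.min, self.max, self.idle_timeout = min, max, idle_timeout
        self.idle = []                      # list of (name, last_used)

    def reap(self, now):
        fresh = [c for c in self.idle if now - c[1] <= self.idle_timeout]
        stale = [c for c in self.idle if now - c[1] > self.idle_timeout]
        # "close" stale connections, but never drop below the minimum
        while stale and len(fresh) + len(stale) > self.min:
            stale.pop()
        self.idle = fresh + stale

pool = IdlePool(min=1, max=20, idle_timeout=30)
pool.idle = [("a", 0), ("b", 0), ("c", 25)]
pool.reap(now=40)                           # a and b stale, c fresh
```

With min=1 and one fresh connection remaining, both stale connections can be closed; with no fresh connections, one stale connection would have been kept to satisfy the floor.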

Always call cursor.release() when you're done using a cursor.
If you don't, connections won't be reused! release() is queued,
so you can call it even while you still have queries pending.

In pgasync, a connection can only have one cursor at
a time. This means the recommended way to utilize pgasync
is to create a cursor from a fresh connection every time.
The ConnectionPool object will do this for you via
runOperation, runQuery, runInteraction.
The "Connection" object is very, very light and inexpensive
in pgasync--it's just an abstraction of the pool. In fact,
connect() doesn't even block. Cursor
creation is the key step. So always connect() or pool.connect(),
then conn.cursor().
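A stand-in sketch of that flow, with fake classes (pgasync's real ones hand back Deferreds; the `available` list here is just a prop):

```python
# Fake classes sketching the recommended flow: connect() is free (it
# only wraps the pool), cursor() claims a real backend connection, and
# release() hands it back for reuse. NOT pgasync's implementation.

class Pool:
    def __init__(self):
        self.available = ["backend-1", "backend-2"]

    def connect(self):
        return Connection(self)             # no I/O, never blocks

class Connection:
    def __init__(self, pool):
        self.pool = pool

    def cursor(self):
        # The expensive step: claims one real backend connection.
        return Cursor(self.pool, self.pool.available.pop())

class Cursor:
    def __init__(self, pool, backend):
        self.pool, self.backend = pool, backend

    def release(self):
        # Hand the backend connection back so it can be reused.
        self.pool.available.append(self.backend)
        self.backend = None

pool = Pool()
cur = pool.connect().cursor()               # one cursor per connection
# ... run queries ...
cur.release()                               # always release when done
```

Because Connection is just a wrapper, creating a fresh one per cursor costs essentially nothing, which is why this pattern is recommended over holding a Connection around.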

Subversion

Public subversion read access is available:

svn co svn://svn.jamwt.com/pgasync/trunk/pgasync

Author

pgasync was written by Jamie Turner.
I'd appreciate hearing about any bugs or suggestions.

I'm often on #twisted.web
on irc.freenode.net during working hours, Pacific time.
You can subscribe to the mailing list using the address below.