On Sun, 2003-10-05 at 07:08, Sidnei da Silva wrote:
> I have Sybase support lying on a branch, but I won't be able to test it
> more thoroughly nor merge the branch before Oct 15, so that's going to
> have to wait for the next version :)
If it's not polished, I don't really care so long as it doesn't affect
other code. We'll just document it as experimental. The same can go
for Oracle.
Ian

On Sun, Oct 05, 2003 at 03:35:55AM -0500, Ian Bicking wrote:
| I don't think there's much left before 0.5, and there's a lot of
| bugfixes that should be released in a proper version. In Firebird and
| SQLite there is the string quoting issue, which has to get fixed (along
| with booleans). Are there any other things that I'm forgetting about?
|
| Also, someone mentioned PyGreSQL support at some point, but I lost the
| details. I tried adding it, but it acted weird, and I didn't try very
| hard. Can anyone test what's in CVS and see what needs to be done?
| Does anyone care about PoPy?
|
| Firebird also still has the cursor issue, but I think it's otherwise in
| good shape.
I have Sybase support lying on a branch, but I won't be able to test it
more thoroughly nor merge the branch before Oct 15, so that's going to
have to wait for the next version :)
--
Sidnei da Silva <sidnei@...>
dreamcatching :: making your dreams come true
http://dreamcatcher.homeunix.org
Unix is a Registered Bell of AT&T Trademark Laboratories.
-- Donn Seeley

I don't think there's much left before 0.5, and there's a lot of
bugfixes that should be released in a proper version. In Firebird and
SQLite there is the string quoting issue, which has to get fixed (along
with booleans). Are there any other things that I'm forgetting about?
Also, someone mentioned PyGreSQL support at some point, but I lost the
details. I tried adding it, but it acted weird, and I didn't try very
hard. Can anyone test what's in CVS and see what needs to be done?
Does anyone care about PoPy?
Firebird also still has the cursor issue, but I think it's otherwise in
good shape.
Ian

On 2003-10-04 22:40:30 +0200 Ian Bicking <ianb@...> wrote:
hello,
> That's not really true anymore for CVS -- transactions aren't really
> working properly in 0.4. CVS has an expire method on SQLObject
> instances, which is called when an object is rolled back -- since 0.4
> didn't have this you couldn't use the attribute caching with
> transactions (since it wouldn't be accurate after a rollback).
ok :) - i am currently using the CVS version because i only got a traceback
when using transactions under 0.4.
btw, there's a typo in the 'transaction' chapter of the documentation
('_cacheValue' instead of '_cacheValues').
> However, while rollback expires objects in the transaction, commit
> doesn't expire objects outside of the transaction. But I'm not really
> sure how that should work anyway.
are there any reasons to mix queries which use transactions with queries
which don't in an application?
> Seems like you could go further, and just do:
>
> Transaction._abort = Transaction.rollback
> Transaction._finish = Transaction.commit
> Transaction._begin = lambda self: None
>
> Then (if I read your proxy correctly) the transaction can be used
> directly. Though really that should be done in a subclass of
> Transaction, and then mess with the .transaction() method to use a
> different class.
subclassing from both Transaction and TM isn't that easy/clean because
method names are overlapping (e.g. commit()). just adding the methods to the
class doesn't work either, because the transaction has to register itself
via the _register method first (which calls TM's commit() method). (i have
attached a new shorter version of the transaction proxy which i am currently
using.)
> As for passing the transaction, yes, it's a little annoying. I don't
> know how Zope does this for Z SQL Methods, but the problem is pretty
> much the same. Does Zope use Acquisition for this, or maybe using one
> transaction per thread?
i guess it's using acquisition, but i haven't looked into the source yet.
cu
/gst
<oodb.py>

On Sat, 2003-10-04 at 15:08, Günther Starnberger wrote:
> hello,
>
> i just played a little bit around with sqlobject and have a few questions:
>
> 1) according to the documentation "_cacheValue = False" should be used when
> using transactions. what is the reason for this (especially when using
> serialized transactions i can't see any cause for problems when using
> caching)?
That's not really true anymore for CVS -- transactions aren't really
working properly in 0.4. CVS has an expire method on SQLObject
instances, which is called when an object is rolled back -- since 0.4
didn't have this you couldn't use the attribute caching with
transactions (since it wouldn't be accurate after a rollback).
However, while rollback expires objects in the transaction, commit
doesn't expire objects outside of the transaction. But I'm not really
sure how that should work anyway.
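The staleness problem Ian describes can be reproduced with a plain DB-API connection. A hypothetical sketch using sqlite3 (not SQLObject's actual expire machinery -- the dict stands in for an instance's attribute cache):

```python
import sqlite3

# In-memory database with one committed row we will cache, then roll back an update.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO person (id, name) VALUES (1, 'Doe')")
conn.commit()

cache = {}  # stand-in for an instance's cached attribute values

conn.execute("UPDATE person SET name = 'Smith' WHERE id = 1")
cache[1] = "Smith"          # cached inside the open transaction
conn.rollback()             # the UPDATE is undone...

row = conn.execute("SELECT name FROM person WHERE id = 1").fetchone()
print(cache[1], row[0])     # cache says 'Smith', database says 'Doe'
```

This is exactly the inaccuracy after rollback that expire() is meant to fix: without expiring, the cache keeps the rolled-back value.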
> 2) are sqlobject calls (e.g. connection.transaction()) threadsafe or do i
> need to take care of this?
Yes, these should be threadsafe, though it depends on the exact context
of course -- SQLObject can't address all concurrency issues. That call
in particular should be threadsafe. For the most part all SQLObject
calls should be threadsafe, but that doesn't mean your instances will be
threadsafe, depending on how you use them.
I haven't really made use of the DB-API threadsafety variable. Though
generally SQLObject is assuming a lowest-common-denominator, where
threads cannot concurrently use the same connection (threadsafety level
1). There is one specified lower level (the whole module is not
threadsafe), but I don't think any supported databases have that
problem.
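One common way to honor threadsafety level 1 is a connection per thread. A sketch using threading.local and sqlite3 (an illustration of the constraint, not SQLObject's actual connection handling):

```python
import sqlite3
import threading

# Each thread lazily opens its own connection, so no connection is ever
# shared between threads (the DB-API threadsafety level 1 assumption).
_local = threading.local()

def get_connection():
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect(":memory:")
    return _local.conn

conns = []  # keep references so the connections stay alive for comparison

def worker():
    conns.append(get_connection())

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Three threads end up with three distinct connections.
print(len({id(c) for c in conns}))   # 3
```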
> 3) btw, i created a small proxy class to bind sqlobject transactions to zope
> transactions (it's attached to this mail if anyone is interested). to use
> this class it's required to run zope under python2.2 (that's already the
> default in debian/unstable).
>
> usage example:
> def frob(self, REQUEST):
> 'testmethod'
> trans = oodb.new_transaction()
> person = oodb.Person6.select(connection = trans)
>
> (manually passing the transaction to each method is a little bit ugly, but i
> haven't figured out yet how to do this automatically.)
Seems like you could go further, and just do:
Transaction._abort = Transaction.rollback
Transaction._finish = Transaction.commit
Transaction._begin = lambda self: None
Then (if I read your proxy correctly) the transaction can be used
directly. Though really that should be done in a subclass of
Transaction, and then mess with the .transaction() method to use a
different class.
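The subclass Ian suggests would look roughly like this; the Transaction here is a stub standing in for SQLObject's class, and _begin/_abort/_finish are the method names Zope's transaction-manager protocol expects:

```python
class Transaction:
    """Stub with the two methods SQLObject's Transaction provides."""
    def __init__(self):
        self.state = "open"
    def rollback(self):
        self.state = "rolled back"
    def commit(self):
        self.state = "committed"

class ZopeTransaction(Transaction):
    """Adapts Transaction to Zope's _begin/_abort/_finish protocol."""
    def _begin(self):
        pass                    # SQLObject transactions start implicitly
    _abort = Transaction.rollback
    _finish = Transaction.commit

t = ZopeTransaction()
t._finish()
print(t.state)   # committed
```

Reusing the inherited methods under the protocol names, rather than monkey-patching Transaction itself, keeps the base class untouched for non-Zope users.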
As for passing the transaction, yes, it's a little annoying. I don't
know how Zope does this for Z SQL Methods, but the problem is pretty
much the same. Does Zope use Acquisition for this, or maybe using one
transaction per thread?
Modeling has a notion of an "editing context" which can make sense.
Really that's just a different phrasing of the same concept -- but
instead of passing the connection to your class, your class gets passed
to the connection.
Ian

hello,
i just played a little bit around with sqlobject and have a few questions:
1) according to the documentation "_cacheValue = False" should be used when
using transactions. what is the reason for this (especially when using
serialized transactions i can't see any cause for problems when using
caching)?
2) are sqlobject calls (e.g. connection.transaction()) threadsafe or do i
need to take care of this?
3) btw, i created a small proxy class to bind sqlobject transactions to zope
transactions (it's attached to this mail if anyone is interested). to use
this class it's required to run zope under python2.2 (that's already the
default in debian/unstable).
usage example:
def frob(self, REQUEST):
'testmethod'
trans = oodb.new_transaction()
person = oodb.Person6.select(connection = trans)
(manually passing the transaction to each method is a little bit ugly, but i
haven't figured out yet how to do this automatically.)
cu
/gst
<oodb.py>

On Friday, October 3, 2003, at 02:44 PM, Ian Bicking wrote:
> On Friday, October 3, 2003, at 01:40 PM, Brad Bollenbach wrote:
>>> Ideally I'd like to be able to perform this type of query using
>>> Python's "in" operator. From reading through SQLBuilder.py this
>>> seems to be already overloaded to support SQL "something IN
>>> ('blah1', 'blah2')" operations. Is there any way this could be
>>> intelligently overloaded to perform like %% queries on strings and IN
>>> queries on tuples?
>>
>> Python's "in" operator checks for the existence of a value in a
>> sequence. What you're trying to do is ask for all the objects (or all
>> the SQLObjects, as it were :) that meet a certain criteria. The
>> semantics are different.
>
> Not exactly -- SQLBuilder (which you get at with SomeSQLObject.q)
> captures Python expressions (to the degree possible), which can then
> be turned into SQL clauses. In this particular case ("in") I don't
> think Python makes it possible to do this, but several operators can
> be overloaded like this.
Perhaps the semantics aren't really completely different, but as you
say, 1. Python would actually have to allow this (it may or may not, I
haven't checked), 2. it would have to be written like:
Foo.select('bar' in Foo.q.baz)
instead of the normal:
Foo.select(Foo.q.baz == 'bar')
used for direct comparisons. This would be too inconsistent (and
perhaps too easy to confuse with writing the other way around), IMHO.
--
Brad Bollenbach
BBnet.ca

On Friday, October 3, 2003, at 01:40 PM, Brad Bollenbach wrote:
>> Ideally I'd like to be able to perform this type of query using
>> Python's "in" operator. From reading through SQLBuilder.py this seems
>> to be already overloaded to support SQL "something IN ('blah1',
>> 'blah2')" operations. Is there any way this could be intelligently
>> overloaded to perform like %% queries on strings and IN queries on
>> tuples?
>
> Python's "in" operator checks for the existence of a value in a
> sequence. What you're trying to do is ask for all the objects (or all
> the SQLObjects, as it were :) that meet a certain criteria. The
> semantics are different.
Not exactly -- SQLBuilder (which you get at with SomeSQLObject.q)
captures Python expressions (to the degree possible), which can then be
turned into SQL clauses. In this particular case ("in") I don't think
Python makes it possible to do this, but several operators can be
overloaded like this.
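The capture trick can be shown with a minimal stand-in for SQLBuilder (a hypothetical Field class; the real SQLBuilder covers many more operators and does proper SQL quoting):

```python
class Field:
    """Captures comparison expressions instead of evaluating them."""
    def __init__(self, name):
        self.name = name
    def __eq__(self, value):
        return "%s = %r" % (self.name, value)
    def __lt__(self, value):
        return "%s < %r" % (self.name, value)
    # "value in field" would call __contains__, but Python coerces its
    # result to a bool, so "in" can't be captured -- Ian's point above.

baz = Field("baz")
print(baz == 'bar')    # baz = 'bar'
print(baz < 10)        # baz < 10
```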
Ian

On Friday, October 3, 2003, at 10:36 AM, Simon Willison wrote:
> I just spent some time trying to figure out how to execute the
> equivalent of a "where col like '%text%'" query using SQLObject. I'm
> now using the following:
>
> results = Story.select(CONTAINSSTRING(Story.q.body, query))
>
> This is a bit of an eye-sore, especially compared to the neat
> Story.q.body.startswith('blah') and Story.q.body.endswith('blah')
> shortcuts.
>
> Ideally I'd like to be able to perform this type of query using
> Python's "in" operator. From reading through SQLBuilder.py this seems
> to be already overloaded to support SQL "something IN ('blah1',
> 'blah2')" operations. Is there any way this could be intelligently
> overloaded to perform like %% queries on strings and IN queries on
> tuples?
Hmm... well, there's two string situations, (str in col) and (col in
str). (str in col) means "col LIKE '%str%'", and (col in str) means
"str LIKE '%'||col||'%'". And then (col in tuple) means "col IN
(tuple...)".
But I don't think (col in str) can work, because we can't override
str's __contains__. That's also why we can't really do it with tuples
(hence the IN() function -- though that function could be extended to
take strings in addition to tuples).
> Alternatively, a contains() method similar to startswith() and
> endswith() would be useful.
That seems better. Since we can't make the "in" operator work for all
cases, I think it will just be more confusing if it works some places
and not others. Easier to just add another method.
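Such a contains() might expand to a LIKE clause along these lines (a sketch with hypothetical names -- the real helper is SQLBuilder's CONTAINSSTRING, and production code should use parameter binding and backend-specific escaping):

```python
def like_contains(column, text):
    """Build a LIKE clause matching rows whose column contains text.

    Escapes LIKE wildcards in the search text; quote escaping is
    deliberately ignored here -- this is only an illustration.
    """
    escaped = text.replace("%", r"\%").replace("_", r"\_")
    return "%s LIKE '%%%s%%'" % (column, escaped)

print(like_contains("body", "blah"))   # body LIKE '%blah%'
```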
Ian

On Friday, October 3, 2003, at 11:36 AM, Simon Willison wrote:
> I just spent some time trying to figure out how to execute the
> equivalent of a "where col like '%text%'" query using SQLObject. I'm
> now using the following:
>
> results = Story.select(CONTAINSSTRING(Story.q.body, query))
>
> This is a bit of an eye-sore, especially compared to the neat
> Story.q.body.startswith('blah') and Story.q.body.endswith('blah')
> shortcuts.
You're in luck. select() takes a string as a where argument. So:
Story.select("body like '%something%'")
(replaced with proper variables and escaping the %'s as necessary, if
applicable.)
The startswith and endswith are just methods of Python string objects,
and because SQLObject is (mostly) transparent, string columns act like
Python strings.
> Ideally I'd like to be able to perform this type of query using
> Python's "in" operator. From reading through SQLBuilder.py this seems
> to be already overloaded to support SQL "something IN ('blah1',
> 'blah2')" operations. Is there any way this could be intelligently
> overloaded to perform like %% queries on strings and IN queries on
> tuples?
Python's "in" operator checks for the existence of a value in a
sequence. What you're trying to do is ask for all the objects (or all
the SQLObjects, as it were :) that meet certain criteria. The
semantics are different.
> Alternatively, a contains() method similar to startswith() and
> endswith() would be useful.
Python strings already have a find() method, but I think you meant to
pass a string as the where clause.
Hope that helps,
--
Brad Bollenbach
BBnet.ca

I just spent some time trying to figure out how to execute the
equivalent of a "where col like '%text%'" query using SQLObject. I'm now
using the following:
results = Story.select(CONTAINSSTRING(Story.q.body, query))
This is a bit of an eye-sore, especially compared to the neat
Story.q.body.startswith('blah') and Story.q.body.endswith('blah') shortcuts.
Ideally I'd like to be able to perform this type of query using Python's
"in" operator. From reading through SQLBuilder.py this seems to be
already overloaded to support SQL "something IN ('blah1', 'blah2')"
operations. Is there any way this could be intelligently overloaded to
perform like %% queries on strings and IN queries on tuples?
Alternatively, a contains() method similar to startswith() and
endswith() would be useful.
Cheers,
Simon

On Thursday, October 2, 2003, at 12:43 PM, Scott Chapman wrote:
> In SQLObject.html, it shows the following:
>
> p = Person.new(firstName="John", lastName="Doe")
> #>> QueryIns:
> # INSERT INTO person (last_name, middle_initial, first_name)
> # VALUES ('Doe', NULL, 'John')
> #
> #-- Not quite optimized, we don't remember the values we used to
> #-- create the object, so they get re-fetched from the database:
> #>> QueryOne:
> # SELECT last_name, middle_initial, first_name
> # FROM person
> # WHERE id = 1
>
> Where does it learn that the new Person has an ID of 1?
>
> I tried the same commands manually and the insert doesn't return the
> ID. How
> does SQLObject know what the ID is?
It's database-specific, and usually uses some DB-API extension. On
MySQL there's a lastrowid method (or something like that) that you can
get the ID from after an insert, and Postgres has a lastoid attribute
(or something like that). SQLite too. Firebird uses
generators/sequences, and it gets the ID and then uses it in the
insert. Oracle will do the same. So SQLObject is papering over this
diversity.
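For SQLite, the extension in question is the cursor's lastrowid attribute; a minimal sqlite3 illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, last_name TEXT)")

cur = conn.cursor()
cur.execute("INSERT INTO person (last_name) VALUES ('Doe')")
# The DB-API extension: the rowid the INSERT just generated.
print(cur.lastrowid)   # 1
```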
Ian

On Thursday 02 October 2003 12:27, Luke Opperman wrote:
> > Where does it learn that the new Person has an ID of 1?
> >
> > I tried the same commands manually and the insert doesn't return the ID.
> > How does SQLObject know what the ID is?
>
> Depends on the database/dbdriver. You'll find the implementation in
> SQLObject/DBConnection.py, _queryInsertID() for each DBConnection-class.
> Some Python db interfaces provide a non-standard option for getting this
> from the cursor (MySQL, SQLite), some require a second query of the
> database (Postgres using OIDs, for instance).
I'm using Postgres. It seems like OIDs would be a good replacement for ID in
this case (save some processing). However, I think they can roll over and
that might cause a serious problem.
Thanks for the answer.
Cordially,
Scott

>
> Where does it learn that the new Person has an ID of 1?
>
> I tried the same commands manually and the insert doesn't return the ID.
> How does SQLObject know what the ID is?
Depends on the database/dbdriver. You'll find the implementation in
SQLObject/DBConnection.py, _queryInsertID() for each DBConnection-class. Some
Python db interfaces provide a non-standard option for getting this from the
cursor (MySQL, SQLite), some require a second query of the database (Postgres
using OIDs, for instance).
- Luke

In SQLObject.html, it shows the following:
p = Person.new(firstName="John", lastName="Doe")
#>> QueryIns:
# INSERT INTO person (last_name, middle_initial, first_name)
# VALUES ('Doe', NULL, 'John')
#
#-- Not quite optimized, we don't remember the values we used to
#-- create the object, so they get re-fetched from the database:
#>> QueryOne:
# SELECT last_name, middle_initial, first_name
# FROM person
# WHERE id = 1
Where does it learn that the new Person has an ID of 1?
I tried the same commands manually and the insert doesn't return the ID. How
does SQLObject know what the ID is?
Confused,
Scott

On Wednesday, October 1, 2003, at 06:25 PM, Simon Willison wrote:
> When you run a SQLObject SELECT statement and get back an array of
> objects matching that select, are all of the represented rows from the
> database loaded in to memory? If so, that could cause a great deal of
> memory usage for large queries. An interesting trick I saw a while ago
> was using generators to yield a single row from a query at a time,
> allowing code to iterate through a whole database result set without
> first having to load everything in to a Python list:
>
> http://www.halfcooked.com/mt/archives/000497.html
>
> Are there any plans to add some kind of mechanism to SQLObject to
> support this kind of more efficient SELECT access? Or does it exist
> already and I just haven't spotted it yet?
Yes, this is what SQLObject already does. SQLObject.SelectResults is
what .select() returns, and when you iterate over it you get an
iterator from DBConnection.DBAPI._iterSelect
(How exactly this interacts with caching under different loads, I'm not
as sure.)
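The generator approach looks roughly like this against a raw DB-API cursor (a sketch; SQLObject's _iterSelect additionally wraps each row in an instance and consults the cache):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE story (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO story (body) VALUES (?)",
                 [("a",), ("b",), ("c",)])

def iter_select(connection, query):
    """Yield rows one at a time instead of fetchall()-ing them all."""
    cur = connection.cursor()
    cur.execute(query)
    while True:
        row = cur.fetchone()
        if row is None:
            break
        yield row

bodies = [body for (_id, body) in iter_select(conn, "SELECT id, body FROM story")]
print(bodies)   # ['a', 'b', 'c']
```

Only one row is held in Python at a time while iterating, which is what keeps memory flat for large result sets.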
Ian

When you run a SQLObject SELECT statement and get back an array of
objects matching that select, are all of the represented rows from the
database loaded in to memory? If so, that could cause a great deal of
memory usage for large queries. An interesting trick I saw a while ago
was using generators to yield a single row from a query at a time,
allowing code to iterate through a whole database result set without
first having to load everything in to a Python list:
http://www.halfcooked.com/mt/archives/000497.html
Are there any plans to add some kind of mechanism to SQLObject to
support this kind of more efficient SELECT access? Or does it exist
already and I just haven't spotted it yet?
Cheers,
Simon Willison
http://simon.incutio.com/

A friendly SQLObject proponent, Victor Ng (http://www.crankycoder.com/archives/000066.html), pointed out to me that I had forgotten to import my data model into my servlet code.
As a newbie to Python and SQLObject, these things happen very easily. I wish there were some extensive examples of SQLObject usage beyond the 'official' documentation. Do you know of any 'reference' implementations for public view?
regards,
- vidar
>
> From: Ian Bicking <ianb@...>
> Date: 2003/09/30 Tue PM 10:09:01 EDT
> To: vidar@...
> CC: <sqlobject-discuss@...>
> Subject: Re: [SQLObject] SQLObject and Webware newbie
>
> On Tuesday, September 30, 2003, at 09:02 PM, Vidar Brekke wrote:
> > but when I try to execute this code within webkit (below), I get an
> > error saying "NameError: name 'Member' is not defined"
>
> Are you importing the SQLObject class properly? Is the name of the
> servlet clobbering the name of the SQLObject class?
>
> Ian
>
>

--- Ian Bicking <ianb@...> wrote:
> DeleteSelectTest.testSelect, PeopleTest.dynamicJoin:
> The Cursor unknown error -- some problem with
> using a connection with
> two cursors, or something. It happens when an
> iterator still is using
> a cursor and you execute another statement. The
> exact problem I'm not
> sure of, nor the resolution.
I'm not sure if this is the same issue or not, but it
seems very similar:
http://kinterbasdb.sourceforge.net/dist_docs/usage.html#db_api_optional_unsupported
> Firebird is really slow for me. Like, the tests
> take seven minutes to
> run, compared to 3sec for Mysql, and 9sec for
> Postgres (MySQL doesn't
> actually do as many tests). Are other people
> getting similar results?
Although Firebird can be as slow as a turtle with bad
knees when doing metadata updates, seven minutes for
the tests is excessive. For me, they run in 36 sec.
on doze, 9 seconds on Linux (vs. 3 sec and 2 sec for
mysql).
Are you using SuperServer version or Classic? SS is
known to have issues under Linux.
This article may help:
http://www.ibphoenix.com/a519.htm
Also, consider changing/setting a pagesize when you
create the database.
HTH,
James

Ian,
> Firebird is really slow for me. Like, the tests take seven minutes to
> run, compared to 3sec for Mysql, and 9sec for Postgres (MySQL doesn't
> actually do as many tests). Are other people getting similar results?
> I'm *hoping* my Firebird is badly configured, because I'm not that patient.
My guess is that you are using Firebird Classic (a process started by
init.d for each connection) and the tests are creating lots of
connections very quickly. If that is the case then init.d or xinetd.d
can put a limit on the rate at which they will create new processes.
We have had this problem on some machines used by developers.
I am using Mandrake 9.1 which has xinetd (seems to be an improvement
over init.d) and has this in the firebird description in
/etc/xinetd.d/firebird
# default: on
# description: Interbase server
service gds_db
{
flags = REUSE
socket_type = stream
wait = no
user = root
log_on_success += USERID
log_on_failure += USERID
server = /opt/interbase/bin/gds_inet_server
disable = no
}
I think the wait value is probably the critical one.
Sorry I can't remember how we fixed it on init.d
Dave
--
David Warnock: http://davew.typepad.com/42 | Sundayta Ltd:
http://www.sundayta.com
iDocSys for Document Management. VisibleResults for Fundraising.
Development and Hosting of Web Applications and Sites.

Ian is correct, I simply added REFERENCES to table creation. I purposely did
not add an ON DELETE clause because I want a database error if any funny
business happens.
"Ian Bicking" <ianb@...> wrote in message
news:2AF0213E-F3B5-11D7-B902-000393C2D67E@...
> On Tuesday, September 30, 2003, at 08:08 AM, Ian Sparks wrote:
> > "John Dell'Aquila" <dellaq@...> wrote in message
> > news:blb1e5$p6t$1@...
> >> I don't fully understand the framework yet but I've managed to get
> >> all the
> >> column types working. I also implemented referential integrity
> >> constraints
> >> on ForeignKey columns since I want the tables to be safe for use from
> >> the
> >> database side as well. I also want to get columnsFromSchema
> >> implemented if
> > I
> >> can figure out what's going on from the Postgres implementation.
> >
> > I'd be interested in the Referential integrity additions for porting to
> > Firebird - especially if you have dealt with the caching issue (cascade
> > deletes removing child objects from the cache).
>
> I assume John is talking about REFERENCES in the automatic table
> creation -- which would be a nice addition. Cascading deletes are...
> challenging.
>
> Right now I'd suggest just overloading destroySelf(), doing the
> cascading manually. Maybe there could also be a soft destroySelf --
> like, remind the object it's dead (but the object doesn't have to
> delete itself from the database). Either way, you could do it by
> adding an attribute to Col/ForeignKey objects to control it, and then
> handle that in destroySelf. (And, while we're at it, all the foreign
> key stuff should be moved out of Col and into ForeignKey, though that
> would require a little refactoring to keep it as general as it is)
>
> Ian
>