Update: the commented portion in italics does not apply anymore (see the reply below)

/*

SQLite is a bit more gracious than ODBC and will accept a generic null value. With ODBC, the difference is that when inserting/updating null fields, the data type has to be specified, so the above code looks like this:

ses << "INSERT INTO NullTest VALUES(?)", use(NULL_INT32), now;

The rest is the same. ODBC code will work for SQLite (which will gracefully take any of the NullData enum values), while the other way around is not true – ODBC will throw up when fed the generic one.

*/

Code is in SVN, tested on Windows with all supported drivers and on Linux with PostgreSQL. Comments and bug reports are welcome.

Things always look better after a good night's sleep and an early morning run :-).
So, I retract the above statement about ODBC and generic null values. The simple ‘use(null)’ syntax can be used with ODBC. All the supported ODBC drivers seem to be happy with char specified as the type, regardless of the real underlying field type.
Since I’ve seen ODBC drivers go overboard to adhere to the specification (or the author’s interpretation thereof), I won’t remove the type-specific null enum values and related code. Just in case.

Linux with PostgreSQL means PostgreSQL ODBC driver on Linux with unixODBC – that’s what I have fully tested so far. PostgreSQL is currently the only one that passes all the tests on Linux.

As a PostgreSQL fan, I would very much like to see a native PostgreSQL connector in Data, but the usual fuss about lack of time applies. A contribution there would be very much appreciated, so anyone willing to help, email me at(aleskx, dot(gmail, com)). I’ll set up the project framework and provide development support, you supply the code.

As for inserting multiple values, yes we do support that for pretty much all the STL containers. Here’s an excerpt from the tests:

INSERT INTO … VALUES (…)
INSERT INTO … VALUES (…)
INSERT INTO … VALUES (…)
and so on

or

INSERT INTO … VALUES (…), (…), (…)

My experience so far has been that the latter can be a great deal faster.

I suppose I’ll have to look at Data eventually ;-).

Issues like this, and support for multiple result sets from an SQL batch or stored procedure invocation, are somewhat important to me though. My experience as a Sybase developer has been that round-trip latency is a major cause of application slowness.

To me, a bulk operation is the last of these, exemplified by a Sybase BCP or PostgreSQL COPY operation: it may or may not be logged or transacted, and it has limited flexibility, but it is the fastest mechanism available.

The SQL INSERT statement is quite flexible in terms of specifying the column set, or sometimes even inserting into a view, so I think it's probably best managed separately: create an object that contains the table metadata, a special bulk connection, and then an explicit bulk insert against that connection which references the metadata object (where all attributes must be specified), with an STL iterator pair defining the source.

The bulk insert facilities seem to vary more widely than other parts of the CLI API, and in particular may not work against a standard database connection.

You are right, there is a finer classification than simply calling it all bulk. I’ll make a note of this for future development. Also, help from folks like you is very valuable, because we are trying to put everything behind a common interface and sometimes it is hard to predict what kind of constraints a new connector will impose.
More on this in the next development cycle for Data.

PLOP! (that’s the sound of me jumping into the Adriatic sea to cool myself down).

BTW, I suggest looking at the libpqxx table stream for the bulk insert, but be careful with any expectation that the operation will be transacted or can be performed on a normal connection.

It would be nice if databases that offer it (PostgreSQL and Firebird, AFAIK) could provide async alerts too. This will need careful design, since some file descriptor monitoring may be necessary, or polling. This may be another case where it's worth requiring a separate connection, as a conservative approach to avoid problems on other systems.