Per Aaron Bedra: "I just tried to use this on a 1.2 project that we have been working on for a while now and the dynamic defs are killing it. Can I change it to use the older ^{:dynamic true} syntax and release a new version? Is that the only thing that is keeping this off of 1.2? For now we need to make sure the new contrib libraries are 1.2 compatible, so people can start transitioning now."

Having reviewed the code: the result of get-connection is never closed, only the result of calling .getConnection on a PreparedStatement, and that matches how java.jdbc has always worked, so I don't believe this is a problem. Aphyr mentioned on Twitter that he believes the problem was due to his test code rather than anything inherent in java.jdbc, but we are still waiting for confirmation. Lowering priority for now.

I believe I have verified this locally. Under the old API, a connection was only added explicitly at the "top" of a call chain. Under the new API, a connection can be added on demand in many places, which makes the logic much more complex, and there are a couple of places where normal usage could indeed cause a connection leak.

I believe this is resolved in the latest snapshot (which will become Alpha 4 at some point), as I've gone through and changed how get-connection is called and ensured that with-open is used whenever a new connection is actually created.

The code needs a fair bit of cleanup now, though. I'm not happy with the result of adding that logic, so I need to spend some time refactoring...

It would be very useful to be able to pass a function argument to several c.j.j functions that would be called on the PreparedStatement before executing the query, for example to call setFetchSize(). I'm not sure what the syntax should look like for this...
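As a sketch of what such a hook could look like (the wrapper function below is hypothetical, not part of the c.j.j API):

```clojure
;; Hypothetical sketch: a caller-supplied function receives the
;; PreparedStatement before execution, e.g. to call .setFetchSize.
(defn query-with-prepare-fn
  [^java.sql.Connection conn sql prepare-fn]
  (with-open [stmt (.prepareStatement conn sql)]
    (when prepare-fn
      (prepare-fn stmt))            ; e.g. #(.setFetchSize % 100)
    (with-open [rs (.executeQuery stmt)]
      (doall (resultset-seq rs)))))
```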

While working with clojure.java.jdbc, I kept running into the "ResultSet must have unique column labels" exception, so I rewrote it to automatically rename duplicate columns. Now I can run "SELECT * FROM tbl1 JOIN tbl2 ON tbl1.id = tbl2.id" without causing any errors.

This would definitely be a nice enhancement, but so far no one with a signed CA has offered a patch.

Figuring out the appropriate renaming strategy is key: a sensible default with an optional function to let users specify how to do this? Or perhaps renaming only if users supply such a function?
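One possible default, shown here purely as an illustration (not an actual patch): suffix repeated labels with a counter.

```clojure
;; Illustrative default strategy: rename duplicate column labels by
;; appending _2, _3, ... to repeats as they are encountered.
(defn make-unique-labels [labels]
  (first
   (reduce (fn [[out seen] label]
             (let [n (get seen label 0)
                   label' (if (zero? n) label (str label "_" (inc n)))]
               [(conj out label') (assoc seen label (inc n))]))
           [[] {}]
           labels)))

;; (make-unique-labels ["id" "name" "id"]) ;=> ["id" "name" "id_2"]
```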

"JDBC driver for PostgreSQL and ms-sql does not support returning of generated keys from statements executed in batch. The SQL exception was thrown when 'do-prepared*' was called with 'return-keys' set to true. Currently only the function 'insert-values' calls a new method 'do-single-prepared-get-keys*' when single relation is inserted."

I think the correct behavior would be to call rollback on the transaction, rather than simply not calling commit. The user will expect that, at the end of the transaction block, one of commit or rollback has been called.

I was thinking about how to test this. How do you feel about a mock Connection object, where we can check if commit and/or rollback have been called as part of the test?
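A sketch of such a mock (hypothetical test scaffolding, using reify; unimplemented Connection methods would simply throw if called):

```clojure
;; Record commit/rollback calls in an atom so tests can assert on them.
(defn mock-connection [calls]
  (reify java.sql.Connection
    (commit [_] (swap! calls conj :commit))
    (rollback [_] (swap! calls conj :rollback))
    (setAutoCommit [_ _])
    (getAutoCommit [_] false)
    (close [_])))

;; In a test:
;; (let [calls (atom [])]
;;   ... run the transaction logic against (mock-connection calls) ...
;;   (is (= [:rollback] @calls)))
```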

[5:11pm] <paraseba> In clojure.java.jdbc there is a transaction function, which is supposed to rollback if the passed body throws an exception, but it only does that in case of an Exception, not any Throwable. What's the rationale behind this?
[5:11pm] <paraseba> I'd want my transaction rolled back in case of any errors
[5:12pm] <paraseba> seancorfield: maybe you can explain me this?
[5:53pm] <amalloy> paraseba: catching Errors is generally a bad practice. i'm not saying it's wrong all the time, but Errors are often unrecoverable anyway
[5:53pm] <amalloy> eg, "You ran out of heap space! I can't even allocate memory to throw an exception!"
[5:54pm] <paraseba> but, even in the worst conditions, shouldn't we try to rollback the transaction? is not better that commiting in this unexpected error condition?
[5:55pm] <paraseba> we can then rethrow the Throwable, after trying to rollback
[5:55pm] <lucian> paraseba: i don't think the db will commit if it gives you an error
[5:55pm] <lucian> and as amalloy, errors like OOM are best solved with exit()
[5:55pm] <paraseba> lucian: it will ... jdbc is catching Exception and rolling back in that case .... but it commits in a finally
[5:56pm] <paraseba> so, if you have an Error thrown, it will commit
[5:56pm] <paraseba> I guess that's more surprising than a rollback
[5:57pm] <paraseba> the logic is .... (try do-your-thing (catch Exception roll-back) (finally commit))
[5:57pm] <lucian> paraseba: well, then maybe you don't want to commit in finally?
[5:57pm] <paraseba> I don't, not if I got an Error
[5:58pm] <amalloy> lucian: i think he's upset that a library he's using is committing on error
[5:59pm] <paraseba> amalloy: I can solve it easily by wrapping my operations in a catch Throwable -> rollback -> rethrow, but I think it's not the right behavior for the library
[5:59pm] <paraseba> I would expect a commit only if the block ended without any kind of exceptions or errors. don't you agree ?
[6:01pm] <amalloy> paraseba: meh. all kinds of weird stuff can happen if an Error occurs; i wouldn't be entirely certain that an attempt to "recover" makes things worse due to some weird program state caused by the Error. i mean, my completely-unresearched opinion is that catching Throwable would be better, but you can't really rely for your program's correctness on anything that happens after an Error
[6:02pm] <paraseba> but, we are forcing a commit after an Error
[6:04pm] <paraseba> the usual logic should be .... (do do-your-thing (commit)) if do-your-thing throws anything ... no commit is done. Puting a commit in a finally enforces the commit, even after Errors
[6:06pm] <amalloy> yeah, i think i agree
[6:08pm] <paraseba> amalloy: ok, I'll report an issue, thanks

On success, commit unless rollback has been set (in which case rollback).
On Exception, rollback and rethrow as-is.
On Error (Throwable), allow it to escape.
In all cases, set rollback false and restore auto-commit setting.
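A sketch of that contract (illustrative only; the name run-transaction and the representation of the rollback flag are not the actual implementation):

```clojure
(defn run-transaction [^java.sql.Connection con rollback? body-fn]
  (let [auto (.getAutoCommit con)]
    (.setAutoCommit con false)
    (try
      (let [result (body-fn)]
        ;; success: commit unless rollback has been requested
        (if @rollback? (.rollback con) (.commit con))
        result)
      (catch Exception e
        (.rollback con)
        (throw e))              ; rethrown as-is; Errors escape uncaught
      (finally
        (reset! rollback? false)
        (.setAutoCommit con auto)))))
```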

The only two changes here should be:

Exception is no longer wrapped when thrown (this may break code that expected the wrapping?)

java.sql.SQLException: Only executeBatch and clearBatch allowed in the middle of a batch.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.newSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedStatement.checkIfInMiddleOfBatch(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
at clojure.java.jdbc.internal$do_prepared_return_keys_STAR_$fn__84.invoke(internal.clj:272)
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:185)
at clojure.java.jdbc.internal$do_prepared_return_keys_STAR_.doInvoke(internal.clj:272)
at clojure.lang.RestFn.applyTo(RestFn.java:140)
at clojure.core$apply.invoke(core.clj:542)
at clojure.java.jdbc$insert_values.doInvoke(jdbc.clj:259)
at clojure.lang.RestFn.invoke(RestFn.java:443)
at hiredman.triples$store_triple$fn_471$fn_472.invoke(triples.clj:39)
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:174)
at hiredman.triples$store_triple$fn__471.invoke(triples.clj:39)
at clojure.java.jdbc.internal$with_connection_STAR_.invoke(internal.clj:156)
at hiredman.triples$store_triple.invoke(triples.clj:39)
at hiredman.clojurebot.factoids$fn__2689.invoke(factoids.clj:139)
at clojure.lang.MultiFn.invoke(MultiFn.java:167)
at hiredman.clojurebot.factoids$factoid_command_run.invoke(factoids.clj:173)
at conduit.core$fn_284$a_arr285$a_arr_286.invoke(core.clj:233)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$a_except$a_except__307.invoke(core.clj:350)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at clojurebot.conduit$a_indirect$this_fn__500.invoke(conduit.clj:9)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.irc$reply_fn$irc_reply_fn__350.invoke(irc.clj:22)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.irc$irc_run$handle_msg__421.invoke(irc.clj:166)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$a_run.invoke(core.clj:28)
at conduit.irc$irc_run$run__424.invoke(irc.clj:175)
at conduit.irc$irc_run.doInvoke(irc.clj:179)
at clojure.lang.RestFn.applyTo(RestFn.java:140)
at clojure.core$apply.invoke(core.clj:542)
at clojurebot.core$main$fn_4654.invoke(core.clj:214)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.sql.SQLException: Only executeBatch and clearBatch allowed in the middle of a batch.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 185 more

Replacing .executeUpdate with .executeBatch in do-prepared-return-keys* seems to fix the problem, and to be what was meant in the first place: the result is bound to 'counts', and .executeUpdate returns a single count while .executeBatch returns multiple.
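For reference, the difference in the batch API (a sketch; assumes conn is an open java.sql.Connection, and the table/column names are placeholders):

```clojure
;; .executeUpdate returns a single int; .executeBatch returns an array
;; of update counts, one per .addBatch call.
(with-open [stmt (.prepareStatement conn "INSERT INTO t (x) VALUES (?)")]
  (doseq [v [1 2 3]]
    (.setObject stmt 1 v)
    (.addBatch stmt))
  (vec (.executeBatch stmt)))
```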

I now have some unit tests for MySQL, HSQLDB and Derby. insert-records fails with the exception reported here, only for Derby. I'll complete the suite of tests, try to get PostgreSQL and MS SQL Server tests running as well and then I'll be in a good position to fix things...

The way resultset-seq uses create-struct and struct doesn't seem to leverage any inherent functionality of structmaps, except perhaps some vague guarantee of the ordering of columns in rows. Replacing the create-struct / struct calls with

(into {} (map vector keys (row-values)))

maintains the order for up to 16 columns (PersistentArrayMap), so that seems a reasonable compromise.
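For illustration, small maps built this way come out as PersistentArrayMaps, which preserve insertion order:

```clojure
(into {} (map vector [:id :name :email] [1 "a" "a@example.com"]))
;;=> {:id 1, :name "a", :email "a@example.com"}

(class (into {} (map vector [:a :b] [1 2])))
;;=> clojure.lang.PersistentArrayMap
```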

I'm not sure that preserving order is something that should be documented or supported?

It seems to me that there is an issue with the naming strategy here, either in the way I define it or in the way it is implemented:
(sql/with-connection db
  (sql/with-naming-strategy {:keyword identity :entity (fn [x] (str \" x \"))}
    ...))

Now that the new API exists, the old API should be rewritten in terms of the new API, passing db and binding it appropriately. Where no new equivalent exists, extend the new API (in particular, the form of insert that just provides rows without column names is not supported in the new API).

with-connection and other functions accepting a db-spec do not support standard JDBC URIs, like this one:

jdbc:mysql://localhost:3306/<database>

In my environment, passing that URI yields an exception:

Exception in thread "main" java.lang.NullPointerException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at clojure.lang.RT.loadClassForName(RT.java:2050)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:175)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:166)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:168)
at clojure.java.jdbc$with_connection_STAR_.invoke(jdbc.clj:299)
at bungle_o.migration$create_exceptions.invoke(migration.clj:17)
at bungle_o.migration$_main.invoke(migration.clj:21)
at clojure.lang.Var.invoke(Var.java:411)
at user$eval191.invoke(NO_SOURCE_FILE:1)
at clojure.lang.Compiler.eval(Compiler.java:6511)
at clojure.lang.Compiler.eval(Compiler.java:6501)
at clojure.lang.Compiler.eval(Compiler.java:6477)
at clojure.core$eval.invoke(core.clj:2797)
at clojure.main$eval_opt.invoke(main.clj:297)
at clojure.main$initialize.invoke(main.clj:316)
at clojure.main$null_opt.invoke(main.clj:349)
at clojure.main$main.doInvoke(main.clj:427)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:419)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:532)
at clojure.main.main(main.java:37)

I'm using the Clojure library Korma to generate SQL for Postgres, which depends on clojure.java.jdbc. Recently I added a custom 'upsert' query. Using it on standard Java 1.6 is fine, but on OpenJDK it breaks due to the use of java.sql.Statement/RETURN_GENERATED_KEYS. This makes the Postgres driver add a seemingly random "RETURNING *" at the end of the query.

On standard Java 1.6, java.sql.Statement/RETURN_GENERATED_KEYS doesn't exist, so an exception is caught and everything works as expected. It sounds like the world upside down, but I'm probably missing the rationale behind this.

transaction* currently catches Exceptions for its rollback logic. This means Throwables subclassing java.lang.Error are not caught, and indeed the transaction seems to be immediately committed in this case (at least if the connection has autocommit set to true).

This happened to me in the case of an OutOfMemoryError. Another example is that the following code commits:

(transaction
  (insert-records "table" {:foo 12})
  (assert false))

Presumably this can be fixed by catching Throwable instead of Exception. I'll prepare a patch to that effect.
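A sketch of the proposed change (illustrative, not the actual patch):

```clojure
;; Catch Throwable so Errors also trigger a rollback before rethrowing.
(defmacro transaction-sketch [con & body]
  `(let [con# ~con]
     (try
       (let [result# (do ~@body)]
         (.commit con#)
         result#)
       (catch Throwable t#
         (.rollback con#)
         (throw t#)))))
```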

This is just a suggestion, and I am not sure if it is the right way to do this.

When fetching data from Oracle, the Oracle JDBC driver returns all numeric-type columns as BigDecimal and date/datetime columns as oracle.sql.TIMESTAMP. This causes a lot of headaches, as we have to remember to convert the data to the correct type.

Although this can be handled at the application level by using a :row-fn, it would be really nice if this library handled the conversion itself.

I think the right solution here is to modify the result-set-read-column protocol function to be passed rsmeta and i; then Oracle users can extend the protocol to those types and perform the conversions they want. I don't believe automatically converting BigDecimal is the right choice for all users, although doing the conversion for oracle.sql.TIMESTAMP is a more defensible case.

Agreed. I thought of this, but it requires changing the protocol to take rsmeta along with the column index. I have uploaded another patch for this change.

Also, it'll be better to leave the handling of oracle.sql.TIMESTAMP to the application, as it needs a dependency on the Oracle thin driver (which we should certainly not add to java.jdbc).

So, with this updated protocol, Oracle users can extend it with something like this (much better now):

;;; Oracle users can define this in their applications
;;; to extend it to BigDecimal and other Oracle types
(extend-protocol IResultSetReadColumn
  oracle.sql.TIMESTAMP
  (result-set-read-column [x _ _] (.timestampValue x)))

Thanks. Yes, that was where I was going with this, and while we're in alpha for 0.3.0 I'm not averse to changing the signatures of new functionality introduced in the API rewrite. I'll review the patch later and probably roll it in this week (I just got back from a web tech conference in MN and I'm busy catching up on work!).

<opinion>
This all leads me to think that get-connection should pass URIs along to DriverManager without modification, and leave URI format conventions up to the drivers involved. For now, my workaround is to do essentially that, using a map like this as input to with-connection et al.:

That certainly works, but I presume that such a workaround won't occur to many users, despite the docs/source.
</opinion>

I don't think I've used java.jdbc enough (or RDBMSs enough of late) to comfortably provide a patch (or to feel particularly confident in the suggestion above). Hopefully the report is helpful in any case.

Yup, 0.2.3-SNAPSHOT's :connection-uri works fine. I've since moved on to using a pooled datasource, but this will hopefully be a more obvious path to newcomers than having to learn about :factory and DriverManager.

Accessing a Heroku database outside Heroku, "sslfactory=org.postgresql.ssl.NonValidatingFactory" doesn't work. I get "ERROR: syntax error at or near "user" Position: 13" (class org.postgresql.util.PSQLException). This happens whether I add it to the :subname or :connection-uri strings.

Another minor issue: why does the documentation of with-connection (0.2.3) describe the following format for the connection string URI:
"subprotocol://user:password@host:port/subname
An optional prefix of jdbc: is allowed."
when the URI that can actually be parsed successfully looks like the one above: jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true
"subprotocol://user:password@host:port/subname" (the format of the DATABASE environment variables on Heroku) will not be parsed correctly. Why is the URI format used on Heroku not supported by the parser?

Carlos, the :connection-uri passes the string directly to the driver with no parsing. The exception you're seeing is coming from inside the PostgreSQL driver so you'll have to consult the documentation for the driver.

The three "URI" styles accepted by java.jdbc are:

:connection-uri - passed directly to the driver with no parsing or other logic in java.jdbc,

:uri - a pre-parsed Java URI object,

a string literal - any optional "jdbc:" prefix is ignored, then the string is parsed by logic in java.jdbc, based on the pattern shown (subprotocol://user:password@host:port/subname).

If you're using :connection-uri (which is used on its own), you're dealing with the JDBC driver directly.

If you're using :uri or a bare string literal, you're dealing with java.jdbc's parsing (implemented by Phil Hagelberg - of Heroku).
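As a sketch, the three styles might look like this in a db-spec (the connection details below are placeholders):

```clojure
;; 1. :connection-uri - handed to the driver untouched
{:connection-uri "jdbc:postgresql://localhost:5432/mydb?user=me&password=secret"}

;; 2. :uri - a pre-parsed java.net.URI object
{:uri (java.net.URI. "postgresql://me:secret@localhost:5432/mydb")}

;; 3. a bare string - parsed by java.jdbc itself ("jdbc:" prefix optional)
"postgresql://me:secret@localhost:5432/mydb"
```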

Still, it didn't work with any of the options. I previously used a Postgres JDBC driver to export to the same database (in an SQL modeller, SQLEditor for the Mac) and it worked (though it would connect some times but not others). The connection string used was like "jdbc:postgresql://host:port/database?user=xxx&password=yyy&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory". The driver name was "org.postgresql.Driver" (JDBC4). Anyway, time to give up; I will just use a local database.

Sean, the JDBC combinations were working after all. I was neglecting an insert operation into a table named with the reserved SQL keyword "user", so I was getting "ERROR: syntax error at or near "user" Position: 13", and therefore the connection was already established at that point.

I looked at the BoneCP docs and I don't see a ConnectionManager class but if you define the db-spec based on the BoneCPDataSource object, java.jdbc will call getConnection on that object to get a working connection for operations.

I had this idea when trying to adapt Korma to use with postgres tables with columns of array types. I had some success using Korma's transform functions to convert the java.sql.Array objects into a seq on the way out, but recent changes to Korma seem to indicate that the transform functions are not guaranteed to run before the connection is closed. This interferes with efforts to read the arrays, which I think can be their own result sets.

Presumably korma could be changed to support reading the arrays on the way out, but maybe being able to customize the read behavior within c.j.jdbc would be simpler.

The first idea I've had toward this end is to create a protocol with a single function that objects being read out of the resultset get passed through. It would default to the identity function, but users could e.g. extend it to java.sql.Array. I don't know if there are significant performance implications for an extra function call for each value coming out of the database.

I'm thinking this feature is no longer necessary due to the :row-fn option, since the only motivation in the first place was to have the opportunity to transform a result before the connection closes.

There would still be no easy way to do it from Korma, but that's Korma's fault. So maybe we should remove this before 0.3.0 is released?

This turns out to be a very useful hook for certain database types. I used it to solve an issue with Oracle, to enable users to automatically convert Oracle-specific datatypes to regular values, in JDBC-57. So, thank you... even if you think it isn't necessary now!

That use case could be served by :row-fn, right? You just like that it's easier? I don't mind it if others still find it valuable, I just wanted to make sure we didn't accidentally cement redundant features.

:row-fn is intended for whole-row transformations - it is lazily mapped over the result set - so it's really meant for post-processing your data in a general sense while the connection is open. The protocol-based column reader is intended to transform specific types of data, independent of any application semantics (e.g., Oracle returns an oracle.sql.TIMESTAMP which can be automatically transformed into a java.sql.Timestamp or Date or whatever).

And :result-set-fn is intended for whole-result-set transformation, just prior to returning the result set.
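A quick illustration of the three hooks (db, table, and column names are placeholders):

```clojure
(query db ["SELECT id, total FROM orders"]
       :row-fn :total                   ; per-row transform, mapped lazily
       :result-set-fn #(reduce + 0 %))  ; whole-result-set transform
;; The protocol-based column reader (IResultSetReadColumn) runs first,
;; per column value, before :row-fn sees the row.
```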

execute! would take an optional :multi? argument, defaulting to false, and execute-helper would either call db-do-prepared directly (as it does now) or apply it, assuming (rest sql-params) is a sequence of vectors, which would become multiple params-groups.

I think this would also allow insert! to be simplified and implemented in terms of execute! which would be a nice symmetry.
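Hypothetical usage of the proposed option (the exact spelling is still open):

```clojure
;; Each vector after the SQL string becomes one params-group.
(execute! db ["UPDATE t SET flag = ? WHERE id = ?"
              [true 1]
              [false 2]]
          :multi? true)
```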

Further update: this looks to be a configuration or version issue with the Tungsten Connector setup as we've been able to repro the problem in one environment but it works as expected in a newer environment.

May still go ahead with this change anyway since I believe it's a good change for robustness, but it looks lower priority right now.

The underlying behavior is a quirk of the Tungsten Connector configuration which we believe we've successfully corrected. Given the result of (.getColumnType) is -3, I'm not sure anything could be done in code to address this, so I'm going to resolve this as invalid.

In case anyone else runs into this with Tungsten: BIT fields are handled in a version-specific way and the logic assumes three segments to the version number. Tungsten's default configuration has only two segments in the version number so the BIT handling logic barfs. The workaround is to change the Tungsten configured version to match the actual MySQL version you are proxying (e.g., 5.5.30-percona instead of 5.1-tungsten).

Just to note: the naming strategy stuff, based on dynamically bound globals, is all deprecated in 0.3.0. Alternatives to create-table and drop-table, which respect the new entities function approach, will be provided in an upcoming alpha build of 0.3.0.

Replacements for create-table / drop-table will be added before beta1. I'm not sure right now how many more alphas we'll have. There's a lot of churn in the code right now and I want it to be feature complete and fairly settled before the first beta.

A ResultSet can contain pretty much any keys. As of CLJ-1252 being applied, keywords are more restricted now. It's not clear what should be done with keys in a ResultSet that cannot be converted to keywords... left as strings, perhaps?

This patch uses the when-available macro from math.numeric-tower to check for the existence of javax.naming.InitialContext before using it. Since the javax.naming.* namespace is not available on Android, its current use prevents this library from running on that platform.

Hi, I am using this nice package with a large (government) Teradata machine. I found that I cannot insert nil values using insert-rows: the Teradata JDBC driver tells me to use .setNull or the form of .setObject that includes an SQL type code.

The following replacement definition of set-parameters was the only change needed to get this to work.
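The patch itself isn't reproduced here; a minimal sketch of the approach (hypothetical reconstruction, using ParameterMetaData to obtain SQL type codes and .setNull for nils) would be:

```clojure
(defn set-parameters
  [^java.sql.PreparedStatement stmt params]
  (let [pmeta (.getParameterMetaData stmt)]
    (doseq [[ix value] (map-indexed vector params)]
      (let [i (inc ix)]                      ; JDBC parameters are 1-based
        (if (nil? value)
          (.setNull stmt i (.getParameterType pmeta i))
          (.setObject stmt i value))))))
```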

For some queries and driver implementations, the data that would be returned by a ParameterMetaData object may not be available until the PreparedStatement has been executed.

Anyway it does work for Teradata. Maybe it could be an option?

It just occurred to me that it is probably inefficient to retrieve the parameter metadata on every call to set-parameters. It could be done in insert-rows instead. On the other hand, I didn't actually notice any performance hit when inserting 40,000 rows.

This was a tricky one. Both .getParameterMetaData and .getParameterType can throw exceptions. On MS SQL Server, if there is a syntax error in the SQL, the former throws an exception. On MySQL, the latter throws an exception (every time, as far as I can tell, even though .getParameterMetaData succeeds). That introduces quite a performance penalty for MySQL (about 10% in my tests).

However, if I treated MySQL as a special case and swallowed all exceptions and had a fallback to the original .setObject call, I could get everything to pass the test suite without a noticeable performance overhead (still measuring the latest MySQL performance as I write this).

Although having to make these calls will probably slow everyone down a little, it should be more robust, and may even make the .setObject calls faster in some situations?

Attached a patch that adds an ISQLParameter protocol that handles the .setObject call but can be overridden for custom types.

The most awkward part of this is that I had to implement the protocol for Object and nil (in order to cover everything), but the Teradata case would require re-implementing it for nil. As far as I can tell this would work fine, but overriding a protocol implementation feels sketchy.
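For reference, the rough shape of the protocol approach being discussed (a sketch, not the attached patch itself; the blanket Object/nil implementations are the part the Teradata case would need to re-extend):

```clojure
(defprotocol ISQLParameter
  (set-parameter [val stmt ix]
    "Set the ix'th parameter of the given statement to val."))

;; Default behavior: everything (including nil) goes through .setObject.
(extend-protocol ISQLParameter
  Object
  (set-parameter [v ^java.sql.PreparedStatement s ^long i]
    (.setObject s i v))
  nil
  (set-parameter [_ ^java.sql.PreparedStatement s ^long i]
    (.setObject s i nil)))

;; The Teradata case would then re-extend nil, e.g.:
;; (extend-protocol ISQLParameter
;;   nil
;;   (set-parameter [_ s i]
;;     (.setNull s i (.getParameterType (.getParameterMetaData s) i))))
```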

I suppose an alternative would be to not provide any impls, and then check (satisfies? ISQLParameter x) and do the default behavior if not. Not sure if that would perform worse though.

I feel like the main downside to my patch's approach is that it precludes interacting with two different kinds of databases in the same process – if I want to connect to postgres in one function and have it write nulls one way, while writing nulls a different way to teradata, this isn't really possible.

The only other idea I have is passing a "setter" function as an option, which would do basically the same thing the protocol does in my patch.

Since this really is a per-db issue - and restricted to a handful of DBs (or fewer) - my sense is that we should do something with the db-spec itself to offer a way to override how set-parameters works. Having looked at the code, here's my proposal:

Wherever the code calls set-parameters in the context of a db-spec, it will check for a :set-parameters key in the db-spec and use that if present, else default to the normal set-parameters call.

This would allow you to add :set-parameters to your db-spec with a function that mimicked the built-in set-parameters except handled null values in a different way.
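As a sketch of how that would look from the user's side (the :set-parameters key is the proposal above; the connection details and the null handling shown are illustrative only):

```clojure
(def teradata-db
  {:subprotocol "teradata"
   :subname     "//my-host" ; hypothetical connection details
   :set-parameters
   (fn [^java.sql.PreparedStatement stmt params]
     (doseq [[ix v] (map-indexed (fn [i v] [(inc i) v]) params)]
       (if (nil? v)
         ;; .setNull with an explicit type code, as Teradata requires:
         (.setNull stmt ix (.getParameterType (.getParameterMetaData stmt) ix))
         (.setObject stmt ix v))))})
```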

I'm going to make this change for 0.3.0-beta1 (coming real soon) and see how it works for you...

If I upgrade to any version past 0.3.0-alpha4, I get this exception when attempting to run my ring server:

Exception in thread "main" java.lang.IllegalArgumentException: No single method: result_set_read_column of interface: clojure.java.jdbc.IResultSetReadColumn found for function: result-set-read-column of protocol: IResultSetReadColumn, compiling:(clojure/java/jdbc.clj:275:49)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6567)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3624)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6562)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.access$100(Compiler.java:37)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:5973)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:6009)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.access$100(Compiler.java:37)
at clojure.lang.Compiler$DefExpr$Parser.parse(Compiler.java:529)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler.eval(Compiler.java:6623)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:436)
at eq_server.db$eval2811$loading_4910auto___2812.invoke(db.clj:1)
at eq_server.db$eval2811.invoke(db.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at eq_server.models.peek$eval2164$loading_4910auto___2165.invoke(peek.clj:1)
at eq_server.models.peek$eval2164.invoke(peek.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:551)
at eq_server.controllers.peeks$eval2133$loading_4910auto___2134.invoke(peeks.clj:1)
at eq_server.controllers.peeks$eval2133.invoke(peeks.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5417)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:482)
at eq_server.routes$eval2107$loading_4910auto___2108.invoke(routes.clj:1)
at eq_server.routes$eval2107.invoke(routes.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at user$eval5.invoke(form-init8113116856213476175.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.Compiler.loadFile(Compiler.java:7020)
at clojure.main$load_script.invoke(main.clj:294)
at clojure.main$init_opt.invoke(main.clj:299)
at clojure.main$initialize.invoke(main.clj:327)
at clojure.main$null_opt.invoke(main.clj:362)
at clojure.main$main.doInvoke(main.clj:440)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:419)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:532)
at clojure.main.main(main.java:37)
Caused by: java.lang.IllegalArgumentException: No single method: result_set_read_column of interface: clojure.java.jdbc.IResultSetReadColumn found for function: result-set-read-column of protocol: IResultSetReadColumn
at clojure.lang.Compiler$InvokeExpr.<init>(Compiler.java:3423)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3630)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6562)
... 169 more

Currently, it is very easy to get spurious connection use across multiple statements, for instance by mapping a query function over a collection.
This can be worked around by providing a binding through db-transaction.
However, that complects single-connection use with commit/rollback mechanics that might be unwanted.

Another option is providing the binding yourself by wrapping it in a with-open/let construction.
This requires people to dig into the implementation of the connection system, and might not play well with it.

Using connection pools is the last option, but usually is overkill for simple tasks.

Creating a macro and function that provides a simple binding for a connection in the way db-transaction does, without the included commit/rollback mechanics would do the trick. Such a function would play nice with the rest of the new connection propagation system (even including levels), and still doesn't rely on the rebinding of global vars.
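A sketch of what such a macro might look like, reusing the connection functions the library already has (the name and argument shape are assumptions):

```clojure
(defmacro with-db-connection
  "Evaluates body with db-sym bound to a db-spec carrying a single open
  connection. No commit/rollback machinery is involved."
  [[db-sym db-spec] & body]
  `(let [spec# ~db-spec]
     (with-open [con# (get-connection spec#)]
       (let [~db-sym (add-connection spec# con#)]
         ~@body))))
```

Usage would mirror db-transaction, e.g. (with-db-connection [db db-spec] (doall (map #(query db ["SELECT ..." %]) ids))), keeping one connection open across all the queries without transaction semantics.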

In clojure.java.jdbc 0.2.3 it was easy to call methods on the connection, like .getDatabaseMetaData.

Now connection management has changed, and finding, reaching and keeping connections (and accompanying DatabaseMetaData) is a lot harder.

It would be good to have two new functions that would give access to all database metadata in a minimalistic way, and keep connection management in line with the new clojure.java.jdbc 0.3.0 ideals.

The first is a function that gets / sets up a connection by the new connection rules (find-, get-, add-connection), and provides a local binding for the DatabaseMetaData object resulting from calling .getDatabaseMetaData on that connection.
Of course, like a transaction, this also means the actions done within that binding need a single connection that's kept open for the duration of the binding block.

The second is a function (or an enhancement of result-set-seq) that calls result-set-seq if the argument is (instance? java.sql.ResultSet), or else returns the argument itself; this is because metadata method calls can give back either a result set that needs to be clojurized (e.g. .getTables) or a plain object (e.g. .getDriverName). Let's call it meta-result as an example.

As far as I can see that's a pretty minimalistic set of functions that unlocks all DatabaseMetaData possibilities. Also, because it integrates with the new connection rules, it would integrate nicely with transactions, in case of queries that are dynamically generated on metadata within the same transaction.

Nice suggestion. Thanx. with-db-metadata is probably more in line with the current naming and perhaps metadata-result to tie back to that name, rather than cause any confusion with Clojure's built-in meta?

This patch fixes one remaining issue related to using this library in Android apps. While compiling for Android is now possible, an error is thrown during release builds. This occurs during the "dexing" phase, when JVM bytecode is converted to Dalvik bytecode.

It fails when it gets to the class file generated by the IResultSetReadColumn protocol definition. This is because it makes duplicate use of the "_" symbol to represent unused arguments. This patch simply renames the symbols so they are unique.

After some experimentation, I've discovered that certain JDBC drivers return the equivalent of (Boolean. true) and (Boolean. false) which are truthy/falsey but sometimes produce surprising results since they are not actually true and false. Extending the result set reading protocol to Boolean to turn them into canonical values via if seems to do the trick.

A simple (if v true false) is not sufficient for this - it needs to be (if (= true v) true false) based on tests I've conducted. The problem is that the constructed (Boolean. false) value behaves as truthy.
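Concretely, the extension being described is roughly (a sketch; the method signature follows the IResultSetReadColumn protocol as used elsewhere in the library):

```clojure
(extend-protocol IResultSetReadColumn
  Boolean
  (result-set-read-column [v _rsmeta _ix]
    ;; (if v true false) is NOT enough: a constructed (Boolean. false)
    ;; is truthy, so compare against true explicitly.
    (if (= true v) true false)))
```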

This is due to executeUpdate. Need to reverify the issues around using executeBatch with no params (since that looks like what db-do-commands does - which works - but executeUpdate was used as a workaround for an issue with no params at some point).

is now legal but will remain undocumented. Since optional positional arguments are expected, any attempt to call query with SQL and params not in a vector will throw an IllegalArgumentException anyway. This seems like a reasonable compromise.

Giving metadata-result a result-set-fn parameter defaulting to either identity or doall would work, I guess.

(defn metadata-result
  "If the argument is a java.sql.ResultSet, turn it into a result-set-seq,
  else return it as-is. This makes working with metadata easier.
  Also accepts :identifiers and :as-arrays? to control how the ResultSet
  is transformed and returned. See query for more details."
  [rs-or-value & {:keys [identifiers as-arrays? result-set-fn]
                  :or {identifiers str/lower-case result-set-fn doall}}]
  (if (instance? java.sql.ResultSet rs-or-value)
    (result-set-fn (result-set-seq rs-or-value
                                   :identifiers identifiers
                                   :as-arrays? as-arrays?))
    rs-or-value))

Yes, I think that is probably the right solution. I'll review it in the context of query etc and make sure it would create a consistent API. It might be worthwhile having :row-fn as well but I need to compare the functions and think carefully about the impact of those changes.

Currently, the Oracle subprotocols and classnames are missing. The Oracle JDBC driver supports two subprotocols, "oracle:thin" and "oracle:oci" (for thin-client and Oracle-client support). Both need to be mapped to the class "oracle.jdbc.OracleDriver".

It is currently rather difficult to set options such as stmt.setTimeout(), as the creation of the PreparedStatement is encapsulated inside db-query-with-resultset. Some sort of option or hook for this would be cool.

You can pass a PreparedStatement instead of a SQL string - so you can build your PreparedStatement - using prepare-statement or manually - and then pass it into query as the first element of the vector that would normally contain the SQL string plus parameters.
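For example (a sketch, assuming the clojure.java.jdbc functions are referred in; the SQL and statement options are illustrative):

```clojure
(with-open [con (get-connection db-spec)]
  (let [stmt (prepare-statement con "SELECT * FROM events WHERE type = ?")]
    ;; set whatever statement options you need before running it:
    (.setQueryTimeout stmt 30)
    ;; then pass the statement where the SQL string would normally go:
    (query db-spec [stmt "login"])))
```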

Ah, db-do-prepared-return-keys only supports one parameter group, so I can see why it won't work for you here.

So this is to address the one specific case where you want to create a single prepared statement and run it for multiple parameter groups... If that really isn't possible with the current API, a generic way to do so should be added. You can easily create a prepared statement that will return generated keys, using prepare-statement and query will accept a prepared statement instead of SQL, so this should just be a matter of ensuring execute! and/or db-do-commands can accept a prepared statement instead of SQL. That would be a more generic solution.

With the 0.3.4 release allowing execute! to accept a prepared statement and the reasonably clean way to execute a prepared statement and then get the generated keys, I don't think a change to the API is necessary.

Finally taking a look at these - sorry for the delay - and it seems that java.jdbc expects a string argument to be something you can simply construct a URI out of and then pass to get-connection so this suggests that either java.net.URI might be monkeying with the value or parse-properties-uri can't figure out how to reconstruct the appropriate spec map from such a URI.

Currently it is only possible to set the transaction isolation (db-transaction*). In our case we use SERIALIZABLE isolation, so it makes sense to mark read-only transactions using Connection/setReadOnly.

Thanks Gerrit. That seems nice and simple. I can't take the patch exactly as-is, because you do not appear to be a formal contributor yet - http://clojure.org/contributing - but this definitely points me in the right direction to create tests and implement something that should provide the functionality you're after.

The JDBC spec allows vendors to add their own specific constants for result set holdability, concurrency and cursor types. With clojure.java.jdbc, these are abstracted away to give easy access through keywords that are mapped to specific constants. With only an option for keywords, other constants cannot be used.

If result-set-type, result-set-concurrency and result-set-holdability were changed from map lookups to functions with case statements containing the keyword/constant pairs, with a fallback that returns the argument itself, we could use both the current behavior {:cursor :read-only} as well as vendor-specific parameters {:cursor SQLServerResultSet/TYPE_SS_SERVER_CURSOR_FORWARD_ONLY}.
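A sketch of the proposed change for one of the three lookups (standard java.sql.ResultSet constants; anything that is not a known keyword falls through untouched):

```clojure
(defn result-set-type
  "Map a cursor keyword to its JDBC constant; pass anything else
  (e.g. a vendor-specific int constant) through unchanged."
  [t]
  (case t
    :forward-only       java.sql.ResultSet/TYPE_FORWARD_ONLY
    :scroll-insensitive java.sql.ResultSet/TYPE_SCROLL_INSENSITIVE
    :scroll-sensitive   java.sql.ResultSet/TYPE_SCROLL_SENSITIVE
    t))
```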

Given that both query and execute! support passing a PreparedStatement in place of the SQL string, this is already possible. You can create a PreparedStatement with the prepare-statement function and then set a timeout or whatever you want, then pass it into query or execute!.

That said, adding support for it directly on prepare-statement seems like a reasonable enhancement since that's where all the other fancy PreparedStatement stuff is done.

result-set-seq ensures that column name keys are unique by suffixing identical column names with an incrementing integer. This is important and required when returning maps, as key collisions are possible (result sets can easily contain multiple columns with the same label, in different positions) and would yield surprising results.

However, when result-set-seq is called with :as-arrays? true, rows are returned as vectors with the columns ordered as in the SQL query. The first row returned is a vector of column names. In the current implementation, column name keys are still made unique.

I find this behaviour unhelpful as it introduces an unnecessary difference between column names as returned by the query and what they are called in the column name row.

I propose the following change:

When as-arrays? is true, don't make column names unique. This is obviously a breaking change if you rely on the uniquified column names (unlikely?). If this is considered too big a risk, I propose we introduce an option, uniquify-column-names?, defaulting to true, to toggle this behaviour.

I think the approach I'm going to take here is to allow :as-arrays? to be either true, false, or :cols-as-is. So :as-arrays? true will continue to work as it does now but if you specify :as-arrays? :cols-as-is it will work like :as-arrays? true but also omit the step that makes column names unique.

Our database stores date- and time-related information in UTC. But the results from with-query-results are created as objects in the local timezone. This is causing a lot of unexpected behavior when constructing Joda DateTime objects in clj-time. Currently, the with-query-results function uses the single-argument getter functions on the ResultSet.

Can you provide a self-contained test case? Lots of people are using java.jdbc in production without running into this problem, and at World Singles we've had this in production for a long time in a high traffic environment without seeing any problems with timezones. Dates go in and out of the database unchanged, which is exactly as expected - and we have databases running in three different timezones.

In my opinion, the behavior you're seeing is not a bug (in java.jdbc) but an artifact of your environment being set up incorrectly. I'll leave this ticket open for a little while for more discussion but without a concrete patch that is shown to not affect other use cases, I will close this ticket by the end of August.

As mentioned earlier, the database server timezone is UTC and the JDBC library runs on a client machine set to local time. If the database has a date value of 2012-7-23, it changes to 2012-7-22 on the client side (if the client is running in a timezone ahead of UTC) when coerced using to-date-time in clj-time. This is extremely dangerous and is not specific to any database. I noticed it first on Sybase Enterprise Server and I have now also replicated it in the test case with PostgreSQL.

I have attached a patch which fixes this problem by passing in an optional parameter. The test case is also modified to use clj-time, as it expresses the problem more clearly. Please let me know if you need any more clarification.

p.s. I have already mailed the CA and should reach Durham in a few days.

Since the proposed patch requires the user to pass in a list of columns to treat specially, reorders columns in results, and adds some performance overhead for all users (not just those wanting to adjust column values), I believe users who choose to run JDBC clients in a timezone different from the database's, against the widely-listed best-practice recommendations, should bear the burden of adjusting the columns themselves in their own client code. Since they already know which columns to adjust, and they know the difference between UTC and local time, they should be able to make the adjustments easily enough as part of post-processing the resultset-seq.

I have just encountered this issue and I cannot believe you don't consider this a bug. Coercing all DATE fields to the local timezone is literally wrong. You could justify the DB timezone, or a Calendar object, or JSR-310, but this is just a big ball of surprising behavior.

clojure.java.jdbc does absolutely nothing with date or timestamp columns: it hands back exactly what it gets from the underlying Java JDBC library, so this is absolutely NOT a bug in clojure.java.jdbc (there are no references to date or time in the code base). If you think it's a bug, petition the underlying JDBC libraries and see what they say...

I dug into this a bit more and I see that it is basically a problem with JDBC: java.sql.Date extends java.util.Date, using the system default time zone to fill in the missing information. This is especially problematic from the point of view of clojure because they get printed as `#inst` so it appears we are working with regular Date objects. The fix is just to extend the `IResultSetReadColumn` protocol that you have so helpfully provided, for example to use JSR-310 dates:
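For instance (a sketch, assuming Java 8's java.time is available; the conversions use the standard java.sql.Date/.toLocalDate and Timestamp/.toLocalDateTime methods):

```clojure
(extend-protocol IResultSetReadColumn
  java.sql.Date
  (result-set-read-column [v _rsmeta _ix]
    ;; LocalDate carries no timezone, so no silent local-zone shift:
    (.toLocalDate v))
  java.sql.Timestamp
  (result-set-read-column [v _rsmeta _ix]
    (.toLocalDateTime v)))
```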

Unfortunately, since there are currently several competing Java Date solutions (Joda, JSR-310, Date/GregorianCalendar) there isn't a single default that will work for everyone. But I would suggest that you put something in the readme that anyone who intends to work with dates should extend the IResultSetReadColumn protocol to convert to their preferred representation.

jdbc/query uses doall to force results when :as-arrays? is false and uses vec when :as-arrays? is true. If you process the result as a seq after that, seqs on vectors are chunked, while the seq from the doall is not, so you can get very different behaviour.

:as-arrays? (rows as vectors) seems like it should be orthogonal to changing chunking.

The issue, I guess, isn't the chunking; it is changing the return type of the collection based on :as-arrays?. One of the differences between the two return types is the chunkedness of the seq you get by calling seq on each.

Then I guess I'm not sure what is actionable here. Users specifically requested the ability to return results as vectors instead of maps and so :as-arrays? does exactly what it says on the tin – and the chunkedness is just an artifact of requesting a different result type. Closing this out but curious as to what, if anything, would be appropriate in your mind here...?

The library helpfully allows nested transactions by maintaining an internal depth counter, but silently disregards any transaction options for nested transactions. This has actually bitten me recently in production; code that required a serializable transaction in order to maintain an invariant was inadvertently being executed within a repeatable-read transaction.

This patch changes the behavior to raise if a nested transaction requests a different isolation level than the one the actual transaction is running in.

A similar problem exists for nested transactions that request read-only mode. I will be happy to modify the patch to cover that case if you are interested in accepting this.

java.jdbc currently has no support for creating and deleting indices. I wrote create-index-ddl for my own project to handle this; I could work that into a patch if that would be helpful. I know you're re-working the global DB connection right now, so if you'd like a patch, but would like me to wait until the code has settled down, that's fine, too.

Thanx Matt. Definitely interested. DDL is an area the library is weak in and I'm thinking of adding clojure.java.jdbc.ddl as a namespace dedicated to DDL-generating functions, if you'd like to kick that off?
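As a possible starting point for such a namespace (a sketch; the actual create-index-ddl mentioned above is not shown in this thread, so the name, signature and options here are assumptions):

```clojure
(require '[clojure.string :as str])

(defn create-index
  "Generate DDL for creating an index on the given table/columns.
  Pass :unique? true for a UNIQUE index."
  [index-name table-name columns & {:keys [unique?]}]
  (format "CREATE %sINDEX %s ON %s (%s)"
          (if unique? "UNIQUE " "")
          (name index-name)
          (name table-name)
          (str/join ", " (map name columns))))

;; (create-index :idx_person_name :person [:last_name :first_name])
;; => "CREATE INDEX idx_person_name ON person (last_name, first_name)"
```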

I've implemented a simple stropping feature that also provides a way to supply qualified identifiers, and a function (as-identifier) that can be reused in user code. The implementation can be found there:

For production usage, I've been relying on c3p0 for connection pooling which works very well with c.j.j so I'm not sure that we need something specific within the library itself. What we do need, of course, is documentation on how to use something like c3p0 for connection pooling with c.j.j!
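The c3p0 pattern referred to above is roughly the following: wrap a ComboPooledDataSource and use the resulting {:datasource ...} map as the db-spec (connection details are illustrative; the pool tuning values are just examples):

```clojure
(import 'com.mchange.v2.c3p0.ComboPooledDataSource)

(defn pooled-spec
  "Build a pooled db-spec from a plain driver-based db-spec."
  [{:keys [classname subprotocol subname user password]}]
  {:datasource (doto (ComboPooledDataSource.)
                 (.setDriverClass classname)
                 (.setJdbcUrl (str "jdbc:" subprotocol ":" subname))
                 (.setUser user)
                 (.setPassword password)
                 ;; expire excess connections after 30 minutes:
                 (.setMaxIdleTimeExcessConnections (* 30 60))
                 ;; expire any connection after 3 hours:
                 (.setMaxIdleTime (* 3 60 60)))})
```

The returned map can then be passed anywhere a db-spec is expected, and the pool handles opening/closing of physical connections.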

Stuart Sierra: In general, c.c.sql is inconsistent about using tuples of column values versus maps of column=>value pairs. Both are useful, and I'd like to see versions of all the core functions that support both modes.

Having reviewed this after using the library for quite a while, I don't really see any inconsistencies except, perhaps in the naming of update-values (and update-or-insert-values).

insert-values is the base function, taking a table name, a vector of column names and any number of vectors of column values. Generates one insert statement (containing multiple rows). If only one row is inserted, returns a map of generated keys if the database supports that.

insert-rows takes a table name and any number of vectors of column values (for all columns); uses insert-values.

insert-records takes a table name and any number of maps (containing column/value pairs); uses insert-values to insert one record at a time (and thus may return a sequence of maps of generated keys).

insert-record takes a table name and a single map; uses insert-records and returns a map of generated key if the database supports that.
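By way of illustration, the calling conventions described above (0.2.x-era API; table and data are hypothetical):

```clojure
;; insert-values: column names + any number of value vectors
(insert-values :person [:first_name :last_name]
               ["Jane" "Doe"]
               ["John" "Smith"])

;; insert-rows: values for ALL columns, in table order
(insert-rows :person
             [1 "Jane" "Doe"]
             [2 "John" "Smith"])

;; insert-records / insert-record: maps of column -> value
(insert-records :person {:first_name "Jane"} {:first_name "John"})
(insert-record  :person {:first_name "Jane" :last_name "Doe"})
```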

delete-rows takes a table name and a where clause. Perhaps delete-where might be a better name?

update-values takes a table name, a where clause and a map (of column/value pairs to update). Arguably this should be renamed since it doesn't take the same argument pattern as insert-values, but whilst update-records might sound more appropriate it only takes one "record" and update-record is clearly not right since it can update more than one row!

update-or-insert-values (taking the same arguments as update-values) can however lead to inserting a single record but it can also update multiple records instead.

On both of the latter functions, an analogy could be drawn with the findAndModify command in MongoDB which has an upsert option (to insert the document if no matches are found for the update), but most notably, findAndModify only updates a single record. I don't believe SQL has any portable equivalent but perhaps find-and-modify is a better name for update-values? Or perhaps, in the spirit of my suggestion for delete-where, we might call it update-where - and use an optional keyword argument to indicate that an insert-record should be performed when no rows match?

The only other comment I could make on update-values is that it might be nice to have a variant that took: table where-params column-names value-group - somewhat inline with insert-values (although not allowing any number of vectors of column values).

This will be incorporated into the new API I'm adding in JDBC-20 so you can say (where {:name nil :age 42}) and it will return the correct SQL where clause. Accordingly, I'm closing this as a duplicate.

This has sort of been handled along the way with functions used in the test suite to reflect how various databases handle key generation / return. Surprisingly few seem to reliably return keys on an insert.

An optional keyword argument sounds like a reasonable compromise. I'm already looking at keyword arguments on resultset-seq for returning arrays instead of maps, and returning an order-preserving map should be as simple as replacing (zipmap keys (row-values)) with (apply array-map (interleave keys (row-values))).
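The difference in a nutshell: array-map preserves insertion order, while hash-based maps make no ordering promise (column names and row values here are illustrative):

```clojure
(def cols [:id :name :email])
(def row  [1 "Jane" "jane@example.com"])

;; zipmap makes no ordering guarantee once maps grow beyond the
;; small-map threshold:
(zipmap cols row)

;; array-map keeps keys in the order given, i.e. column order:
(apply array-map (interleave cols row))
;; => {:id 1, :name "Jane", :email "jane@example.com"}
```

The trade-off is that array-map lookups are linear, so this is only appropriate for rows with a modest number of columns.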

To be able to enforce foreign key constraints in SQLite you need to execute the following statement:

PRAGMA foreign_keys = ON;

However, this statement can't be used from within a transaction. From the SQLite docs:

It is not possible to enable or disable foreign key constraints in the middle of a multi-statement transaction (when SQLite is not in autocommit mode). Attempting to do so does not return an error; it simply has no effect.

In clojure.java.jdbc both do-commands and do-prepared unconditionally run the supplied statements in a transaction. To me it seems like these two functions are the only ones that provide a way to execute an arbitrary SQL statement.
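A minimal interop sketch of issuing the pragma on a raw connection, outside any transaction (the helper name is made up for this example):

```clojure
;; Hypothetical helper: run PRAGMA foreign_keys = ON on a plain Statement
;; while the connection is still in auto-commit mode, i.e. NOT inside a
;; transaction, where SQLite would silently ignore it.
(defn enable-foreign-keys!
  [^java.sql.Connection conn]
  (with-open [stmt (.createStatement conn)]
    (.execute stmt "PRAGMA foreign_keys = ON;")))
```

The caller would need to do this on the connection before handing it to code that runs statements transactionally.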

It seems like the more flexible thing to do would be to allow users to specify that a transaction not be used on the existing operations. I'm reworking the API substantially soon so I'll bear that in mind. I might not make it possible with the existing API but I'll make sure that's an option in the new API that I'm adding.

See JDBC-14 since I'm declining this for the same reason. Performing a global update is an unusual operation and I think the API should prevent you from accidentally updating all rows, just as it should prevent you from deleting all rows. Requiring a where clause does that.

1) PreparedStatement.addBatch does nothing when there are no parameters, so executeBatch does nothing either.
2) Transaction support appears to require closing the ResultSet object from generatedKeys.

I've only tested the included patch with the default test dbs and sqlite3.

This will break things - see JDBC-16 which required that .addBatch be called when there are no param-groups. I'm not going to reject this out of hand but patches for new database support need testing against MySQL, PostgreSQL and MS SQL Server.

Test suite runs correctly for TEST_DBS=mysql,postgres,derby,hsqldb,sqlite3 and TEST_DBS=mysql-str,postgres-str. This just leaves MS SQL Server, which I do not have the ability to run.

Regarding JDBC-16: when I revert to just using .executeBatch for the no-param-groups case, I can see the errors produced in the test suite. Using .executeUpdate for the no-param-groups case continues to fix these errors (note: MS SQL Server untested). I do not see any other information in the ticket that would explain why using .executeUpdate instead of .addBatch/.executeBatch for the no-param-groups case would break JDBC-16. Is there a reason I am missing?

For a prepared statement batch, it is not possible to know the number of rows affected in the database by each individual statement in the batch. Therefore, all array elements have a value of -2. According to the JDBC 2.0 specification, a value of -2 indicates that the operation was successful but the number of rows affected is unknown.

As documented, update-values returns (-2), which means the (zero? ...) check in update-or-insert-values will never return true.

Oracle does provide the number of rows updated via the getUpdateCount method on instances of Statement.

The following change fixes this problem for Oracle users, but I am not sure how it will affect other drivers.

Interesting. (.getUpdateCount stmt) is a standard part of the JDBC API, so it seems reasonable: if the result of (.executeBatch stmt) is a single-element sequence containing -2, the code could call (.getUpdateCount stmt) and return that (single) value as a sequence. That should be completely portable, but it will be hard to test since none of the other DBs return -2 as far as I know, and I don't have an Oracle install to test against.
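A pure sketch of the fallback described above (names are made up; get-update-count is passed as a plain function here so the logic can be exercised without a live Statement):

```clojure
;; Hypothetical sketch: if executeBatch reports a single -2
;; (Statement/SUCCESS_NO_INFO), substitute the statement's update count;
;; otherwise return the batch counts unchanged.
(defn batch-result
  [counts get-update-count]
  (if (= (seq counts) (seq [-2]))
    [(get-update-count)]
    counts))

;; (batch-result [-2] (constantly 3))  => [3]
;; (batch-result [1 1] (constantly 3)) => [1 1]
```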

Rationale: obviously we want eagerness already, so the only difference here is what data structure we use. A vector has the advantage of being chunked when processed as a seq, which is probably the normal usage?

Most code out there is going to treat the result as a seq so I'd be concerned about potential overhead in changing from essentially `(seq (doall (map ...)))` to `(seq (vec (map ...)))`. Can you persuade me that's not a real issue?

It occurred to me that maybe a more efficient version of this would be the ability to (maybe not by default) construct a chunked-seq directly (in the lower-level manner) in the result-set-seq function. Would this address your objections?

I think the only reason to avoid that by default is the danger (is it really a problem?) of reading more records than the user needs. But it might be that the benefits of chunked seqs outweigh that risk, and just providing an option to unchunk would be sufficient.

And it also occurred to me that if someone actually wants a vector result, they can just specify `:result-set-fn vec` directly. So it's easy to get the behavior you want without changing the behavior for all that existing code out there (I know, not much code relies on the new API yet, but I have it in production at World Singles so I don't want it to silently change under me at this point - we'd have to do a lot of testing to ensure it didn't adversely affect performance and/or stability).

I don't want a vector in particular, I want a chunked seq so that subsequent processing is faster. If we go with just the chunked-seq approach (in result-set-seq) I think there's a very small chance of any noticeable behavior change.

My hunch is that most uses of java.jdbc would see some speedup as a result of chunking.

... made sense before the newish as-arrays? feature. The results were always maps, and with maps it is idiomatic to use keywords as keys.

But when returning the results as nested lists, the first entry contains the field identifiers; these might very reasonably be something other than keywords.

I suggest that when the user passes in their own function to translate fields to keys, they be able to make the keys whatever they like, perhaps plain strings or perhaps something else.

As a workaround, one can let this code keywordize the identifiers and then un-keywordize them outside afterward.

The optional 'identifiers' kwarg only transforms the string column name to another string; the row keys are always keywordized. In some result-parsing cases, where a different transformation may be required, the default keywordization is unnecessary.

A possible solution is to expand the responsibility of 'identifiers' to apply the keywordization (the default behavior) only when required. This solution is attached as a patch to this issue and passes all tests.

The problem with this approach is that anyone using the `identifiers` approach - overriding the default behavior - will now have broken code. I'll give it more thought. I think a :keyword? true|false argument, passed thru everything is the only way to maintain backward compatibility but that's a lot of churn so breaking backward compatibility may be the less painful approach (esp. since 0.3.0 is a whole new API anyway).

java.jdbc has always converted column identifiers to keywords and having keys in maps be keywords is idiomatic in Clojure. If some folks want non-keywords, they can post-process the results.

Adding another option to turn keywords on and off would make the API more complex, and breaking compatibility by suddenly not turning the column names into keywords (forcing a lot of existing code to change) is not acceptable at this point.

OS X 10.8.3
Java 1.6.0_45
Clojure 1.5
c3p0 0.9.1.2
java.jdbc 0.2.1 (the problem also seems to be there in HEAD at the time of logging this)

Description

I use an agent to offload email processing and update a db on completion. If I use a c3p0 connection pool, I get an exception and a complaint that the connection isn't open (see stacktrace). The code works if I wrap the agent fn in a clojure.java.jdbc/with-connection and get a connection from the pool in the agent thread.

In jdbc.clj, find-connection (db-find-connection in HEAD) checks for an existing connection and tries to use it without checking whether it's open. This causes an exception with a c3p0 connection that was acquired in another thread.

2013-05-16 10:32:22,208 [clojure-agent-send-off-pool-7] DEBUG com.mchange.v2.sql.SqlUtils - Converting Throwable to SQLException...
java.lang.NullPointerException
at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:186)
at clojure.java.jdbc$prepare_statement.doInvoke(jdbc.clj:450)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:621)
at clojure.java.jdbc$with_query_results_STAR_.invoke(jdbc.clj:646)
at clj_record.core$find_by_sql$func__3362__auto____3378.invoke(core.clj:72)
at clj_record.core$find_by_sql.invoke(core.clj:71)
at clj_record.core$find_records.invoke(core.clj:85)
at clj_record.core$find_record.invoke(core.clj:91)
at neataudio.domain.interview_participant_relationship$find_record.invoke(interview_participant_relationship.clj:18)
at neataudio.domain.interview_participant_relationship$update_email_status.invoke(interview_participant_relationship.clj:37)
at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115$fn__5116.invoke(participant_controller.clj:104)
at clojure.java.jdbc$transaction_STAR_$fn__3162.invoke(jdbc.clj:372)
at clojure.java.jdbc$transaction_STAR_.invoke(jdbc.clj:371)
at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115.invoke(participant_controller.clj:101)
at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114.invoke(participant_controller.clj:101)
at neataudio.controllers.participant_controller$notify$fn__5113.invoke(participant_controller.clj:101)
at clojure.lang.AFn.applyToHelper(AFn.java:185)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:623)
at clojure.core$binding_conveyor_fn$fn__4115.doInvoke(core.clj:1848)
at clojure.lang.RestFn.applyTo(RestFn.java:146)
at clojure.lang.Agent$Action.doRun(Agent.java:114)
at clojure.lang.Agent$Action.run(Agent.java:163)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:680)
java.sql.SQLException: You can't operate on a closed Connection!!!
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65)
at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:222)
at clojure.java.jdbc$prepare_statement.doInvoke(jdbc.clj:450)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:621)
at clojure.java.jdbc$with_query_results_STAR_.invoke(jdbc.clj:646)
at clj_record.core$find_by_sql$func__3362__auto____3378.invoke(core.clj:72)
at clj_record.core$find_by_sql.invoke(core.clj:71)
at clj_record.core$find_records.invoke(core.clj:85)
at clj_record.core$find_record.invoke(core.clj:91)
at neataudio.domain.interview_participant_relationship$find_record.invoke(interview_participant_relationship.clj:18)
at neataudio.domain.interview_participant_relationship$update_email_status.invoke(interview_participant_relationship.clj:37)
at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115$fn__5116.invoke(participant_controller.clj:104)
at clojure.java.jdbc$transaction_STAR_$fn__3162.invoke(jdbc.clj:372)
at clojure.java.jdbc$transaction_STAR_.invoke(jdbc.clj:371)
at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115.invoke(participant_controller.clj:101)
at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114.invoke(participant_controller.clj:101)
at neataudio.controllers.participant_controller$notify$fn__5113.invoke(participant_controller.clj:101)
at clojure.lang.AFn.applyToHelper(AFn.java:185)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:623)
at clojure.core$binding_conveyor_fn$fn__4115.doInvoke(core.clj:1848)
at clojure.lang.RestFn.applyTo(RestFn.java:146)
at clojure.lang.Agent$Action.doRun(Agent.java:114)
at clojure.lang.Agent$Action.run(Agent.java:163)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.NullPointerException
at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:186)
... 28 more

If you use the new API, you pass the db-spec into the functions and it will take care of getting the connection, using it, and closing it - and that will all happen on the same (agent) thread. I would not expect you to be able to get a connection in one thread and use it from another thread.

When you get a chance, please post an example of both the working and non-working code, but my initial thinking is this isn't a bug, just an inherent limitation of working with connections across threads...?

I can't say I like that approach. I really don't want to expose the ...* implementation functions. Can you explain what you needed to do differently in your result-set-seq function? Making that extensible by some other approach seems like a better way to tackle this...

<jkkramer> I was initially thinking that (I have a use case in mind I'll explain) but thought this approach allowed for maximum flexibility for callers
<jkkramer> my particular use case is I wanted custom column labels
<jkkramer> which could be accomplished with a custom-label keyword arg or similar to result-set-seq, which is a function that takes rsmeta and idx
<jkkramer> however – i also wanted to be able to return additional per-column metadata such as the table name, which could be used later on to e.g. create nested hash map results. it starts getting complicated to rely on result-set-seq. being able to write my own ResultSet-processing function would be nice
<jkkramer> but i don't want to have to bother with connections, prepared statements, etc. just ResultSet processing

Just to reinforce the point: I don't see it as exposing implementation details so much as exposing the host – i.e., JDBC building blocks. Being able to process a ResultSet without having to manage the connection details would be extremely useful to me. I can make do until 0.4.0+ though.

Given recent discussions about the direction of java.jdbc and the DSLs that crept into 0.3.0, I'm going to close this as "won't fix" and direct efforts toward adding DDL support to HoneySQL (and we talked a bit about this at Clojure/conj).

sqlite has the IF NOT EXISTS clause to CREATE TABLE, but there doesn't seem to be an easy way to use it through jdbc.

It is usually an error to attempt to create a new table in a database that already contains a table, index or view of the same name. However, if the "IF NOT EXISTS" clause is specified as part of the CREATE TABLE statement and a table or view of the same name already exists, the CREATE TABLE command simply has no effect (and no error message is returned). An error is still returned if the table cannot be created because of an existing index, even if the "IF NOT EXISTS" clause is specified.
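Until/unless the library supports this directly, one workaround is to build the DDL string yourself and pass it to do-commands. A sketch (the helper name is made up, loosely modeled on create-table's column-spec style):

```clojure
(require '[clojure.string :as str])

;; Hypothetical helper: emit SQLite's IF NOT EXISTS variant of CREATE TABLE
;; as a plain SQL string, suitable for passing to do-commands.
(defn create-ddl-if-not-exists
  [table & specs]
  (str "CREATE TABLE IF NOT EXISTS " (name table) " ("
       (str/join ", " (map #(str/join " " (map name %)) specs))
       ")"))

;; (create-ddl-if-not-exists :fruit [:id "INTEGER" "PRIMARY" "KEY"] [:name "TEXT"])
;; => "CREATE TABLE IF NOT EXISTS fruit (id INTEGER PRIMARY KEY, name TEXT)"
```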

Java code has already opened a java.sql.Connection instance that should be used to access the database, instead of opening a new connection on the Clojure side.

The with-db-connection macro could be used for this with a {:connection conn} db-spec. However, as it wraps the body inside a with-open macro, the provided connection will be closed at the end.

In practice, the following macro could fix the problem. However, I don't know whether the already-existing with-db-connection macro should be extended instead. I've also skipped the documentation as this is only an example:
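The macro itself is attached to the issue rather than shown here; a minimal sketch of what such a macro might look like (the name and exact shape are assumptions):

```clojure
;; Hypothetical sketch: bind an already-open java.sql.Connection into the
;; db-spec WITHOUT with-open, so the connection is not closed when the
;; body exits. The caller remains responsible for closing conn.
(defmacro with-existing-connection
  [[sym db-spec conn] & body]
  `(let [~sym (assoc ~db-spec :connection ~conn)]
     ~@body))

;; Usage sketch:
;; (with-existing-connection [db {:subprotocol "h2"} conn]
;;   (jdbc/query db ["SELECT 1"]))
```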

with-db-connection is a recent addition, a convenience for users who want to open and close a connection around a specific piece of code. The default mode of operation in java.jdbc was for users to manage their own connections. Your macro removes useful information from the db-spec. A more useful approach would be (jdbc/add-connection db-spec (jdbc/get-connection db-spec)), so that the db-spec is maintained in the "connected spec". However, that requires the db-spec to be passed in and locks you into calling get-connection, which not all code may want. What you're asking for is not provided for a reason: it isn't universally applicable to all java.jdbc users, and therefore I do not think it belongs in the library.

The latter does not deal with nil, which would probably be a natural choice for passing falsehood, while the former would have to be extended whenever another kind of thing may be passed as SQL, which seems unlikely.

I've got a bunch of integration/system tests in Lobos which are run over multiple target databases. As I do not want to force all target drivers to be dev-dependencies, I check if the driver is available before running those tests. As long as the :classname key is in the db-spec it's all fine, but now that java.jdbc supports db-specs where that key is optional, it would be useful to have access to the classnames var, so it would be great to have it made public. This isn't a blocker as up till now I haven't really needed it, but I'm planning to add some tests for string-based db-specs.

Right now, that's in the internal namespace which is going away so I can't make it part of the API. You can always pull private vars out with @#'clojure.java.jdbc.internal/classnames but bear in mind that internal ns will merge with the parent ns soon!

I think this is more consistent than returning a sequence sometimes and a single value other times. If you want the first key, you know you're always safe to call `first` on the result. I'll probably just update the docstring to make that clearer what happens for different circumstances.