Description

A connection request whose default schema is being created by another
transaction will block until that transaction completes (or times out).
In this situation the connection request should probably succeed as if
the schema did not exist.

This is a problem in particular for a prepared XA transaction, where even after restarting the system, the user cannot reconnect and recover the transaction.

ij> connect 'testdb;create=true';
ij> autocommit off;
ij> create table mytabi(i int);
0 rows inserted/updated/deleted
ij> connect 'testdb';
ERROR 40XL1: A lock could not be obtained within the time requested
at org.apache.derby.iapi.error.StandardException.newException(StandardException.java:295)
at org.apache.derby.impl.services.locks.LockSet.lockObject(LockSet.java:408)
at org.apache.derby.impl.services.locks.SinglePool.lockAnObject(SinglePool.java:168)
at org.apache.derby.impl.services.locks.SinglePool.lockObject(SinglePool.java:220)
at org.apache.derby.impl.store.raw.xact.RowLocking3.lockRecordForRead(RowLocking3.java:181)
at org.apache.derby.impl.store.access.heap.HeapController.lockRow(HeapController.java:425)
at org.apache.derby.impl.store.access.heap.HeapController.lockRow(HeapController.java:543)
at org.apache.derby.impl.store.access.btree.index.B2IRowLocking3.lockRowOnPage(B2IRowLocking3.java:329)
at org.apache.derby.impl.store.access.btree.index.B2IRowLocking3._lockScanRow(B2IRowLocking3.java:571)
at org.apache.derby.impl.store.access.btree.index.B2IRowLockingRR.lockScanRow(B2IRowLockingRR.java:115)
at org.apache.derby.impl.store.access.btree.BTreeForwardScan.fetchRows(BTreeForwardScan.java:374)
at org.apache.derby.impl.store.access.btree.BTreeScan.next(BTreeScan.java:1691)
at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getDescriptorViaIndex(DataDictionaryImpl.java:7118)
at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.locateSchemaRow(DataDictionaryImpl.java:1381)
at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getSchemaDescriptor(DataDictionaryImpl.java:1291)
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.initDefaultSchemaDescriptor(GenericLanguageCon
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.initialize(GenericLanguageConnectionContext.ja
at org.apache.derby.impl.db.BasicDatabase.setupConnection(BasicDatabase.java:267)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.startTransaction(TransactionResourceImpl.java:182)
at org.apache.derby.impl.jdbc.EmbedConnection.<init>(EmbedConnection.java:237)
at org.apache.derby.impl.jdbc.EmbedConnection20.<init>(EmbedConnection20.java:49)
at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(EmbedConnection30.java:56)
at org.apache.derby.jdbc.Driver30.getNewEmbedConnection(Driver30.java:68)
at org.apache.derby.jdbc.Driver169.connect(Driver169.java:172)
at java.sql.DriverManager.getConnection(DriverManager.java:512)
at java.sql.DriverManager.getConnection(DriverManager.java:140)
at org.apache.derby.impl.tools.ij.ij.dynamicConnection(ij.java:843)
at org.apache.derby.impl.tools.ij.ij.ConnectStatement(ij.java:700)
at org.apache.derby.impl.tools.ij.ij.ijStatement(ij.java:528)
at org.apache.derby.impl.tools.ij.utilMain.go(utilMain.java:283)
at org.apache.derby.impl.tools.ij.Main.go(Main.java:204)
at org.apache.derby.impl.tools.ij.Main.mainCore(Main.java:170)
at org.apache.derby.impl.tools.ij.Main14.main(Main14.java:50)
at org.apache.derby.tools.ij.main(ij.java:54)

Changing Fix Version to Unknown as I do not plan to fix it for 10.2. The workaround for users who cannot connect to recover a prepared XA transaction because of the lock timeout would be to connect with a different user so that the default schema is not in the middle of being created.

Kathey Marsden added a comment - 07/Jul/06 23:52

Uploading an experimental patch which makes the default schema
creation happen in a nested subtransaction which commits when done, so
the lock on SYSSCHEMAS can be released immediately.

This means that the Derby behaviour would change slightly, in that if the
user transaction rolls back, the schema creation would stay, whereas presently
it would be rolled back with the rest of the user transaction.

It makes the repro test (also part of the patch) pass, but there may
be other good reasons why this is not a good approach, so the patch is
for review only at this point.

Dag H. Wanvik added a comment - 26/May/08 18:08
Running regressions now.
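The idea of the patch can be modeled with a small self-contained sketch. A ReentrantLock stands in for the lock on the SYSSCHEMAS row; the class and method names are illustrative only, not Derby code. The point is that the lock is released when the nested unit "commits", instead of being held until the user transaction ends:

```java
import java.util.concurrent.locks.ReentrantLock;

public class EarlyRelease {
    // Stand-in for the SYSSCHEMAS row lock guarding schema creation.
    static final ReentrantLock sysSchemasLock = new ReentrantLock();

    static void createSchemaInNestedTransaction() {
        sysSchemasLock.lock();
        try {
            // ... the nested transaction inserts the schema row here ...
        } finally {
            // "Commit" of the nested transaction: the lock is released
            // immediately, so other connections trying to read the
            // schema row no longer block until the user transaction ends.
            sysSchemasLock.unlock();
        }
    }
}
```

After the call returns, another connection attempting to read the schema row would acquire the lock without waiting, which is exactly the behavior change the patch aims for.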

Only one test broke in the regression run, lang/lockTable.sql.
This is because this test runs with table-level locking, and the first patch
got a deadlock when the nested transaction for auto-creating a schema
tried to get a write lock on SYSSCHEMAS, presumably because the
outer transaction already held a table-level read lock on SYSSCHEMAS from the
first (failed) attempt to read the schema descriptor.

Patch derby-48-2 only attempts the schema creation in a subtransaction
if row-level locking is in use. I am not sure I like checking the property here,
but I did not find any API for asking the database about its locking level.

Dag H. Wanvik added a comment - 27/May/08 01:26
lang/lockTable.sql ran OK with the new version of the patch.


Knut Anders Hatlen added a comment - 27/May/08 11:53
I think it sounds like a good idea to do the implicit schema creation in a subtransaction. A couple of questions, though:
1. Should we try to handle self-deadlocks someway? I think the lock timeout produced by the ij script below is a self-deadlock:
ij> connect 'jdbc:derby:db;user=kah';
ij> autocommit off;
ij> set isolation rr;
0 rows inserted/updated/deleted
ij> select count(*) from sys.sysschemas;
1
-----------
11
1 row selected
ij> create table t (x int);
ERROR 40XL1: A lock could not be obtained within the time requested
ij>
2. In which situations can startNestedUserTransaction() fail? Would it be better to expose those errors if they happen instead of silently ignoring them as in the code below (from DDLConstantAction.getSchemaDescriptorForCreate()):
    try {
        nestedTc = tc.startNestedUserTransaction(false);
    } catch (StandardException e) {
    }

Thanks for looking at this patch, Knut. I agree it is a good idea to
fall back on the outer transaction in case of a timeout. This will also
handle the self-lock case more generally than the page- vs row-level
locking test in the previous version of the patch, although at the
expense of a wait. But such wait scenarios are much rarer than the
one this issue addresses, I think.

I am not sure if there is a normal use case for the failure of
creation of a nested transaction here, so I added code to throw in
sane builds for now.

Dag H. Wanvik added a comment - 27/May/08 19:56
I also added another test case for the self-lock fallback case.
Running regressions over again.
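The fallback strategy discussed above (try the work in a nested transaction; on a self-lock timeout, retry in the outer transaction that already holds the conflicting lock) can be sketched generically. This is a hypothetical illustration of the control flow only: the class, the Callable-based signature, and the helper name are invented for this sketch, while "40XL1" is Derby's documented lock-timeout SQLState.

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class NestedThenOuter {
    static final String LOCK_TIMEOUT = "40XL1"; // Derby lock-timeout SQLState

    // Run 'nested' first; if it times out on a lock (typically a
    // self-lock against our own outer transaction), fall back to
    // running 'outer', which already holds the conflicting lock.
    static <T> T runWithFallback(Callable<T> nested, Callable<T> outer)
            throws Exception {
        try {
            return nested.call();
        } catch (SQLException e) {
            if (LOCK_TIMEOUT.equals(e.getSQLState())) {
                return outer.call(); // retry at the expense of a wait
            }
            throw e; // anything else (e.g. a deadlock) is exposed
        }
    }
}
```

Note that only the plain timeout triggers the fallback; other SQL states propagate, matching the later decision in this thread to expose deadlocks rather than retry them.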


Knut Anders Hatlen added a comment - 29/May/08 13:00
Hi Dag,
The approach in the latest patch looks elegant. A couple of small comments and questions:
The try/catch around executeConstantAction() swallows all non-timeout exceptions. I think we need to move "else throw se" from the inner if to the outer if in the catch block.
I think the code below is fine, but it would be clearer if the comment also said: "In non-debug builds, we retry in the parent transaction before giving up."
    } catch (StandardException e) {
        if (SanityManager.DEBUG) {
            // Should not happen
            throw e;
        }
    }
Not that it makes any difference in practice, but... When we call commit() and destroy() on the nested transaction in the catch block, wouldn't it be more natural to call abort()+destroy(), since we don't want any side effects of the work performed by the nested transaction to be persisted? The javadoc for TransactionController.destroy() says that it will also abort the transaction, so I guess a call to destroy() would suffice.
When we check for SQLState.LOCK_TIMEOUT, should we also check for SQLState.LOCK_TIMEOUT_LOG, which is the message id used when lock tracing is enabled? And should SQLState.DEADLOCK be handled the same way?

Thanks for catching the fall-through case, Knut. The new patch,
derby-48-4, fixes this plus omits the commit for the failed cases,
destroy is sufficient I agree. This further simplifies the code, since
destroy can't throw. This removed the need for the extra comment, too.

I considered the cases of LOCK_TIMEOUT_LOG and DEADLOCK: I get the
former if I enable derby.locks.deadlockTrace=true. I think it may be
best to expose the self-lock if the user has enabled this property,
so she may take steps to avoid it (it only shows in derby.log if we
throw it, it seems). I added a test case,
testDerby48SelfLockingRecoveryDeadlockDetectionOn, to test this.

As for deadlock, I think this will only happen if another transaction
is involved, and in that case, trying again in the outer transaction
would only deadlock again (usually), so my current thinking is that it
may be best to let that one throw?

Dag H. Wanvik added a comment - 30/May/08 00:58
Re-running regressions.
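The SQLState distinctions settled on above can be captured in a small helper. The three state values are Derby's documented codes; the class name and the policy method are illustrative only, summarizing this thread's decision rather than quoting Derby's actual code:

```java
public class LockErrorPolicy {
    // Derby SQLStates (documented values):
    static final String LOCK_TIMEOUT     = "40XL1"; // plain lock timeout
    static final String LOCK_TIMEOUT_LOG = "40XL2"; // timeout when derby.locks.deadlockTrace=true
    static final String DEADLOCK         = "40001"; // deadlock detected

    // Retry in the outer transaction only for plain self-lock timeouts.
    // LOCK_TIMEOUT_LOG is exposed so a user who enabled tracing sees the
    // lock dump in derby.log; a deadlock involves another transaction,
    // so it is exposed rather than silently retried.
    static boolean retryInOuterTransaction(String sqlState) {
        return LOCK_TIMEOUT.equals(sqlState);
    }
}
```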


Knut Anders Hatlen added a comment - 30/May/08 07:32
Thanks Dag. Some more comments:
I can't find the test case in the patch. Did you forget svn add?
I agree that we probably shouldn't check for LOCK_TIMEOUT_LOG and DEADLOCK (Although I'm not sure the statement that retrying in the outer transaction usually causes a new deadlock is correct, since aborting the nested transaction probably would allow the other transactions involved in the deadlock to continue.)
In the code below, if we move destroy() out of the try block (which we could and perhaps should?), it becomes obvious that the potentially swallowed exception is an error in commit. This I think would mean that we haven't actually created the schema, and it feels a bit strange to ignore that exception, even in insane builds. I think it would make sense to revert this back to how it was in patch 1 and 2, where exceptions from nestedTc.commit() were not swallowed.
    try {
        nestedTc.commit();
        nestedTc.destroy();
    } catch (StandardException e) {
        if (SanityManager.DEBUG) {
            // Should not happen
            throw e;
        }
    }

Dag H. Wanvik added a comment - 30/May/08 14:18
Oops, the test was left out of patch #3; reattaching, plus:
a) changed the wording on the rationale for not catching the deadlock to
"it may be better to expose it"
b) removed the try-catch around the final commit in the nested transaction case; you
are right.
The regressions passed, so I'll commit this version if there are no further comments.


Knut Anders Hatlen added a comment - 30/May/08 15:33
The last patch looks fine.
Some small nits in the test, but they shouldn't stop you from committing.
In a comment near the end of testDerby48testNewSchemaHang: s/shoudl/should
In testDerby48testNewSchemaHang:
    } catch (SQLException e) {
        if (e.getSQLState().equals(LOCK_TIMEOUT)) {
            c1.rollback();
            c1.close();
            fail("DERBY-48 still seen");
        } else {
            throw e;
        }
    }
Why not just let the SQLException be thrown out of the test method regardless of the SQL state? As it is now, we'll lose the stack trace if it's a lock timeout.
The last part of testDerby48testNewSchemaHang tests that the implicitly created schema is still around after a rollback. It's maybe OK to test it, but it's not actually testing whether or not Derby does the right thing, it only canonizes the current behaviour. But I guess that's what many of our tests do anyway...
Is it intentional that derby.locks.waitTimeout is set to 1 both in setUp() and in testDerby48SelfLockingRecoveryDeadlockDetectionOn()?
waitTimeout is not reset, as far as I can see. I think it would be better to wrap the test with DatabasePropertyTestSetup.setLockTimeouts() which automatically does the right thing for you. DatabasePropertyTestSetup would probably also be the most appropriate way to set reset the deadlockTrace property.
In tearDown() we have this code:
    try {
        createStatement().executeUpdate("drop schema newuser restrict");
    } catch (SQLException e) {
    }
Perhaps you could add a comment about why the exception is swallowed. Is it because the schema in some cases does not exist? In that case, perhaps we should have an assertSQLState() to prevent unexpected exceptions from being ignored?
Is engine the right package for the test? We are also testing the client, it seems.

Dag H. Wanvik added a comment - 30/May/08 19:17
Uploading a new version, #6, which:
- Fixes the typo.
- Kept the explicit fail because I would like to see the JIRA number show if we get a regression here. I'd like to see a fail which lets us attach an exception.
- Changed the setup for timeouts to use the decorators you suggested, thanks; nicer.
- Added an assertSQLState(42Y07) - schema does not exist - for the drop schema case.
- As far as the package goes, I was not sure where to put it, but since you asked, moved it to lang; it is perhaps more appropriate.

Thanks for the new patch, Dag! It looks very good, so please go ahead and commit it.

> - Kept the explicit fail because I would like to see the JIRA number show
> if we get a regressions here. I'd like to see a fail which lets us
> attach an exception.

You could do a throw new Exception("DERBY-48 still seen", e) which would both print the JIRA number and preserve the original stack trace. By the way, the test name contains both the JIRA number and the word "hang", so it should be pretty obvious which bug has regressed if that test throws a timeout exception.

Knut Anders Hatlen added a comment - 30/May/08 20:10
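Knut's suggestion can be shown concretely: wrapping the timeout in a new exception names the JIRA issue in the message while keeping the original stack trace reachable via getCause(). The class and method names here are invented for illustration; only the wrapping pattern is from the comment above.

```java
public class ChainedFailure {
    // Re-throw a lock timeout as a descriptive failure without losing
    // the original exception: the SQLException (and its stack trace)
    // survives as the cause of the new exception.
    static void reportRegression(java.sql.SQLException e) throws Exception {
        throw new Exception("DERBY-48 still seen", e);
    }
}
```

This is the advantage over a plain fail(String): JUnit's fail() discards the triggering exception, whereas the chained throw preserves it for the test report.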

Dag H. Wanvik added a comment - 31/May/08 01:47
Right, but I'd like a JUnit failure, not an error. Oh well, I think we are OK with this one; either way
we will know what it is if we see it.
Thanks for your diligent help on this one, Knut!

This (#7) is the same as #6, except it adds a static method fail(String, Exception) to BaseTestCase and uses that for the case discussed above, plus makes a constant out of 40XL2; all in LazyDefaultSchemaCreationTest.

Dag H. Wanvik added a comment - 31/May/08 03:10

Dag H. Wanvik added a comment - 04/Aug/08 23:48
I was (and still am) hesitant to backport it since it does change behavior
slightly, cf. the release note. If people think it's OK and important
enough, I can backport it.

I can't tell from the release note exactly what the incompatible change is so it's hard to comment on if it should be backported.

E.g.:

The summary says "This is no longer the case", but it isn't clear what "This" refers to; should it be part of the previous sentence, e.g. "as well, this is ..."?
Maybe the summary could be rewritten to describe what does happen, rather than focusing on the previous behaviour?

In the "Symptoms Seen ..." section it says the schema will be created earlier than it would have been before. Is this true? From the summary it seems it is created at the same time as previously, i.e. the only change is if the transaction is rolled back, not when the schema is created.

The "Rationale for change" is more describing what is implemented, rather than the justification for changing it. Maybe it could be more along the lines of:
"Implicit schema creation is now performed in its own transaction to avoid deadlocks with other connections accessing the same schema"

Is this change in behaviour for all implicit schema creations or just for the default schema, it was hard to tell from the comments in this entry? That might change the release note as well.

Daniel John Debrunner added a comment - 05/Aug/08 00:35

Dag H. Wanvik added a comment - 05/Aug/08 11:09
Thanks for your comments, Dan. Uploading a new version of the release
notes which are hopefully clearer.
To your specific questions:
The implicit creation happens at the same time as before, the only
change is what happens if the user transaction rolls back.
The change is for the default schema only. Apart from the system
schemas, I am not aware of other schemas created implicitly.

I see Derby implicitly creates schemas in cases I did not realize,
e.g. CREATE TABLE S.T(..) will create S implicitly if it does not
exist. I see from the code that the patch change applies for such cases as
well, so the release note would need to reflect that as well as the
change stands.

But is it acceptable to let such implicitly created schemas not be
rolled back? I will have a look and see what SQL says about this, if
anything. If it is not acceptable, I will reopen this issue and see if
I can limit the change to just the default schema. If it is acceptable
to let this change impact all implicit schema creation, I will update
the release notes.

Dag H. Wanvik added a comment - 05/Aug/08 12:20

It seems the SQL standard does not allow implicit
schema creation as in CREATE TABLE S.T(...), but Derby does.

I am starting to doubt whether the solution chosen in this patch is
good. If we want our DDL to be transactional, always creating implicit
schemas in separate transactions seems to defeat that transactionality.

I can see four possibilities:
1) let the old behavior remain (essentially: "this is not a bug")
and roll back this patch
2) rework the patch to apply only for the initial default schema
Other implicit schema creations will happen in the user transaction as before.
3) drop the lazy schema creation, i.e. always create a user schema
as soon as a user (which has write access) connects
4) let the patch remain as is, impacting all implicit schema creation
and rewrite the release notes.

Dag H. Wanvik added a comment - 05/Aug/08 12:52 - edited
What do you think?

There's one more option as described in the original description which would fix the bug and not change behaviour.

5) Allow the connection request to succeed as though the schema does not exist. I.e. the connection succeeds and is in the default schema but does not wait for the real schema creation by the other thread.

Daniel John Debrunner added a comment - 05/Aug/08 19:14

Thanks, Dan. I had more or less discounted that solution; I think
because I didn't see a way of getting the schema descriptor without
risking waiting for a lock (would need something similar to
C_LockFactory.NO_WAIT?), but I am sure we could go that route too. I
will have a look.

Meanwhile, I enclose derby-48b-1 which implements solution 2) and adds
an extra test case to verify that other implicit schema creations
(besides the initial default schema) are still transactional.
Regression tests passed.

Dag H. Wanvik added a comment - 05/Aug/08 23:36

The derby-48b-1 patch looks like it is doing the right thing and I have hand-verified that the transactional behavior is as you describe. The release note looks good now.

It's hard not to regret the old decision to violate the ANSI rules and create schemas implicitly on the fly. I wonder whether we are just exchanging one edge case for another. With this patch, there is now a path by which transactional work can be committed on behalf of your connection even though you explicitly rollback your transaction. I'm afraid I'm not smart enough to know whether this edge case is already violating some assumptions made elsewhere in the code. At the very least, it will require us to put another post-it on our brains next to the note about implicit schema creation.

Rick Hillegas added a comment - 11/Aug/08 20:58

Thanks for looking at this Rick! I agree, I don't quite like this solution either.
It improves on the previous patch, so I will commit it, but leave the issue open
and see if I can make solution 5) work, cf Dan's comment.

Dag H. Wanvik added a comment - 12/Aug/08 13:01

Dag H. Wanvik added a comment - 18/Aug/08 16:35
Uploading derby-48-10_4-1, which I propose to commit to the 10.4 branch.
It is the sum of the two trunk patches, but I had to tailor them a bit, hence this
separate patch. Running regressions now.