I have a problem where, with all versions (1.1.3, 1.2.0 and 2.0.0), the value in the sequence table runs behind the values that are actually stored in the table. That results in a PK exception on the next insert.

Can anyone think of a scenario in which this can happen? That is: EclipseLink issues a PK value but does not update or commit the sequence table.

The only thing I can think of is that the connection writing the sequence table is rolled back, while the one doing the inserts is not.

On 2010-01-04 20:04, James wrote:
> I cannot see how this is possible. EclipseLink always writes to the
> database before using a sequence value.
>
> Are you changing your preallocation size dynamically? How is your
> sequence and connection pooling configured, and how are you allocating
> sequence numbers?

No. The strangest thing is that this popped up quite unexpectedly. I had a stable system using the 1.1 branch for a long time and switched to 2.0.0RC about two weeks before Christmas (skipping 1.2.0). And just before Christmas, suddenly all kinds of sequences started to 'lag'. I didn't have time to investigate any more, so I downgraded back to 1.1.3 and have a "panic.sql" running every 15 minutes that resets the most-used sequences to their max(PK) value. I was going on vacation during Christmas, and nobody called, so it worked. And it was a good excuse to bring the laptop on vacation ;-)

Now I have to start digging into what is causing this and could use some pointers where to look.
Fact is that the sequences lagged under 2.0.0RC1 - *that is 100% sure*; I kept resyncing the sequences many times a day (while I was downgrading and testing that build).
Possibly also under 1.1.3; at least the one test we did today showed this behavior, but the panic.sql was still running and may have interfered. Going to rerun that test on Wednesday.

The big changes I made recently are switching from getSingleResult to find (because of the caching), and from JPA events to EclipseLink events.
I should also check whether I made changes to the way the pool is set up. I remember fiddling with the parameter map for the EMF when upgrading to 2.0.0RC1. I think I had "commit using the same connection" in there, because of some locking that I was trying to do.

On 2010-01-04 20:04, James wrote:
> I cannot see how this is possible. EclipseLink always writes to the
> database before using a sequence value.

I am getting closer. At the moment I'm trying to reproduce the following scenario:
- new entity
- entity is assigned a PK
- persist -> exception because of some database constraint (e.g. not null)
- rollback (also the modification to the sequence is rolled back, because all use the same connection)
- user makes change to resolve the error
- persist -> ok
- commit
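
The scenario above can be sketched in plain Java (no EclipseLink involved; all class and method names here are hypothetical, just modeling the suspected behavior): the sequence-table update joins the entity transaction, so rolling back the failed persist also rolls back the increment, while the entity keeps its already-assigned id.

```java
import java.util.ArrayList;
import java.util.List;

public class SequenceLagDemo {
    /** Simulates the SEQUENCE table row; its update joins the entity transaction. */
    static class TableSequence {
        long committed = 0;   // value visible in the database
        long pending = 0;     // uncommitted increment in the open transaction

        long nextVal() { pending = committed + 1; return pending; }
        void commit()   { committed = pending; }
        void rollback() { pending = committed; }
    }

    static class Entity {
        Long id;              // stays assigned even after rollback
    }

    public static void main(String[] args) {
        TableSequence seq = new TableSequence();
        List<Long> table = new ArrayList<>();   // PKs actually stored in the table

        Entity e = new Entity();
        e.id = seq.nextVal();   // first persist assigns PK 1
        seq.rollback();         // constraint violation -> rollback;
                                // the sequence increment is rolled back too

        // user fixes the error; second persist sees an id and allocates nothing
        table.add(e.id);
        seq.commit();           // commits, but there was no new increment

        System.out.println("max PK in table = " + table.get(0));   // 1
        System.out.println("sequence value  = " + seq.committed);  // 0
        // next insert: nextVal() hands out 1 again -> duplicate-PK exception
    }
}
```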

What goes wrong here is that the entity is assigned a PK before the exception.
The second persist does not need to allocate one, since the entity already has a PK, so it won't, and thus the sequence is not incremented.
Result: the last PK in the table is higher than the sequence value.
Possible solutions:
1. fetch PK in separate connection and commit each time
2. rollback the PK, or assign it to some other entity

Naturally I'm going to attempt #1; I remember EL doing that, but I do not know how this will play with my locking scenario in another part of the application.
#2 is probably a result of the changes in the way the fields are assigned, as we discussed in the "1.1.3 vs 2.0 equals" thread.

lOptions.put(PersistenceUnitProperties.JOIN_EXISTING_TRANSACTION, "true"); // reads and writes should go through the same connection

This makes not only the entity SQL reads and writes go through the same connection, but also the sequence SQLs.
And yes, now I have a connection conflict again; disabling it makes reads go through a different connection than writes, and any changes written to the database block queries on those records.

I am not sure JOIN_EXISTING_TRANSACTION also means that the sequence SQL should go through the same connection.

> I am not sure JOIN_EXISTING_TRANSACTION also means that the sequence SQL
> should go through the same connection.

"Real" sequences are not part of any transaction; once a sequence has generated a value, that value can never be generated again, rollback or not.
So I think sequence SQLs should always use their own connection and commit after each increment.
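
As a minimal sketch of that idea (plain Java, hypothetical names): an allocator that "commits" on every increment, outside any entity transaction, can never be rewound below a PK it already handed out, so the sequence stays at or above the highest PK in the table even after a rollback-and-retry.

```java
public class DedicatedSequence {
    private long value = 0;   // conceptually committed immediately on each increment

    /** Allocates the next id on its own "connection"; entity rollbacks never touch it. */
    public synchronized long nextVal() { return ++value; }

    public synchronized long current() { return value; }

    public static void main(String[] args) {
        DedicatedSequence seq = new DedicatedSequence();
        long id = seq.nextVal();      // persist #1; the entity tx then rolls back
        // the rollback does NOT rewind seq, at worst id 1 is "burned"
        long retryId = seq.nextVal(); // fresh id for the retried persist
        System.out.println(seq.current() >= id && seq.current() >= retryId); // true
    }
}
```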

I suppose that in general it is best to not continue to use objects from a failed transaction. They can have lingering data from the transaction, such as your id, or version numbers, or references to other objects in the failed transaction. If a transaction fails, best to restart from scratch.

If you want to reuse the failed objects, then you should reset the Id to null or 0, or keep a copy of their original state from before the persist.
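
A sketch of that reset, assuming a hypothetical entity with provider-managed id and version fields: clear them so the next persist allocates a fresh PK instead of reusing the one left over from the failed transaction.

```java
public class ResetOnFailure {
    /** Hypothetical entity shape; id and version are provider-managed. */
    static class Order {
        Long id;
        Long version;
        String payload;
    }

    /** Clears state the provider assigned during the failed transaction. */
    static void resetManagedState(Order o) {
        o.id = null;
        o.version = null;
    }

    public static void main(String[] args) {
        Order o = new Order();
        o.id = 42L;       // id assigned during the failed persist
        o.version = 3L;
        resetManagedState(o);
        System.out.println(o.id == null && o.version == null); // true
    }
}
```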

For sequencing you can use a sequence connection pool, which is definitely recommended if you are using TABLE sequencing.

"eclipselink.jdbc.sequence-connection-pool"="true"

For connections, another way to use the same connection for reads and writes is exclusive connections.
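
The two suggestions can be sketched as a properties map handed to the EMF at creation. The "eclipselink.jdbc.sequence-connection-pool" key is quoted from the post; the exclusive-connection key is my recollection of the EclipseLink property names of that era, so verify it against your version's documentation:

```java
import java.util.HashMap;
import java.util.Map;

public class EmfProps {
    public static Map<String, Object> sequencingProps() {
        Map<String, Object> props = new HashMap<>();
        // dedicated pool so TABLE-sequence increments commit on their own
        // connection, independent of entity transactions (key from the post)
        props.put("eclipselink.jdbc.sequence-connection-pool", "true");
        // route reads and writes through one exclusive connection
        // (assumed property name and value -- check your EclipseLink docs)
        props.put("eclipselink.jdbc.exclusive-connection.mode", "Always");
        return props;
    }

    public static void main(String[] args) {
        // would normally be passed to Persistence.createEntityManagerFactory("pu", props)
        System.out.println(sequencingProps());
    }
}
```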

> I suppose that in general it is best to not continue to use objects
> from a failed transaction. They can have lingering data from the
> transaction, such as your id, or version numbers, or references to other
> objects in the failed transaction. If a transaction fails, best to
> restart from scratch.

In a fat client, the user is navigating all over the place; adding, removing and modifying entities.
When he presses save, your suggestion is to iterate over all to-be-stored entities, create a clone, and store that?