
Unanswered: INSERT statement hangs on SYSSEQUENCES for more than 30 min

Have you ever had an INSERT statement take more than 30 minutes?

I use SQL like the following:
INSERT INTO RPTDATA(RPTDATA_ID, FTY, RECV_DATE) VALUES ( NEXTVAL FOR SN_RPTDATA, 'FTY1', timestamp('2009-06-04 00:38:55'));
SELECT PREVVAL FOR SN_RPTDATA AS MAX_ID FROM RPTDATA;

Auto-commit is off.
The first statement does the insert; the second retrieves the generated ID.
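As an aside, the usual way to read back the last generated value in DB2 is to select from the one-row dummy table SYSIBM.SYSDUMMY1, so the query returns exactly one row; selecting PREVVAL from RPTDATA itself returns one row per row in the table. A sketch, assuming the same sequence and table names:

```sql
-- Insert using the sequence, then read back the value generated
-- in this session. SYSIBM.SYSDUMMY1 is DB2's one-row dummy table.
INSERT INTO RPTDATA (RPTDATA_ID, FTY, RECV_DATE)
  VALUES (NEXT VALUE FOR SN_RPTDATA, 'FTY1',
          TIMESTAMP('2009-06-04 00:38:55'));

SELECT PREVIOUS VALUE FOR SN_RPTDATA AS MAX_ID
  FROM SYSIBM.SYSDUMMY1;
```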

After I run the 1st SQL, it sometimes (not always) hangs for more than 30 min without any error; only then can I run the 2nd SQL.
While the 1st SQL is waiting, I go to the Control Center and look at the lock details: it holds a lock on the table SYSIBM.SYSSEQUENCES.
If I open another connection at the same time and query the SYSSEQUENCES table, that query just waits.
After a long time, when the 1st SQL completes, the 2nd SQL can run, and the new connection returns the result from SYSSEQUENCES.

Does anyone have a clue why that happens? Could it be related to other applications using the same DB2 instance?

If I understand it correctly, you are seeing regular locking behavior. The 2nd connection has to wait until the 1st connection releases its lock on SYSIBM.SYSSEQUENCES. You can avoid the wait on the 2nd connection by using the UNCOMMITTED READ isolation level, which has its own implications...
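One related thing worth checking is the sequence's cache setting: with a small cache (or NO CACHE), DB2 has to update the catalog row for the sequence much more often, which increases contention on SYSIBM.SYSSEQUENCES. A sketch, assuming the sequence from the original post and a cache size chosen for illustration:

```sql
-- Hypothetical tuning: hand out values in batches of 100 so the
-- catalog row for the sequence is updated far less often. Unused
-- cached values can be lost on a crash or deactivate, which is
-- usually acceptable for surrogate IDs.
ALTER SEQUENCE SN_RPTDATA CACHE 100;
```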

Actually, I made just one connection.
And the 1st SQL hangs for more than 30 min before I even run the 2nd SQL...
So it does not seem related to the isolation level.
(I have already set UNCOMMITTED READ anyway... so I really don't know why it hangs there... with just one connection, it should not be blocked by another connection...)

What exactly do you mean by "the 1st SQL hangs for more than 30 min"? Do you stop it manually? Or does the statement finish and you wait 30 min before executing the 2nd SQL statement?

In any case, the insert should be really fast, so you probably have a lock-wait condition on the sequence. And that can only happen if a 2nd connection is holding the locks. You can try the following:
- run LIST APPLICATIONS to check which other connections exist
- reduce the lock timeout to a smaller value
- take a snapshot to see who is waiting for locks held by whom
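The steps above can be run from the DB2 command line; a sketch, where the database name MYDB is a placeholder:

```shell
# List all connections to the instance, with agent IDs and status
db2 list applications show detail

# Lower the lock timeout to 60 seconds so a lock wait fails fast
# instead of blocking for half an hour
db2 update db cfg for MYDB using LOCKTIMEOUT 60

# Snapshot of current locks: shows which agent holds which lock
# and which agents are waiting on it
db2 get snapshot for locks on MYDB
```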

Yes. The 1st SQL runs for more than 30 min (sometimes shorter, sometimes longer, with no pattern at all). I do not stop it or do anything.
I have set the lock timeout to 30 min, so it does not seem to be a lock problem: if it were, the SQL would return an error, but there is no error.

When I list applications in the Control Center, I cannot see any other connection to this database.
Of course, the instance does have many other applications connected to other databases.

Actually, I wonder whether the hard disk has a problem that is causing this. So later I would like to try an offline reorganization to see if it helps.

For this database instance, listing applications from the command line sometimes just hangs, even before I connect to any database. I wonder if the instance itself has some problem.

I really suspect a locking problem here. If the lock timeout is 30 min and you wait 30 min, that is an intuitive conclusion. However, you haven't given us more details (such as snapshots and how you collected them), so I can't comment any further.

As my application logs every SQL statement, I know how long the 1st SQL runs. It sometimes takes about 10 min and sometimes 1.5 hr, so the problem does not seem to be related to a lock.
I just performed an offline REORG and RUNSTATS, and the problem still exists.
I also found a clue in the file system: there are about 1 million log files in the log directory (even an "ls" takes a very long time). I am now monitoring whether the problem occurs when the current log file fills up and a new one has to be created.
Anyway, I am now moving the very old log files to another directory to see if the problem persists... sigh...
Hope it works...

I know that some file systems can't cope with too many files in a single directory, and performance degrades dramatically. That's why every good DBA makes sure that the number of log files doesn't grow too big and that older logs are archived. Another reason for archiving is that logs are typically kept on fast disks (to reduce performance degradation from slow log writes). And fast disks are still rather small these days, so you wouldn't want to fill them up.

So I suggest you look into log archiving and the facilities provided by DB2 to accomplish that automatically.
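A sketch of enabling automatic log archiving, assuming an archive path /db2/archlogs and a database named MYDB (both placeholders):

```shell
# Switch to archive logging; DB2 will move closed log files out of
# the active log path automatically instead of leaving them there.
db2 update db cfg for MYDB using LOGARCHMETH1 "DISK:/db2/archlogs/"

# Changing the logging method requires an offline backup before
# the database can be used again.
db2 backup db MYDB to /db2/backups
```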