Transaction log filling up

I have archival logging on. It was my understanding that only with circular logging would the transaction log fill up. I checked the hard drive and there was plenty of space left, so I am confused as to how the logs are filling up and what I can do to prevent it. User exit is on, shipping archive logs to a Veritas server.
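For reference, the current logging configuration can be checked from the CLP; SAMPLE is a placeholder for the actual database name, and the grep assumes a Unix shell:

    db2 get db cfg for SAMPLE | grep -i log

That shows LOGFILSIZ, LOGPRIMARY, LOGSECOND, and whether USEREXIT/LOGRETAIN are on.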

Your problem is not archiving the logs; it is that your active logs are not big enough to hold all the currently active transactions. Either you are experiencing very large transactions (not committing often enough), or you need to increase the logging parameters (LOGPRIMARY, LOGSECOND, LOGFILSIZ).
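As a sketch, those can be raised with UPDATE DB CFG; SAMPLE and the values here are placeholders, and the new log file size only takes effect once all applications have disconnected from the database:

    db2 update db cfg for SAMPLE using LOGFILSIZ 4096
    db2 update db cfg for SAMPLE using LOGPRIMARY 13 LOGSECOND 50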

Another potential problem could be that there is not enough disk space to hold all log entries. Note that free disk space after the rollback doesn't mean anything. If secondary log files were used, they will be freed as part of the rollback, if possible.

I doubled the LOGFILSIZ and LOGSECOND of the database, and now the query runs without filling up the logs. Thanks for the advice.

Right now I'm just running a pruning script that has to DELETE 60,000 records for each hour of data. Wondered if there's a more effective way to prune this many records. I want to avoid locking, performance issues, and tons of logs.

I believe what is happening is that the DELETE of 60,000 records from table B escalates from row locks to a table lock on table B. The application does rapid inserts into table A, which has a trigger to update table B. Is there a way to DELETE this many records without locking out the table, which stalls the application?

To reduce the likelihood of lock escalation, increase the size of the LOCKLIST in the database configuration. The default is way too small. A value of 4096 (4K pages) is probably OK for most applications. Also, you should probably increase the MAXLOCKS to about 60.
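As a sketch, with SAMPLE as a placeholder database name and the values suggested above:

    db2 update db cfg for SAMPLE using LOCKLIST 4096 MAXLOCKS 60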

You could create a stored procedure that does cursor processing that commits after every delete, and only locks one row at a time (or locks n rows at a time if you only commit every n rows deleted). The stored procedure can be called from a script. This may take a little longer to execute, but it will minimize lock contention.
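A minimal SQL PL sketch of that approach; the table TABLE_B, the column RECORD_TS, and the 1000-row commit interval are placeholder assumptions:

    CREATE PROCEDURE PRUNE_TABLE_B (IN p_cutoff TIMESTAMP)
    LANGUAGE SQL
    BEGIN
      DECLARE v_at_end INT DEFAULT 0;
      DECLARE v_count INT DEFAULT 0;
      DECLARE v_ts TIMESTAMP;
      -- WITH HOLD keeps the cursor open across the intermediate COMMITs
      DECLARE c1 CURSOR WITH HOLD FOR
        SELECT RECORD_TS FROM TABLE_B
        WHERE RECORD_TS < p_cutoff
        FOR UPDATE;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_at_end = 1;

      OPEN c1;
      FETCH c1 INTO v_ts;
      WHILE v_at_end = 0 DO
        DELETE FROM TABLE_B WHERE CURRENT OF c1; -- one row at a time
        SET v_count = v_count + 1;
        IF MOD(v_count, 1000) = 0 THEN
          COMMIT; -- release the accumulated row locks
        END IF;
        FETCH c1 INTO v_ts;
      END WHILE;
      CLOSE c1;
      COMMIT;
    END

Create it with an alternate statement terminator (e.g. db2 -td@ -f prune.sql), since the procedure body itself contains semicolons.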

I increased LOCKLIST and MAXLOCKS significantly, but it's still escalating to a table lock. Not sure those parameters will help me, since I am trying to delete about 60,000 records at a time.

Is there any way to pass a flag to the DELETE command so it doesn't lock down the table? Using DB2 v8.2 right now. Or maybe a trigger to delete records as they come in... It's looking like I may have to do this offline.
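For what it's worth, the same chunk-and-commit idea can be expressed without a cursor by deleting from a fullselect, so no single statement ever touches more than a slice of the table. The table/column names and the 30-day cutoff are placeholders, and this syntax should be verified on v8.2:

    DELETE FROM (
      SELECT 1 FROM TABLE_B
      WHERE RECORD_TS < CURRENT TIMESTAMP - 30 DAYS
      FETCH FIRST 1000 ROWS ONLY
    );
    COMMIT;

Rerun the pair until the DELETE affects zero rows; each pass holds at most 1000 row locks before committing.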

I bumped up LOCKLIST to 3000 pages. This seems to be adequate to DELETE 60K records; I haven't received a lock escalation since.