
If your transaction log is getting full and you don't have autogrowth enabled on it (or don't have it sized properly), it sounds like this may be happening because of a long-running transaction. More specifics on exactly what's happening, and what your t-log file settings are, would better assist us in answering.
–
Thomas Stringer Jan 18 '13 at 14:41

What's that data? Rows? Do you do one big transaction or multiple smaller ones?
–
Marian Jan 18 '13 at 16:44


Based on that I'd assume that either your import is too large and needs to be changed to batch the import (via fast load), or you need to change the unpivot procedure to batch the process. That, or allow the log to grow larger.
–
mrdenny♦ Jan 22 '13 at 22:09

1 Answer

There may be nothing you can do except allow your log file to grow further. Without the exact message we can't be sure, but there are two reasons you might be running out of log space:

Your log file is not set to a large enough size (or is not set to auto-grow), in which case you need to adjust these properties.

The partition your log file is stored on is full, so even if you have log growth enabled it has nowhere to grow to. In this case you need more drive space.
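To tell which of the two cases you are in, you can check current log usage and the file's growth settings, then adjust them. A minimal sketch; the database name `YourDb` and the log file's logical name `YourDb_log` are placeholders you'd replace with your own:

```sql
-- Percentage of log space used per database:
DBCC SQLPERF(LOGSPACE);

-- Current size, growth increment and maximum size of this database's log file:
SELECT name, size, growth, max_size
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Allow growth in fixed increments (hypothetical names, adjust to taste):
ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_log, FILEGROWTH = 512MB, MAXSIZE = UNLIMITED);
```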

Everything your package does will go via the log file: this is an important part of how SQL Server manages transactions in order to keep your data consistent, and it cannot simply be switched off or temporarily moved. You might find that your package is performing more fully logged actions than it needs to in order to achieve its goal. You give no detail of the process your package follows, so we can only give general guidance:

Could the same rows be being modified many times in the process? If so, you could try to merge these steps. UPDATE table1 SET field1=1 WHERE ID=1; UPDATE table1 SET field2=2 WHERE ID=1; is two row updates that will be logged, where UPDATE table1 SET field1=1, field2=2 WHERE ID=1; is one. Over millions of rows this could stack up quite quickly. Likewise if you insert rows and later update them in the process.
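The same idea applies to the insert-then-update pattern: supply the final values in the INSERT itself rather than fixing them up afterwards. A sketch using the hypothetical `table1` from the example above:

```sql
-- Two logged modifications per row: an insert followed by an update.
INSERT INTO table1 (ID, field1) VALUES (1, 1);
UPDATE table1 SET field2 = 2 WHERE ID = 1;

-- One logged modification per row with the same end result.
INSERT INTO table1 (ID, field1, field2) VALUES (1, 1, 2);
```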

Are you refreshing a table completely by wiping it down and refilling it? In which case use TRUNCATE TABLE <tablename> instead of DELETE <tablename>. TRUNCATE is a minimally logged operation so will use less log space (and be much faster) than wiping the table by DELETE-ing every row.
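When TRUNCATE isn't an option (you need to keep some rows, or foreign keys reference the table), deleting in batches keeps any single transaction's log footprint small, in line with the batching suggested in the comments. A sketch; the table and column names (`dbo.StagingTable`, `LoadDate`) are hypothetical:

```sql
-- Minimally logged: deallocates whole pages instead of logging each row.
TRUNCATE TABLE dbo.StagingTable;

-- Fully logged alternative when you can't truncate: delete in batches,
-- so log space can be reclaimed between transactions.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.StagingTable
    WHERE LoadDate < '20130101';

    IF @@ROWCOUNT = 0 BREAK;
END;
```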

Better still: if you are deleting a bunch of rows then replacing them, are many of the replacement rows identical to the ones you initially delete? If so, you could update your logic to only UPDATE rows that need updating, INSERT rows that are missing, and DELETE rows that are superfluous. For instance, one of our apps gets a daily update feed from a client that contains state from another application: this is hundreds of thousands of rows, but only a few hundred are added/changed/deleted each day, so it is a lot more efficient to detect which few hundred need action instead of refreshing everything.
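That update/insert/delete logic can be expressed in one statement with MERGE (SQL Server 2008 and later). A minimal sketch, assuming a hypothetical target table `dbo.Target` and staged feed `dbo.DailyFeed`, both keyed on `ID` with a single `Value` column:

```sql
MERGE dbo.Target AS t
USING dbo.DailyFeed AS s
    ON t.ID = s.ID
WHEN MATCHED AND t.Value <> s.Value THEN
    UPDATE SET t.Value = s.Value               -- only rows that actually changed
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Value) VALUES (s.ID, s.Value)  -- rows missing from the target
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;                                    -- rows no longer in the feed
```

Only the changed, missing and superfluous rows generate log records, rather than every row in the feed.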