This isn't a major issue, just something I've been researching. Here's some background on my situation. I have an admin database that I use to track performance and other items of interest. One table in it tracks memory usage and has 6 fields; I use 2 of them to make up a clustered index over approximately 310,000 records. The data file is 500 MB and the log file is 279 MB. Log usage is normally around 1% (3 MB). My issue arises on weekends, when indexes are evaluated and rebuilt/reorganized accordingly: when the index is rebuilt, the log spikes to 99% full and grows if necessary.

After doing some research, I found out why it happens (not the best explanation): when a clustered index is rebuilt, it is broken down, sorted, and rebuilt as a second copy of the index; only then is the original index dropped. The operation is fully logged, which blows up the transaction log. I've come across a few solutions for this issue:

- First, obviously, leave the log as big as it is (or as big as it needs to be). This is the easiest option, but it makes me feel the log is just wasted space the other 98% of the time, and I'm not too interested in shrinking the log every week.

- Second, use the SORT_IN_TEMPDB = ON option during the rebuild operation. This is what interests me and is mainly why I am writing this post. As far as I know, the sorting would take place in tempdb and be logged in its transaction log, relieving the admin database of such duties. If all conditions were met, this would be ideal for performance, but my main concern is not letting disk space get out of hand: the downside is that the used space just moves from the admin database to tempdb, so the overall footprint of the instance isn't really reduced.

- Third, change the recovery model to BULK_LOGGED just before the rebuild operation and back to FULL afterwards. This minimally logs the rebuild, reducing the amount of log growth.
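For what it's worth, the second and third options might be sketched like this (the database, table, and index names here are placeholders, not from the original post):

```sql
-- Option 2: rebuild with the sort runs placed in tempdb
ALTER INDEX IX_MemoryUsage ON dbo.MemoryUsage
REBUILD WITH (SORT_IN_TEMPDB = ON);

-- Option 3: switch to BULK_LOGGED around the rebuild so the
-- operation is minimally logged, then switch back to FULL
ALTER DATABASE AdminDB SET RECOVERY BULK_LOGGED;

ALTER INDEX IX_MemoryUsage ON dbo.MemoryUsage REBUILD;

ALTER DATABASE AdminDB SET RECOVERY FULL;

-- Take a log backup afterwards: a minimally logged operation
-- means point-in-time recovery is not possible within that
-- log backup's interval.
```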

Does anyone have any real world experience or opinions on these matters?

calvo (1/4/2011): This isn't a major issue, just something I've been researching. [...]

1) Why in the world are you concerned about a 279 MEGABYTE transaction log?? If your disk space is that tight, you have some very serious issues to deal with.

2) Sort in tempdb is a wonderful thing for numerous reasons: it lets you split I/O across drives, it avoids transaction log use in the primary database, and it can potentially give you a more contiguous data layout.

You're right, a 279 MB log isn't big, and as I said, it's not an issue right now. I was interested in how others might have handled the same situation; perhaps I could learn from their experience and leverage it if this were ever to arise in a more important production database.

I recently switched our index rebuild scripts to use tempdb for the sort runs instead of the database containing the index. I had the same issue where the transaction log was ballooning to about half the size of the database during the nightly rebuild task. With a 500 GB database and a Neverfail implementation all that excess log file I/O was definitely becoming a headache.

Switching to tempdb resolved the excessive log growth issue, and the amount of time it takes to perform the reindexing also seems to have improved. The only catch is that you have to make sure you have sufficient space on your tempdb disk set to accommodate the amount of index data stored there during the sorting. At your database's current size you really don't have to worry, but I would definitely recommend using this (or at least testing the performance) as you manage larger DBs.

Thanks Chris. I was thinking more about it last night (dreaming of SQL Server? I need a break) and thought of an additional point in favor of SORT_IN_TEMPDB. When I rebuild indexes, I loop through each database and rebuild/reorg as needed. Sorting in tempdb would leave me with only one large log file, rather than multiple large log files, one for each database with index operations.

I'm implementing SORT_IN_TEMPDB this weekend in development. We'll see how it goes.
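The per-database loop described above might look something like this (a rough sketch, not the poster's actual script; the 5%/30% fragmentation thresholds and the 1,000-page floor are the commonly cited rules of thumb):

```sql
-- Build and run a REBUILD/REORGANIZE statement per fragmented index
-- in the current database. Repeat per database as needed.
DECLARE @sql nvarchar(max);

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
         + CASE WHEN ps.avg_fragmentation_in_percent > 30
                THEN N' REBUILD WITH (SORT_IN_TEMPDB = ON);'
                ELSE N' REORGANIZE;' END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    JOIN sys.objects AS o ON o.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE ps.avg_fragmentation_in_percent > 5
      AND ps.page_count > 1000   -- skip tiny indexes
      AND i.index_id > 0;        -- skip heaps

OPEN cur;
FETCH NEXT FROM cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM cur INTO @sql;
END
CLOSE cur;
DEALLOCATE cur;
```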

TheSQLGuru (1/7/2011): Good luck with the testing. Don't forget to size tempdb appropriately rather than waiting for autogrowths to size it for you.

Yep. It's happening Saturday night, and there's no better way I could think of spending my Saturday night than watching transaction logs grow. I set the tempdb log equal to the largest log growth of the admin database and will keep an eye on usage during the maintenance.
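Pre-sizing tempdb and watching usage during the window might look like this (the 512 MB figure is only an example; `templog` is the default logical name of the tempdb log file, but check yours first):

```sql
-- Confirm tempdb's logical file names and current sizes
SELECT name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;

-- Pre-size the tempdb log so autogrowth doesn't do it mid-rebuild
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 512MB);

-- During maintenance: log size and "Log Space Used (%)" per database
DBCC SQLPERF(LOGSPACE);
```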

Alright, it didn't go as expected. I set SORT_IN_TEMPDB = ON and didn't get the results I expected. Looking over the logs, the index was reorganized, not rebuilt. This got me thinking about performance, so I set up a test.

I have two tables, sourcedata and test, each with a clustered index. Here are the steps I would take before each round:

- insert 500k records from test.sourcedata into test.test
- insert 500k more records into test.test to create fragmentation (99% and 3,300 pages)
- truncate the transaction log for more accurate results
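A sketch of that setup might be (column lists, the log file's logical name, and the recovery model are assumptions, since the post doesn't give them):

```sql
USE test;

-- Round setup: reload the test table, then load again to fragment it
TRUNCATE TABLE dbo.test;
INSERT INTO dbo.test SELECT * FROM dbo.sourcedata;  -- 500k baseline rows
INSERT INTO dbo.test SELECT * FROM dbo.sourcedata;  -- 500k more to fragment

-- Confirm fragmentation on the clustered index (index_id = 1)
SELECT avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.test'), 1, NULL, 'LIMITED');

-- "Truncate" the log between rounds: in SIMPLE recovery a checkpoint
-- clears it; shrinking like this is for testing only, not production
CHECKPOINT;
DBCC SHRINKFILE (test_log, 1);
```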

I would note the log sizes of tempdb and the test database after each truncate and again after each index operation. Here is what I got:

What gets me here is that REORGANIZE had more of an effect on the log than a REBUILD operation. It could be my own misunderstanding, but I thought a rebuild was more log-intensive. I'm also unsure why SORT_IN_TEMPDB = ON didn't result in a larger tempdb log file; the testdb log actually grew more.
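A possible explanation, for what it's worth: SORT_IN_TEMPDB only moves the intermediate sort runs to tempdb; the pages of the new index are still allocated, and logged, in the user database, so the user database's log still takes most of the hit. And REORGANIZE is always fully logged as it moves pages one at a time, so on a heavily fragmented index it can generate more log than a rebuild. A quick way to compare the log impact of each variant (index and table names are placeholders):

```sql
-- Snapshot "Log Space Used (%)" for every database before the operation
DBCC SQLPERF(LOGSPACE);

-- Swap in the variant under test:
ALTER INDEX IX_test ON dbo.test REORGANIZE;
-- ALTER INDEX IX_test ON dbo.test REBUILD;
-- ALTER INDEX IX_test ON dbo.test REBUILD WITH (SORT_IN_TEMPDB = ON);

-- Snapshot again and compare testdb vs tempdb deltas
DBCC SQLPERF(LOGSPACE);
```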

Hi, I am doing rebuilds/reorgs on the live database in our company, see the same things, and have a few ideas about how it works. I have not used sort in tempdb so far, but I normally don't rebuild/reorg clustered indexes automatically, because they don't tend to fragment that much, so I do them manually once in a while. This document, http://technet.microsoft.com/en-us/library/cc966402.aspx, has quite a lot of information about space usage and where it occurs; maybe the explanation you are looking for is there.

You truncated/shrank the log, so you went back to, let's say, 20 MB every time? Have you done these rebuilds with ONLINE = ON or OFF?

BR Christoph