SQL SERVER – Automated Index Defragmentation Script

Index defragmentation is one of the key processes for significantly improving the performance of any database. Index fragmentation occurs whenever transactions modify a database table — typically through inserts, updates, and deletes. Fragmented data can produce unnecessary reads, thereby reducing the performance of heavily fragmented tables.

I have often been asked to share my personal Index Defragmentation Script. Well, I use the Automated Index Defragmentation Script created by my friend – a SQL expert – Michelle Ufford (a.k.a. SQLFool). Michelle is a SQL Server Developer, DBA, a humble blogger, and an absolute geek! She is also the President of the Eastern Iowa PASS Chapter, known as the I380 Corridor Professional Association of SQL Server. Currently, she is working with large, high-volume, high-performance SQL Server databases at GoDaddy.com. Her nickname might be SQLFool but, honestly speaking, she is one of the best gurus of SQL. You can read her blog here.

Michelle has written an excellent article about her Automated Index Defragmentation Script. I recommend this wonderful write-up to all database developers who are searching for a good solution to improve database performance. Read her article Automated Index Defragmentation Script to equip yourself with a better understanding of how to improve database performance.

Here, I would like to mention an interesting question taken from the above-mentioned article, which clearly shows how powerful this Automated Index Defragmentation Script is.

How long will this Automated Index Defragmentation Script take to run? Well, it depends. I don’t necessarily recommend running it without specifying a database; at least, not unmonitored. You *can* do that, but it could take a while. For example, to run sys.dm_db_index_physical_stats for all databases and tables, totaling 2TB, took me 4.5 hours; that doesn’t even count the actual defrags.
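To get a feel for why the scan alone can take hours, you can run the underlying dynamic management function yourself. Below is a minimal sketch scoped to the current database, using the lightweight LIMITED scan mode; the thresholds shown are common illustrative values, not the actual values used by Michelle's script:

```sql
-- Check fragmentation for the current database only; DB_ID() scopes the scan
-- so it does not touch every database on the server.
SELECT  OBJECT_NAME(ips.object_id)          AS table_name,
        i.name                              AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
  ON    i.object_id = ips.object_id
 AND    i.index_id  = ips.index_id
WHERE   ips.avg_fragmentation_in_percent > 10   -- illustrative threshold
  AND   ips.page_count > 1000                   -- ignore tiny indexes; fragmentation there is noise
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

Passing NULL for the database ID instead of DB_ID() is what makes the function walk every database — which is exactly the multi-hour scan described above.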

I looked at this script from a coding perspective and checked the logic in depth… I believe it is one of the best.

I do have a few suggestions, which can be ignored if they do not make sense.

1. To check for existing tables, the script selects Object_ID from sys.tables. But to drop procedures, it does not use sys.procedures; instead it takes the long path: OBJECTPROPERTY(OBJECT_ID('proc_name')…. Any specific reason? Just curious to know.

2. Online rebuild is good. But what if I do not want to use this feature because it would take a long time, or for some other reason? Even though I have Enterprise Edition, I may still want to rebuild my indexes offline. The stored procedure accepts a parameter @editionCheck (1 = rebuild online, 0 = rebuild offline). Yet even if I pass @editionCheck = 0, if I have Enterprise Edition the script automatically updates @editionCheck from 0 to 1. (I wanted it to stay 0, but the script forcefully updated it to 1.)

3. We have two WHILE loops: one to store object names from a database, the other to execute SQL statements using those object names.

a) Is there a specific reason we defragment the index with the highest fragmentation first? If there is no reason, then why use SELECT TOP 1? From a performance standpoint I don't think this is good. (I know very little about performance tuning; please correct me if I am wrong.)

b) Since we are using TOP, we have to use a DELETE statement, which is again a logged operation, increasing overhead… Why not just use ID = @Var and increment @Var to @Var + 1, then drop the #temp table at the end?

I agree there may be only a few databases, but there could be many tables in a database and many more tables throughout the server.

Note: I have written my own script to rebuild indexes, which is not even comparable with this one (this script is way better than mine).

1. This would require users to be on SQL 2005 SP2. The informal poll I did showed that this would exclude many people, so I went the long way instead.
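The two existence checks being compared can be sketched side by side. The procedure name below follows the script's convention but should be treated as illustrative; the key difference is that catalog views such as sys.procedures do not exist on SQL Server 2000, while OBJECTPROPERTY does:

```sql
-- "Long path": works on SQL Server 2000 and later.
IF OBJECTPROPERTY(OBJECT_ID('dbo.dba_indexDefrag_sp'), 'IsProcedure') = 1
    PRINT 'procedure exists (OBJECTPROPERTY check)';

-- Catalog-view path: requires SQL Server 2005 or later.
IF EXISTS (SELECT 1 FROM sys.procedures WHERE name = 'dba_indexDefrag_sp')
    PRINT 'procedure exists (sys.procedures check)';
```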

2. You need to pass @onlineRebuild = 0 to the stored procedure to turn off online rebuilds. @editionCheck is internal to the stored procedure and does not need to be configured.
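Following Michelle's answer, forcing offline rebuilds is a matter of setting the public parameter rather than the internal flag. A minimal sketch of the call (other parameters omitted; the procedure name is assumed from the script's convention):

```sql
-- Force offline rebuilds even on Enterprise Edition.
-- @editionCheck is internal to the procedure and is not meant to be set by callers.
EXECUTE dbo.dba_indexDefrag_sp
    @onlineRebuild = 0;   -- 0 = rebuild offline, 1 = rebuild online where supported
```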

3.a. Yes, there’s a reason for it. I list indexes in order of most fragmented because, if you’re just returning the list and building it manually, you probably want to work on your most fragmented first. Also, I have occasionally had to kill my defrag script during execution; I’d rather know that it had worked on the most heavily fragmented tables and left the less fragmented ones.
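The worst-first ordering Michelle describes comes down to how each iteration picks its row. A rough sketch, with a hypothetical work table #indexList:

```sql
-- Pick the most fragmented index for this iteration, so that if the run is
-- killed partway through, the worst offenders have already been handled.
SELECT TOP 1
        objectName,
        indexName,
        avg_fragmentation_in_percent
FROM    #indexList
ORDER BY avg_fragmentation_in_percent DESC;
```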

3.b. I could look at doing that, but relatively speaking, the amount of deleting, and the frequency of it, is pretty small. I’ve never had any performance issues with it.
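For comparison, the reader's suggested alternative — walking the work table by an incrementing key instead of deleting each processed row — would look roughly like this (table and column names are hypothetical):

```sql
-- Iterate by id rather than TOP 1 + DELETE; only one drop at the end
-- instead of one logged delete per index.
DECLARE @id INT, @maxId INT;
SET @id = 1;
SELECT @maxId = MAX(id) FROM #indexList;   -- #indexList has a sequential id column

WHILE @id <= @maxId
BEGIN
    SELECT objectName, indexName, avg_fragmentation_in_percent
    FROM   #indexList
    WHERE  id = @id;

    -- ... build and execute the ALTER INDEX statement here ...

    SET @id = @id + 1;
END;

DROP TABLE #indexList;
```

The trade-off is losing the worst-first ordering benefit unless the id is assigned in fragmentation order when the table is populated.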

I have a large table that receives inserts and updates every 15 minutes (about 2,000 records). From your point of view, is it OK if I run the automated index defragmentation on this large table on a daily basis, or should I run it every 30 minutes?

My aim is for the index defragmentation to finish before the next insert or update, i.e. within the 15-minute window.

I have a few queries on this script. It checks whether @allowPageLocks = 0 (counting indexes that have ALLOW_PAGE_LOCKS = 0), and if the object has one or more LOB columns then, even though the index fragmentation value exceeds the rebuild threshold, it does not rebuild the index but reorganizes it instead.

I am a little confused about this and need to know: what is the reason we should not rebuild an index when we find one or more LOB columns and ALLOW_PAGE_LOCKS = 0?
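The two properties the question refers to can be inspected directly from the catalog views. A hedged sketch, with a hypothetical table name — the LOB test here is a simplification of whatever check the script actually performs (on SQL Server 2005/2008, an index covering LOB columns cannot be rebuilt ONLINE, and an index with ALLOW_PAGE_LOCKS = 0 cannot be reorganized):

```sql
-- For each index on the table: does it allow page locks (needed for REORGANIZE),
-- and does the table contain LOB columns (which block ONLINE rebuilds)?
SELECT  i.name              AS index_name,
        i.allow_page_locks,
        (SELECT COUNT(*)
         FROM   sys.columns AS c
         WHERE  c.object_id = i.object_id
           AND (c.max_length = -1                     -- (n)varchar(max), varbinary(max), xml
                OR c.system_type_id IN (34, 35, 99))  -- image, text, ntext
        )                   AS lob_column_count
FROM    sys.indexes AS i
WHERE   i.object_id = OBJECT_ID('dbo.MyTable');   -- hypothetical table name
```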

Pinal Dave is a technology enthusiast and an independent consultant. He has authored 11 SQL Server database books and 21 Pluralsight courses, and has written over 3800 articles on database technology on his blog at http://blog.sqlauthority.com. Along with 14+ years of hands-on experience, he holds a Master of Science degree and a number of database certifications. For any SQL Server performance tuning issue, send email to pinal @ sqlauthority.com.

Nupur Dave is a social media enthusiast and an independent consultant.