I need to diagnose a problem. This Sunday, a regular database maintenance plan that is supposed to rebuild indexes took exactly 6 hours and 32 minutes. That's a heck of a lot of time, and during that whole process users were denied access to those tables. This is a production server (SQL Server 2000 SP3) on which the plan runs on a weekly basis. I want to know what caused it to run for so long. I know DBCC INDEXDEFRAG doesn't lock tables, but how can I make the database maintenance plan run DBCC INDEXDEFRAG instead of DBCC DBREINDEX? More importantly, why did it take 6 hours? One more thing: this plan is scheduled to run at off-peak hours.

It's possible that the long run time was caused by other portions of your maintenance plan running at the same time (a transaction log backup, possibly?), or by a user's long-running job that caused a blocking condition. You would need to have a blocking script running at the time to determine this, however.
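If you want to catch this next time, a quick check like the following (against the standard SQL Server 2000 sysprocesses table, run while the plan executes) will show any blocked sessions:

```sql
-- List blocked sessions and who is blocking them (SQL Server 2000).
-- blocked = SPID of the blocker; 0 means not blocked.
SELECT spid, blocked, waittime, lastwaittype, cmd, loginame
FROM master..sysprocesses
WHERE blocked <> 0
```

You could schedule this to log into a table every minute or so during the maintenance window.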

In any case, I would recommend not using the maintenance plan wizard at all.

Instead, you could create your own stored procs that perform this task and schedule them to run with SQL Agent.
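As a sketch, a proc along these lines could run DBCC INDEXDEFRAG index by index (the database, table, and index names here are made up for illustration; INDEXDEFRAG works online, so it avoids the long table locks DBREINDEX takes):

```sql
CREATE PROCEDURE dbo.usp_DefragIndexes
AS
BEGIN
    -- DBCC INDEXDEFRAG (database, table, index) defragments one
    -- index at a time without holding long-term table locks.
    DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', 1)     -- index id 1 = clustered
    DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', 2)     -- a nonclustered index
    DBCC INDEXDEFRAG (MyDatabase, 'dbo.Customers', 1)
END
```

Then create a SQL Agent job with a single T-SQL step that runs `EXEC dbo.usp_DefragIndexes` on your weekly schedule.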

Instead of running DBCC DBREINDEX through the maintenance plan wizard, write out the statements as eyechart mentioned. Then run the individual index scripts one at a time. Run sp_who to see what is going on during the execution of each statement.
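For example (table names and fill factors here are illustrative):

```sql
-- One statement per table; '' means rebuild all of that table's indexes.
DBCC DBREINDEX ('dbo.Orders', '', 90)
DBCC DBREINDEX ('dbo.OrderDetails', '', 90)

-- Meanwhile, from a second connection:
EXEC sp_who   -- check for SPIDs blocked during each rebuild
```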

Tara

Thanks eyechart and tduggan for your suggestions. The backup was supposed to run two hours after this plan, but since the plan got delayed, so did the backup job; there was no other job scheduled to run at that time. Plus, the application log on that server got corrupted too. From now on I will write a stored procedure and schedule it as a job. One thing I forgot to mention: the same plan ran last week without any trouble. It's hard to believe that one week's worth of data could cause such a halt; something else must be involved too.

One more thing: any guidelines on how I should architect that particular procedure? Should I first estimate which tables require reindexing and reindex only those, or just run through all the tables? And should any special care be taken for clustered indexes? Are any online tips/tutorials available?

I reindex all of the indexes regardless of whether they need it or not. I never got around to writing a script to check whether any of them were fragmented. On most systems, I run the reindex once per night. On larger ones, I run it only once per week, on certain indexes.

I would shy away from always checking to see if the indexes need reindexing.

To start, I would run DBCC SHOWCONTIG on each table to estimate when you should consider reindexing your tables, or whether you should use different fill factors. SHOWCONTIG uses a good deal of resources, so run it during off hours, perhaps in place of the reindex for the time being.
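For example:

```sql
-- Report fragmentation for one table's indexes (SQL Server 2000).
DBCC SHOWCONTIG ('dbo.Orders')
```

In the output, watch Scan Density (closer to 100% is better) and Logical Scan Fragmentation (lower is better); an often-cited rule of thumb is to reindex once Scan Density drops well below 100% or logical fragmentation climbs past roughly 10-20%.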

As each index starts to become fragmented, reindex it and note how long it took to reach an unacceptable level of fragmentation. That should be your reindex schedule for that table. If most of your tables are within a couple of days of each other, doing them all at once is not an issue. But if you have several large tables (and with a 200MB database, none of your tables are really large) that need reindexing at different intervals (say one needs it once a week and another once per month), then I'd stick to their individual schedules to avoid wasting resources.

Once you have a good reindex schedule, you can stop using SHOWCONTIG until you believe you need to re-analyze your reindex scheduling.

quote: One thing I forgot to mention: the same plan ran last week without any trouble. It's hard to believe that one week's worth of data could cause such a halt; something else must be involved too.

Well, you're not alone there. I just went through two weeks of half my maintenance plan backups failing. I finally dumped them all and wrote a regular job to do the backups (full weekly, differential daily) and haven't had any problems yet (not to mention the diffs run in a tenth of the time). Next I'll add the CHECKDB and DBREINDEX tasks.
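The job version of that schedule comes down to two BACKUP statements (database name and paths here are placeholders):

```sql
-- Weekly full backup
BACKUP DATABASE MyDatabase
    TO DISK = 'D:\Backups\MyDatabase_full.bak' WITH INIT

-- Daily differential (only pages changed since the last full)
BACKUP DATABASE MyDatabase
    TO DISK = 'D:\Backups\MyDatabase_diff.bak' WITH DIFFERENTIAL, INIT
```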

Alright, so it's decided: I am going to write a procedure and make a job of it. Now this quote brings me to another question: isn't CHECKDB enough? I mean, doesn't it perform all the functions of DBREINDEX and more? And how many of you here run CHECKDB, and how often? (Too many questions.) Anyhow, thanks for replying. Good day, Saad

DBCC CHECKDB and DBCC DBREINDEX are completely different. CHECKDB checks for corruption. DBREINDEX rebuilds indexes. They do not perform the same thing.

We run both every single night as a scheduled job. It is very important to run CHECKDB often so that you become aware of database corruption. I set up the job so that it pages me on failure. It's better to be paged at 2am when it occurs and get it fixed before the system is heavily used during the day than to wait to find out about it when the customer calls to complain of a problem.
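The nightly step can be as simple as this (the database name is a placeholder):

```sql
-- Full integrity check; NO_INFOMSGS keeps the output to real errors only.
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS
```

Put it in a SQL Agent job and set the job's notifications to page/email an operator on failure.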

Same situation with me; as a matter of fact, I just finished the reindex on my last SQL Server tonight. I had a maintenance plan that did the DBCC DBREINDEX; it would crash after 2 hours and leave an 8GB database with a 14GB transaction log.

Along those lines, should I be adding anything to the stored procedure other than the reindex statements for each table? Below is the current format for my REINDEX.

Please don't run DBCC SHRINKFILE or DBCC SHRINKDATABASE after DBCC DBREINDEX runs. It is such a waste of time. The next time you run DBCC DBREINDEX, the server is going to need that space back anyway, which will cause a performance hit, so let SQL Server keep it.

So you get a performance hit when you run DBCC SHRINKFILE or DBCC SHRINKDATABASE, then again when the files have to expand for DBCC DBREINDEX. Not worth it.

My recovery model is set to full. How much should my log file grow when I reindex, and does the condition of the database affect the amount of log growth caused by the reindex? I inherited this database from someone who said they were a DBA but didn't do anything to the server. It is terribly slow, and the hardware is more than adequate.

The log growth from DBCC DBREINDEX depends on the size of the indexes and on whether they are clustered or nonclustered. I don't think it can easily be calculated in advance. I would suggest running the command at night and checking the log size afterwards.

When I ran it last night, the log file grew by almost exactly the size of the database. I have another server with multiple accounting databases on it. If the same thing happens on that server, I will run out of drive space.

You probably have a clustered index on every table; rebuilding a clustered index effectively rewrites the whole table, which is why the log grew to the size of the database.

DBCC DBREINDEX should be run regularly. If you don't have the disk space to do it, add it! It is very important to run this command.

But you don't have to run it on ALL indexes in one shot! That happens when you use a maintenance plan (the lazy DBA approach) to do the work. Write out your own commands, then run them one at a time, perhaps with a BACKUP LOG in between the big indexes.
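For instance (table names and backup path here are illustrative):

```sql
DBCC DBREINDEX ('dbo.BigTable1', '', 90)

-- Back up the log between the big rebuilds so it doesn't balloon.
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'

DBCC DBREINDEX ('dbo.BigTable2', '', 90)
```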

Yeah, that would be the right diagnosis on every count. I wrote out the DBCCs, but ran them on all indexes and all tables. It sounds like I could run DBREINDEX on two or three of the big tables in each database, back up the logs, and then schedule a second job to do the other databases. There are four or five tables that are big and have multiple indexes, one of them clustered. The rest are smaller lookup tables.

I separate the large indexes so that they don't all run the same night. So let's say we've got seven indexes that are large: I would schedule each one to run on a separate night, or maybe twice per week. All of the other indexes get reindexed each night, but not the large ones. Your approach sounds fine, though; it's just that in my environment we can't afford the hit to the system for very long, so we spread it out over a couple of nights.