Database Administrators Stack Exchange is a question and answer site for database professionals.

I have inherited a large, slow stored procedure, and it's giving me nightmares.

I'm not a DBA, though I have some knowledge, and I don't know where to start identifying the bottleneck.

I have SQL Server 2008 installed on my desktop with an exact copy of the production database. I am running everything from SSMS, and I have tried both straight SQL and the SP. The SP and the straight SQL take close enough to the same time not to worry about; it's the local vs. server difference that concerns me.

When I run it against the server, it takes around 14 hours to complete!

[Total inserts approx = 14,000,000]

I have tried to tune the SP as much as I can and have avoided parameter sniffing, but I'm not sure why it should be so much slower on a broadly equivalent machine. (I know the SQL Server versions are slightly different, but I can't see why that would make such a large difference.)

I know the SP isn't fantastic, but if it runs in 10 minutes locally, then it's not total pants! Even if it took 60 seconds on the server, that would still be fine.

The server has a pretty low load when these SPs are not running, and around 50% CPU while they are (one core at 100%?). Locally, CPU never gets above 25%.

I've broken the SP down so it is called a number of times, in the hope that might help. It did locally, but not by much, and it made no difference on the server.

1 Answer

It's definitely ugly - you could remove the cursor, which is likely a significant culprit in the general slowness, and create a more set-based solution. But whether that's really a good idea depends on your general comfort with refactoring.
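As a sketch of what "set-based" means here (the table and column names below are placeholders, since the original procedure isn't shown), the cursor pattern of one INSERT per customer can usually collapse into a single aggregate statement:

```
-- Hypothetical names: dbo.Customers, dbo.Orders, dbo.OrderTotals.

-- Cursor version: one INSERT per customer (slow, row by row).
DECLARE @CusNum int;
DECLARE cur CURSOR FOR SELECT CusNum FROM dbo.Customers;
OPEN cur;
FETCH NEXT FROM cur INTO @CusNum;
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO dbo.OrderTotals (CusNum, Total)
    SELECT @CusNum, SUM(Amount)
    FROM dbo.Orders
    WHERE CusNum = @CusNum;

    FETCH NEXT FROM cur INTO @CusNum;
END
CLOSE cur;
DEALLOCATE cur;

-- Set-based version: all customers in one statement.
INSERT INTO dbo.OrderTotals (CusNum, Total)
SELECT CusNum, SUM(Amount)
FROM dbo.Orders
GROUP BY CusNum;
```

The set-based form gives the optimizer one statement to plan for 14 million rows, instead of millions of tiny plans executed in a loop.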

But there are a number of things about servers you could investigate.

For example - does the database on the server have "AutoShrink" turned on? Is tempdb on a particularly slow disk?
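You can check the AutoShrink setting from the `sys.databases` catalog view (substitute your own database name):

```
-- Is AutoShrink on? (1 = on)
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE name = N'YourDatabase';  -- placeholder name

-- If it is on, turning it off is almost always the right call:
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;
```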

You could compare the execution plans on the two servers to see if there's a massive difference between one and the other. One difficulty with cursors is that a plan can be worked out on an early iteration and turn out to be a bad plan for later iterations. But I assume you have the same data on the two boxes, so this shouldn't really be a factor.
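One way to compare the work done on each machine (procedure name is a placeholder) is to run the same call on both with statistics enabled and diff the output, or capture the plan as XML:

```
-- Per-statement reads and timings, printed to the Messages tab:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.YourProcedure;  -- placeholder name

-- Or capture the estimated plan as XML without executing anything
-- (SHOWPLAN_XML must be the only statement in its batch):
SET SHOWPLAN_XML ON;
GO
EXEC dbo.YourProcedure;
GO
SET SHOWPLAN_XML OFF;
GO
```

In SSMS you can also just enable "Include Actual Execution Plan" for the session and compare the graphical plans side by side.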

I'd be checking the other conditions on the server, such as disk, and other things which may be running. If your files keep growing and shrinking (for example), then this could be bad news for any query that's running there.

Will check the AutoShrink setting. You say "tempdb"... I don't reference any other DBs, so is this where all temp tables are stored? Is it better to create a permanent table to hold my temp data and truncate it afterwards? I can re-factor the cursor into a "while" loop and re-test. Thx
– BlueChippy Sep 24 '12 at 5:08

No - a while loop is essentially the same as a cursor. Try to get all the rows in one fell swoop, instead of doing it one customer at a time. TempDB is likely to be used to store the intermediate results of your cursor, so it's important that it's not a bottleneck. Also, I wouldn't put the database in SINGLE_USER mode. Removing that won't make it faster, but you shouldn't turn that setting on without an excellent reason.
– Rob Farley Sep 24 '12 at 5:13

"Try to get all the rows in one fell swoop" - do you mean re-write to "join" my temp table with my results for all rows?
– BlueChippy Sep 24 '12 at 5:16

Yes - that's exactly what I mean. You can wrap the UNION ALL bits in parentheses, and join it to the CusNum field. But you don't really need that table variable at all - you could do it all in the one query, which could well run significantly quicker (and certainly be more tunable, since it doesn't have a reliance on so much programmatic code).
– Rob Farley Sep 24 '12 at 5:24
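The shape being suggested, sketched with placeholder names (the original query isn't shown, so only `CusNum` comes from the thread): wrap the UNION ALL branches in a derived table and join it once, rather than staging rows in a table variable:

```
-- Hypothetical: dbo.Customers, dbo.SourceA/B, dbo.Results are placeholders.
INSERT INTO dbo.Results (CusNum, Val)
SELECT c.CusNum, u.Val
FROM dbo.Customers AS c
JOIN (
        SELECT CusNum, Val FROM dbo.SourceA
        UNION ALL
        SELECT CusNum, Val FROM dbo.SourceB
     ) AS u
  ON u.CusNum = c.CusNum;
```

Because everything is one statement, the optimizer can choose join order and parallelism across the whole workload, which it cannot do when rows flow through a table variable one iteration at a time.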