SQL performance tuning

Recently we had some trouble with a stored procedure that typically took between 1.0 and 1.5 seconds to run, with occasional runs of 8 or 9 seconds. The procedure is called by many applications against a heavily used database, as often as once or twice a second. It returns six recordsets with a combined total of 50 rows or fewer.

This stored procedure is moderately complex and structured as follows: temporary data structures are created and populated by successive calls to the database, and then the data is modified slightly and returned. The procedure accepts up to 16 input parameters, with one or two “signatures” predominant. For example, looking up records by Customer_ID was the most common use, but lookups were also possible by a set of hashed TINs, by first and last name, and so on. If null values were passed in for parameters, the procedure still proceeded in a top-down fashion with no logic to skip unnecessary queries. Each database call joined a dozen code tables, each using the NOLOCK query hint.
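To make the structure concrete, here is a minimal sketch of the shape of such a procedure. All names (procedure, tables, columns, parameters) are invented for illustration; the real procedure has up to 16 parameters and six result sets.

```sql
-- Hypothetical sketch; names are invented for illustration.
CREATE PROCEDURE dbo.usp_CustomerLookup
    @Customer_ID INT         = NULL,
    @HashedTIN   CHAR(64)    = NULL,
    @FirstName   VARCHAR(50) = NULL,
    @LastName    VARCHAR(50) = NULL
    /* ...up to 16 optional parameters in the real procedure... */
AS
BEGIN
    DECLARE @Found TABLE (CustomerID INT, FullName VARCHAR(101));

    -- Lookup by Customer_ID; runs even when @Customer_ID is NULL
    INSERT INTO @Found (CustomerID, FullName)
    SELECT c.CustomerID, c.FirstName + ' ' + c.LastName
    FROM dbo.Customer AS c WITH (NOLOCK)
    WHERE c.CustomerID = @Customer_ID;

    -- Lookup by hashed TIN; also runs unconditionally
    INSERT INTO @Found (CustomerID, FullName)
    SELECT c.CustomerID, c.FirstName + ' ' + c.LastName
    FROM dbo.Customer AS c WITH (NOLOCK)
    WHERE c.HashedTIN = @HashedTIN;

    SELECT CustomerID, FullName FROM @Found;  -- one of six recordsets
END
```

Note that both lookups execute regardless of which parameters were actually supplied, which is the flow problem revisited later in this post.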

The Query Plan

We began by examining the query plan. This is a representation of the physical steps SQL Server uses to provide the data you request in a query. A DBA can often use this plan to discover where SQL Server is spending its time and take corrective action. The plan can be expensive to create, so once calculated it is stored in SQL Server’s plan cache until it is evicted or manually cleared, or SQL Server is restarted. As long as the cached plan suits the query this is a great efficiency; however, the input parameters can significantly change the logic of the query and consequently its cost. Suppose the plan was first calculated for a lookup based on a list of hashed TINs and stored in the plan cache. If the next call is based on Customer_ID, the cached plan will be reused even though it may not be optimal for the new input parameters. This is known as a parameter sniffing problem and is discussed in more detail here.
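If you want to inspect the cached plan yourself, the dynamic management views make this straightforward. A sketch (the procedure name is hypothetical):

```sql
-- View the cached plan and how often it has been reused:
SELECT cp.usecounts, cp.cacheobjtype, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.objectid = OBJECT_ID('dbo.usp_CustomerLookup');

-- Marking the procedure for recompilation evicts the stale plan,
-- forcing a fresh compile on the next call:
EXEC sp_recompile 'dbo.usp_CustomerLookup';
```

A high `usecounts` value on a plan compiled for an atypical parameter set is one telltale sign of parameter sniffing trouble.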

We believed this was part of the reason for the poor performance. There are at least two ways to deal with a situation like this:

• Restructure the procedure into modules (sub-procedures) so each module has its own execution plan. This adds some maintenance complexity but guarantees that each logical path gets an efficient query plan.

• Recalculate the plan on every call using the current parameters. The drawback, of course, is that you pay the compilation cost each time.

In this case the plan is not overly complex, so it seemed worthwhile to recalculate it on each run by altering the procedure to use the WITH RECOMPILE option. This brought the average run time down to 550 ms, with no executions taking longer than two seconds.
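The change itself is a one-line addition to the procedure header. A sketch, again with hypothetical names:

```sql
ALTER PROCEDURE dbo.usp_CustomerLookup
    @Customer_ID INT      = NULL,
    @HashedTIN   CHAR(64) = NULL
WITH RECOMPILE   -- compile a fresh plan on every execution
AS
BEGIN
    SELECT c.CustomerID
    FROM dbo.Customer AS c
    WHERE c.CustomerID = @Customer_ID
       OR c.HashedTIN  = @HashedTIN;
END
```

If only one statement inside the procedure is sensitive to the parameters, a lighter alternative is appending OPTION (RECOMPILE) to that statement alone, so the rest of the procedure keeps its cached plan.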

NOLOCK Query Hints

SQL Server locks data during inserts and updates. These locks prevent other users from reading that same data until the transaction is complete, which can cause a query to take longer to run. The NOLOCK query hint tells the database engine to read the data even if a lock would normally prevent it. One danger in using this hint is that a query can return uncommitted, potentially incorrect data (a “dirty read”). Another more subtle and serious risk comes from second-guessing the query optimizer. The optimizer is a very sophisticated piece of code and is far better at choosing data-retrieval strategies than any single DBA can ever be. Query hints should always be a last resort, used only after detailed testing. See here for additional details.

This procedure used the NOLOCK hint 57 times, even on code tables that would almost never be modified. We removed these hints and saw a modest performance increase of 40 ms, bringing our average run time to 515 ms.
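The before-and-after shape of each join looked roughly like this (table names hypothetical):

```sql
-- Before: NOLOCK on every join, including a static code table
SELECT c.CustomerID, st.Description
FROM dbo.Customer  AS c  WITH (NOLOCK)
JOIN dbo.StatusCode AS st WITH (NOLOCK)
  ON st.StatusID = c.StatusID;

-- After: no hints; the optimizer manages locking itself
SELECT c.CustomerID, st.Description
FROM dbo.Customer  AS c
JOIN dbo.StatusCode AS st
  ON st.StatusID = c.StatusID;
```

For rarely modified code tables, the hint buys essentially nothing: there is almost never a writer holding a lock to skip past, so removing it costs little and restores read consistency.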

Temporary Data Structures

The next area of interest was the temporary data structures. The original developer chose table variables rather than temporary tables. One significant difference between these structures is that a table variable is commonly thought to reside only in memory, while a temporary table is written to tempdb, SQL Server’s internal working database. I say “thought to reside only in memory” because, while that is generally true, SQL Server will write table variables to tempdb if it sees fit, as it did in this case. There are cases where tempdb contention is a problem and choosing a table variable over a temporary table makes sense, usually when the amount of data you are storing is very small. Based on our experience in similar situations, we refactored the procedure to use temporary tables rather than table variables. This brought our average run time down to about 90 ms.
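The refactor itself is mostly mechanical. A sketch (column names hypothetical); one practical difference worth noting is that temporary tables carry column statistics, which table variables generally lack, so the optimizer can make better row-count estimates:

```sql
-- Before: table variable (no column statistics for the optimizer)
DECLARE @Work TABLE (CustomerID INT PRIMARY KEY, Score INT);

-- After: temporary table (lives in tempdb, has statistics,
-- and can be indexed after creation if needed)
CREATE TABLE #Work (CustomerID INT PRIMARY KEY, Score INT);

-- ...populate and query #Work exactly as @Work was used...

DROP TABLE #Work;  -- explicit cleanup; also dropped when the session ends
```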

Other Opportunities

So far the steps taken have all been very low risk, and no changes to the logical structure of the query were made. We believe additional improvements could be realized by altering the flow of the procedure to bring back less data on the intermediate database calls and to skip calls that are unnecessary for the parameters submitted. However, since performance went from 8 or 9 seconds down to a tenth of a second, we decided for now to leave well enough alone and revisit this later if necessary.

As you can see, SQL Server tuning is as much an art as a science. It involves observation, experimentation, and testing. Sometimes there is a single silver bullet; other times you take a more iterative approach, as we saw in this example.