The answers to all of these questions are the same: “It depends!” Of course, for full credit on the quiz, you would have had to explain in detail what each one depends on. I have found that many people want to say that everything on this list is bad, or at least something to be avoided whenever possible, but that isn’t true. Let’s take a look at the pros and cons of each of these items.

Locked Data

SQL Server locks data and releases those locks over and over, and you aren’t typically aware of it. Locking is necessary to protect the integrity of your data. It becomes a problem only when it causes excessive blocking.

Table Scans

Scanning a table can be a bad thing if you need to retrieve only a few rows from it. However, if you need 100,000 rows from a table with 1,000,000 rows, and the table is stored on only 5,000 pages, scanning those 5,000 pages is much more efficient than using an index and following a pointer to each of the 100,000 rows that satisfy your query.
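One way to see this trade-off for yourself is to compare logical reads with and without the index. A minimal sketch, assuming a hypothetical dbo.Customers table of about 5,000 pages and a hypothetical nonclustered index named IX_Customers_State:

```sql
SET STATISTICS IO ON;  -- report logical reads per statement

-- Likely a table (clustered index) scan: roughly 5,000 logical reads.
SELECT * FROM dbo.Customers WHERE State = 'WA';

-- Forcing the nonclustered index means a lookup per qualifying row,
-- so 100,000 matching rows can cost far more than 5,000 reads.
SELECT * FROM dbo.Customers WITH (INDEX(IX_Customers_State))
WHERE State = 'WA';
```

Running both versions and comparing the “logical reads” figures in the Messages tab shows why the optimizer often prefers the scan for low-selectivity predicates.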

Query Recompilation

Recompiling a query can be a bad thing if it’s done hundreds or thousands of times and the optimizer always comes up with the same plan. But if recompiling a query can produce a better plan that executes in a fraction of the time the original plan would execute, it can be a really good thing.
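When you decide a fresh plan is worth the compilation cost, SQL Server gives you ways to ask for one explicitly. A sketch using the built-in sp_recompile procedure and the OPTION (RECOMPILE) query hint (the table and procedure names here are hypothetical):

```sql
-- Mark a procedure so its cached plan is discarded and
-- a new plan is compiled on the next execution.
EXEC sp_recompile N'dbo.GetCustomersByState';

-- Or request a one-off recompile for a single statement.
SELECT CustomerID, Name
FROM dbo.Customers
WHERE State = 'NV'
OPTION (RECOMPILE);
```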

Parameter Sniffing

So imagine our million-row table contains information about our customers, and 60 percent of those customers live in the beautiful Pacific Northwest. The rest of the customers are spread over the rest of the United States, with a few more in other countries, but by far most are in Washington (WA), Oregon (OR), and Idaho (ID); in fact, let’s say 20 percent are in each of those three states. All other states and countries each have less than half a percent of the customers. If you build a nonclustered index on the state column of the customers table, do you want that index to be used when searching for all customers in a single state? Suppose we create a stored procedure called GetCustomersByState, to which you pass a two-character parameter indicating the state. Stored procedures typically reuse the same query plan over and over, so again, should the plan look up the customers using our nonclustered index?

Plans for stored procedures are created the first time we execute the procedure, and the plan is based on the parameter passed in during that first execution. Basing a plan on the original parameter values is what we call “parameter sniffing”: the original parameter is “sniffed,” and that value determines the plan. If we are looking for customers in Nevada (NV) the first time we call the procedure, the optimizer will probably come up with a plan that uses the nonclustered index on state because there are few Nevada customers. If we’re looking for Massachusetts (MA) the next time we call the procedure, and for New York (NY) the time after that, those subsequent executions will reuse the original plan and look for only a few rows. The plan should perform well, and we can say that parameter sniffing is a good thing! But if someone then decides to look for all the customers in Washington (WA), the stored procedure will reuse its saved plan and use the nonclustered index to retrieve 200,000 of the rows in the table, and we’ll say that parameter sniffing is a horrible thing!
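The article doesn’t show the procedure itself, but a minimal sketch of what GetCustomersByState might look like (the column names are assumptions) makes the sniffing behavior concrete:

```sql
CREATE PROCEDURE dbo.GetCustomersByState
    @State char(2)
AS
BEGIN
    -- The plan compiled on the first execution is based on the
    -- first @State value passed in ("sniffed") and is then reused
    -- by all subsequent executions, whatever @State they pass.
    SELECT CustomerID, Name, City, State
    FROM dbo.Customers
    WHERE State = @State;
END;
```

If the first call is EXEC dbo.GetCustomersByState @State = 'NV', the cached plan will likely seek the nonclustered index; if the first call passes 'WA', the cached plan will likely scan, and every later call inherits that choice.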

Is Parameter Sniffing Good or Bad?

So which is it? Is parameter sniffing good or bad? It’s good when the saved plan is good and it’s bad when the saved plan is bad. Parameter sniffing itself is neither good nor bad. It’s SQL Server’s default behavior for stored procedures (and certain other classes of queries, which I don’t have room to discuss here). The trick is being able to tell when parameter sniffing is good and when it’s bad. To know that, you’ll need to make sure you learn about indexing best practices, you understand how to interpret your query plans, and you know how to measure query performance.
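One place to start measuring is the plan cache itself. A sketch using the documented sys.dm_exec_query_stats and sys.dm_exec_sql_text DMVs to find the cached statements consuming the most CPU:

```sql
-- Top 10 cached statements by total CPU (worker) time.
SELECT TOP (10)
    qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

A procedure whose average cost swings wildly between executions with different parameters is a classic hint that a sniffed plan is the culprit.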

Parameter Sniffing Workaround

What can you do when you’re having problems with parameter sniffing? That’s also a big topic, but one possible solution is to use the WITH RECOMPILE option in your stored procedure definition so that SQL Server will compile a new plan on every execution based on the current parameters. In that case, you’re changing the default behavior and SQL Server won’t do parameter sniffing.
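Applied to our example procedure, the option goes in the definition. A sketch (column names are assumptions):

```sql
CREATE PROCEDURE dbo.GetCustomersByState
    @State char(2)
WITH RECOMPILE  -- compile a fresh plan on every execution; no plan is cached
AS
BEGIN
    SELECT CustomerID, Name, City, State
    FROM dbo.Customers
    WHERE State = @State;
END;
```

A finer-grained alternative is to leave the procedure definition alone and add OPTION (RECOMPILE) to just the parameter-sensitive statement, so only that statement pays the recompilation cost.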

I think that because support people will often ask “Are you having problems with parameter sniffing?” people start to associate “parameter sniffing” with “problems” and assume that it’s a bad thing. If your doctor asks “Are you having problems with digestion?” do you assume that digestion is a bad thing? No. Just like parameter sniffing, it can cause problems, but it isn’t a problem in itself. It’s just what happens.
