Boost Performance with the Index Tuning Wizard

Have you created the optimal indexes for your database? How about the best clustered index for a table? Have you determined which indexes can improve query performance? One of the toughest issues a DBA faces is choosing the best clustered index for a primary table in the database. Choosing the best nonclustered indexes is no picnic either, requiring you to consider data distribution, the various techniques the query optimizer uses to devise an efficient plan, and how many selects vs. modifications users perform against the database so you don't create too many indexes.

Wouldn't it be nice to have a tool that could analyze queries run against your database and recommend which indexes to create? You're in luck. SQL Server 7.0 developers already have SQL Profiler, a valuable tool for tracing queries run against the database. You can save Profiler's output to a file, a table, or an SQL script. You can then analyze this output with another SQL Server 7.0 tool, the Index Tuning Wizard (ITW), which recommends which indexes to build.

To ensure the ITW gives you efficient recommendations, you must trace queries from a typical period of activity on the system—not a period of unusually high or low activity nor a period of unusual types of activities. You also must decide how long Profiler should trace queries. For example, you might need to run a Profiler trace for only a couple of hours to get an accurate representation of system activity. Or you might have to run the trace for a couple of days or more to capture activity that changes throughout the day or over several days.

The Tour

To see how to use the ITW, let's look at a simple example of an Employees table in the HR database, as Listing 1 shows. The Employees table has 1000 rows, and because each row consumes a whole page, the table consumes 1000 data pages.
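Listing 1 isn't reproduced in this excerpt, but a minimal sketch of such a table might look like the following. The database name comes from the article; the column names, widths, and the filler column are assumptions, with the wide filler forcing each row onto its own 8KB page:

```sql
-- Hypothetical sketch of the Employees table (Listing 1 is not shown here;
-- column names and widths are assumptions)
CREATE DATABASE HR
GO
USE HR
GO
CREATE TABLE Employees (
  empid     int         NOT NULL,
  lastname  varchar(30) NOT NULL,
  firstname varchar(30) NOT NULL,
  salary    money       NOT NULL,
  filler    char(7000)  NOT NULL DEFAULT 'a' -- wide filler: one row per 8KB page
)
GO
```

With 1000 rows inserted, each row occupies its own page, giving the 1000 data pages the example relies on.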

Now let's check the performance of a few queries run against the HR database. First, turn on STATISTICS IO to count logical reads, as Listing 2 shows. Notice that every query uses 1000 logical reads because, right now, the only way to perform the query is with a full-table scan.
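The specific queries in Listing 2 aren't reproduced here; a representative sketch, assuming a simple lookup against the Employees table, looks like this:

```sql
-- Turn on logical-read counting for the session, then run a query
-- (this WHERE clause is illustrative, not the actual Listing 2)
SET STATISTICS IO ON
GO
SELECT empid, lastname
FROM Employees
WHERE empid = 42
-- With no usable index, the Messages pane reports a full-table scan:
-- Table 'Employees'. Scan count 1, logical reads 1000, ...
```

Because the table spans 1000 pages and no index exists, every query must read all 1000 pages.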

Next, you can create a trace in SQL Profiler for the ITW to analyze. Profiler's default trace properties, which include the events SQL:BatchCompleted and RPC:Completed in the TSQL category and the data columns EventClass and Text, are sufficient for the ITW. After you name the trace, you need to specify that Profiler send its output to a file.

You also need to ensure the trace doesn't generate unnecessary output by setting a filter to capture only queries run against the HR database. To set up this filter, execute SELECT DB_ID('HR') to retrieve the database ID, and use this value as the database ID filter. (See "Trace That Event with SQL Profiler" for more information about defining traces, including setting filters.) To start the trace, click OK in the Trace Properties dialog box, rerun all the previous queries, and stop the trace. Now that you have Profiler's trace output, you can launch the ITW. (Note that the ITW can also use an SQL script, such as the script in Listing 2, for analysis.)
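The filter value comes from a one-line query:

```sql
-- Returns the numeric database ID of HR; use the result as the
-- Database ID filter value in the Trace Properties dialog box
SELECT DB_ID('HR')
```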

You can launch the ITW from Profiler under the Tools menu or from SQL Server Enterprise Manager (choose Wizards from the Tools menu, open Management, and select the Index Tuning Wizard). After the welcome screen, you'll see a screen that asks for a server and database name, as Screen 1 shows. This screen also asks whether you want to keep all existing indexes. If you clear this option, the ITW might recommend dropping or replacing existing indexes. Checking Perform thorough analysis on this screen instructs the ITW to perform an exhaustive analysis of column and index combinations, which can result in a more optimal index recommendation but which can take a long time and stress your server. If you want to use the ITW to analyze a heavily loaded server or to perform a thorough analysis of column and index options, consider running the ITW on a test server instead of your production server.

The next dialog box, as Screen 2 shows, asks you to identify the workload you want the ITW to analyze. Selecting the first option (I have a saved workload file) lets you continue to the next screen to supply details. But if you select the alternative option (I will create a workload file on my own) and click Next, the ITW doesn't load Profiler. Instead, the ITW, which needs input to perform its analysis, assumes you need to create a workload file, so it simply ends. Screen 3 shows the dialog box that appears when you select I have a saved workload file. You can select input of an .sql script file, a Profiler trace file, or a Profiler trace table. Clicking the Advanced Options button takes you to the dialog box in Screen 4, which lets you choose:

Maximum queries to tune: The number you specify here tells the ITW to ignore everything in the workload after the specified number of queries (the ITW counts queries, not trace events).

Maximum space for the recommended indexes (MB): This value is the maximum total space, in MB, that all indexes will occupy. Note that it includes existing indexes if you selected Keep all existing indexes in the first dialog box.

Maximum columns per index: You use this option to restrict the maximum width of a composite index; the default is 16, which is also the maximum number of columns SQL Server allows in an index.

Time to Tune

The next dialog box, as Screen 5 shows, lets you choose the tables you want the ITW to tune. In our case, we're tuning only the Employees table. On a heavily loaded system, choosing to tune only a given set of tables at a time can save you time and help you focus on a specific area of the database.

Based on the tables you selected, the ITW returns its index recommendations, as Screen 6 shows. The recommendations tell you whether the recommended index already exists, whether it's a clustered or a nonclustered index, the table on which the ITW will create the index, and the name the ITW will give the index and participating columns. You can't change the index properties from this dialog box, but you can change them in the script the ITW generates. For example, you might want to change index names, which consist of a nonintuitive table name and an ordinal number.
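The generated script contains plain CREATE INDEX statements you can edit before running. The statements below are illustrative, not actual ITW output; the column and the replacement name are assumptions:

```sql
-- An ITW-generated statement typically uses a table name plus an
-- ordinal number (hypothetical example):
CREATE NONCLUSTERED INDEX [Employees3] ON [dbo].[Employees] ([lastname])

-- Before running the script, you can substitute a more meaningful name:
CREATE NONCLUSTERED INDEX [idx_nc_Employees_lastname]
  ON [dbo].[Employees] ([lastname])
```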

Clicking the Analysis button on this screen brings up a dialog box that lets you generate (and save to text files) the following six informative reports about your queries and the recommended and existing indexes:

Index Usage Report (recommended configuration) shows the percentage of queries in the workload that will use each index and the amount of space that each index will occupy.

Index Usage Report (current configuration) shows the same type of information as the previous report, but for your current index usage. This information is difficult to find in SQL Server versions before 7.0. Even if you aren't interested in its index recommendations, you might want to run the ITW just to get this information.

Table Analysis Report shows the current cost (based on the number of logical reads as well as CPU time and memory usage) of queries that accessed the specified table and the costs that will result from applying the recommended indexes. Table 1 shows a sample Table Analysis Report of current cost and estimated cost after applying the ITW's recommendations. The total cost of queries accessing Table1 is 30 percent of the cost of all queries in a certain workload, and the total cost of the queries accessing Table2 is 70 percent of that workload. After applying the recommended indexes, the same queries accessing Table1 will result in 5 percent of the total cost of the workload, and the same queries accessing Table2 will result in 8 percent of the total cost for that workload.

Query Cost Report (as Screen 7 shows) lists a workload's queries and the percentage of improvement in their performance.

Workload Analysis Report shows the number of queries per workload for each category (SELECT, INSERT, UPDATE, DELETE) and their impact on performance compared to the cost of the most expensive query in the current configuration. This report compares the current number of queries to an estimated number after applying the index recommendations.

Tuning Summary Report summarizes the number of tables, indexes, and queries that the ITW tuned.

You're now ready to choose whether to apply the ITW's recommended indexes. Screen 8 shows the dialog box that lets you apply the indexes immediately or as a scheduled job. You can also choose to save the recommendations to a script file. I usually save the recommendations to a script file instead of applying them immediately so I can give the indexes more meaningful names.

At this dialog box, be sure to click Next and then Finish, even if you only want to save the changes to a script file. The button beside the Save script file box doesn't create the script file; it simply lets you supply a name for it. Screen 9 shows the script file the ITW created for our example.

Check the Results

After applying the ITW's recommended indexes, make sure you check the results. To check query performance using the new indexes, you can use Profiler's replay function to rerun the queries you originally traced. ("Problem-Solving with SQL Profiler," explains how to replay a trace.) Although the new indexes might have your system running smoothly, be sure to run the ITW regularly to tune your queries as your system changes over time.
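One quick sanity check is to rerun a traced query with STATISTICS IO on and compare the logical-read counts; the query below is illustrative:

```sql
-- After applying the recommended indexes, confirm the drop in logical reads
SET STATISTICS IO ON
GO
SELECT empid, lastname
FROM Employees
WHERE empid = 42
-- With a suitable index, logical reads should fall from 1000
-- (the full-table scan) to just a few page reads per query
```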

Discuss this Article

Simon Letts

on Nov 23, 2000

Yes, it does appear to be a good tool, but what if you have a database that, due to a poor initial installation, has a clustered index on every table and, in the case of a couple of DBs I now look after, more than 10 indexes on some tables? The ITW says it will give 0% improvement, so I assume it considers clustered indexes always OK and doesn't assess whether a table is over-indexed?

I've found the same thing: the index tuning wizard only suggests indexes to add. I ran a trace table of update/insert activity that had bogged down due to excessive indexing. The wizard suggested that I add 2 more indexes!
