In a previous post, Collecting Historical Wait Statistics, I discussed how you can easily collect historical wait stats by using the DMV sys.dm_os_wait_stats. Today, I'd like to cover the same concept, but this time collect historical IO file stats from the DMV sys.dm_io_virtual_file_stats. However, I wanted to improve on the code to make it even easier to implement.

The data collection process is still implemented the same way. First, we'll need to create a history table to store the data. The data is stored in time slices with the cumulative values as well as the difference (TimeDiff_ms, NumOfReadsDiff, NumOfWritesDiff, etc.) in those values since the last collection time.
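As a sketch, the history table might look something like this; the table and column names here are illustrative, not the exact schema from the script:

```sql
-- Hypothetical history table for file stats collections (illustrative schema)
CREATE TABLE dbo.FileStatsHistory
(
    CollectionTime      DATETIME NOT NULL
    ,SQLServerStartTime DATETIME NOT NULL
    ,DatabaseName       SYSNAME  NOT NULL
    ,FileID             INT      NOT NULL
    ,NumOfReads         BIGINT   NOT NULL
    ,NumOfWrites        BIGINT   NOT NULL
    ,NumOfBytesRead     BIGINT   NOT NULL
    ,NumOfBytesWritten  BIGINT   NOT NULL
    ,IOStall_ms         BIGINT   NOT NULL
    -- diff columns hold the change since the previous collection
    ,TimeDiff_ms        BIGINT   NOT NULL
    ,NumOfReadsDiff     BIGINT   NOT NULL
    ,NumOfWritesDiff    BIGINT   NOT NULL
);
GO
```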

The next step will be to get the start time of SQL Server, so we can compare it to the previous collection. If the start times are different, then SQL Server was restarted and all values in the DMV were reset back to zero, so we must take that into account when calculating the diff values. In that case, the diff values are actually the same as the current counter values, because this is the first collection after a restart.

You may notice the DATEDIFF is using “seconds” instead of “milliseconds”. This is because DATEDIFF only returns an INT value. The largest number it can return is equal to about 24 days before it hits an arithmetic overflow error. By converting it to seconds, we can avoid that error. All of the following data collections will do a DATEDIFF using milliseconds.
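For example, a sketch of the uptime calculation done in seconds; sys.dm_os_sys_info exposes the sqlserver_start_time column on SQL Server 2008 and later:

```sql
-- DATEDIFF returns an INT, so DATEDIFF(MILLISECOND, ...) overflows after
-- roughly 24.8 days of uptime. Calculating the difference in seconds and
-- then converting to BIGINT milliseconds avoids the arithmetic overflow.
SELECT CAST(DATEDIFF(SECOND, sqlserver_start_time, GETDATE()) AS BIGINT) * 1000
    AS 'UpTime_ms'
FROM sys.dm_os_sys_info;
GO
```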

If the current start time is the same as the previous collection, then we’ll grab the difference in values and insert those into the history table.
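A rough sketch of that insert, simplified to a couple of counters and assuming the hypothetical history table dbo.FileStatsHistory:

```sql
-- Rough sketch: compute per-file diffs against the most recent prior collection
-- (dbo.FileStatsHistory and its columns are illustrative names)
INSERT dbo.FileStatsHistory
    (CollectionTime, DatabaseName, FileID, NumOfReads, NumOfWrites,
     NumOfReadsDiff, NumOfWritesDiff)
SELECT
    GETDATE()
    ,DB_NAME(vfs.database_id)
    ,vfs.file_id
    ,vfs.num_of_reads
    ,vfs.num_of_writes
    ,vfs.num_of_reads  - h.NumOfReads   -- change since last collection
    ,vfs.num_of_writes - h.NumOfWrites
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN dbo.FileStatsHistory h
    ON h.DatabaseName = DB_NAME(vfs.database_id)
    AND h.FileID = vfs.file_id
    AND h.CollectionTime = (SELECT MAX(CollectionTime) FROM dbo.FileStatsHistory);
GO
```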

At this point, we're through collecting the raw data. However, as I mentioned earlier, I added a lot of functionality into this script. The script is actually a stored procedure that can run all of this code for you, including creation of the history table, data collection, historical data purging, and finally reporting.

The stored procedure has 5 input parameters.

@Database – This is used to specify a single database, a list of databases, or a wildcard.

Finally, you can copy this report data into Excel and generate some easy to read charts.

From the chart, we can see there was a spike in write latency between 3PM and 4PM for tempdb. If we collect this data over time and identify a similar spike each day then we’d want to investigate further to find out what is causing it. But that can only be done if you’re collecting these metrics and storing them for historical analysis. Hopefully, this stored procedure will help you be more proactive in collecting performance metrics for each of your servers.

As a DBA, I’m sure you’ve heard many times to always check the sys.dm_os_wait_stats DMV to help diagnose performance issues on your server. The DMV returns information about specific resources SQL Server had to wait for while processing queries. The counters in the DMV are cumulative since the last time SQL Server was started and the counters can only be reset by a service restart or by using a DBCC command. Since DMVs don’t persist their data beyond a service restart, we need to come up with a way to collect this data and be able to run trending reports over time.

Collecting the data seems easy enough by simply selecting all rows into a permanent table. However, that raw data won’t help us determine the time in which a particular wait type occurred. Think about it for a minute. If the raw data for the counters is cumulative, then how can you tell if a bunch of waits occurred within a span of a few minutes or if they occurred slowly over the past 6 months that SQL Server has been running. This is where we need to collect the data in increments.

First, we need to create a history table to store the data. The table will store the wait stat values as well as the difference (TimeDiff_ms, WaitingTasksCountDiff, WaitTimeDiff_ms, SignalWaitTimeDiff_ms) in those values between collection times.
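Here is a sketch of such a table; the names, the primary key, and the page compression setting are illustrative assumptions, not the exact script:

```sql
-- Illustrative history table for wait stats collections
CREATE TABLE dbo.WaitStatsHistory
(
    SQLServerStartTime      DATETIME NOT NULL
    ,CollectionTime         DATETIME NOT NULL
    ,WaitType               NVARCHAR(60) NOT NULL
    ,WaitingTasksCount      BIGINT NOT NULL
    ,WaitTime_ms            BIGINT NOT NULL
    ,SignalWaitTime_ms      BIGINT NOT NULL
    -- diff columns hold the change since the previous collection
    ,TimeDiff_ms            BIGINT NOT NULL
    ,WaitingTasksCountDiff  BIGINT NOT NULL
    ,WaitTimeDiff_ms        BIGINT NOT NULL
    ,SignalWaitTimeDiff_ms  BIGINT NOT NULL
    ,CONSTRAINT PK_WaitStatsHistory PRIMARY KEY CLUSTERED (CollectionTime, WaitType)
)
WITH (DATA_COMPRESSION = PAGE);
GO
```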

Next, we need to get a couple of timestamps when we collect each sample. The first will be the SQL Server start time. We need the SQL Server start time, so we can identify when the service was restarted.
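One place to get the start time is the sys.dm_os_sys_info DMV (SQL Server 2008 and later):

```sql
-- Returns the date and time the SQL Server service last started
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;
GO
```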

The last timestamp is the collection time. We’ll also use this timestamp to calculate the difference in wait stat values between each collection.

SELECT GETDATE() AS 'CollectionTime', * FROM sys.dm_os_wait_stats;
GO

We need to compare the current SQL Server start time to the previous start time from the history table. If they don’t equal, then we assume the server was restarted and insert “starter” values. I call them starter values, because we just collect the current wait stat values and insert 0 for each of the diff columns.
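A sketch of the starter-value insert, again assuming the illustrative dbo.WaitStatsHistory table:

```sql
-- Sketch: the first collection after a restart inserts the current wait stat
-- values with 0 for every diff column (dbo.WaitStatsHistory is illustrative)
INSERT dbo.WaitStatsHistory
    (SQLServerStartTime, CollectionTime, WaitType, WaitingTasksCount,
     WaitTime_ms, SignalWaitTime_ms, TimeDiff_ms, WaitingTasksCountDiff,
     WaitTimeDiff_ms, SignalWaitTimeDiff_ms)
SELECT
    si.sqlserver_start_time
    ,GETDATE()
    ,ws.wait_type
    ,ws.waiting_tasks_count
    ,ws.wait_time_ms
    ,ws.signal_wait_time_ms
    ,0, 0, 0, 0   -- starter values: no previous collection to diff against
FROM sys.dm_os_wait_stats ws
CROSS JOIN sys.dm_os_sys_info si;
GO
```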

You could filter the collection to only the specific wait stat counters you want to track with a simple WHERE clause, but I prefer to collect them all and filter at the reporting end.
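For example, a reporting query might aggregate the diff columns over a recent window (illustrative table and column names again):

```sql
-- Top waits by accumulated wait time over the last day (illustrative names)
SELECT
    WaitType
    ,SUM(WaitTimeDiff_ms) AS TotalWaitTime_ms
FROM dbo.WaitStatsHistory
WHERE CollectionTime >= DATEADD(DAY, -1, GETDATE())
GROUP BY WaitType
ORDER BY TotalWaitTime_ms DESC;
GO
```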

At this point, we’re ready to schedule the job. The script could be run at any interval, but I usually leave it to collect data once a day. If I notice a spike in a specific wait stat counter, then I could easily increase the job frequency to once every few hours or even once an hour. Having those smaller, more granular data samples will allow us to isolate which time frame we need to concentrate on.

For example, if we notice the CXPACKET wait suddenly spikes when collecting the data each day, then we could schedule the collection every hour to see if it’s happening during a specific window.

Finally, we can use Excel to format this raw data into an easy to read chart.

From this chart, we can see at 5PM there was a spike in CXPACKET waits, but a low number of tasks that waited. In this case, I would assume there is a single process running in parallel that caused these waits, and from there I could dig further into finding the individual query.

Data compression is enabled on this table to help keep it small. You can easily turn it off by removing WITH (DATA_COMPRESSION = PAGE) from the CREATE TABLE statement. However, with page compression enabled, 24 collections (one per hour) only take up 775KB of space. Without compression, the same sample of data consumes about 2.2MB. If you plan to keep a lot of history, then it's best to leave page compression enabled.

Hopefully, this script will help you keep track of your historical wait statistics, so you can have better knowledge of what has happened to your environment over time. The entire script is posted below. If you want to read further into what each wait statistic means, then check out Paul Randal's article about wait stats. Additionally, if you want more info on using DMVs, check out Glenn Berry's diagnostic queries.

Sometimes rapid code development doesn't produce the most efficient code. Take the age-old line of code SELECT COUNT(*) FROM MyTable. Obviously this will give you the row count for a table, but at what cost? Counting rows this way will ultimately result in a full table scan or clustered index scan.

USE AdventureWorksDW2012;
GO

SELECT COUNT(*) FROM dbo.FactProductInventory;
GO

Turning STATISTICS IO on reveals 5,753 logical reads just to return the row count of 776,286.

Starting with SQL Server 2005, Microsoft introduced a DMV, sys.dm_db_partition_stats, that provides you with the same information at a fraction of the cost. It requires a little more coding, but once you turn on STATISTICS IO, you will see the performance benefit.

USE AdventureWorksDW2012;
GO

SELECT
    s.name AS 'SchemaName'
    ,o.name AS 'TableName'
    ,SUM(p.row_count) AS 'RowCount'
FROM sys.dm_db_partition_stats p
JOIN sys.objects o ON o.object_id = p.object_id
JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE p.index_id < 2
    AND o.type = 'U'
    AND s.name = 'dbo'
    AND o.name = 'FactProductInventory'
GROUP BY s.name, o.name
ORDER BY s.name, o.name;
GO

Since we're querying a DMV, we never touch the base table. We can see here we only need 16 logical reads to return the same row count of 776,286, and the FactProductInventory table is nowhere in our execution plan.

By using the DMV, we have improved the query performance and reduced the total I/O count by nearly 100%. Another benefit of using the DMV is that we won't take locks on the base table, and therefore we'll avoid the possibility of blocking other queries hitting that table.

This is just one simple example of how you can easily improve the performance of an application.

Just for the record, this happens to be one of my favorite interview questions to ask candidates.

At some point in time, there will be a database containing tables without clustered indexes (heaps) that you will be responsible for maintaining. I personally believe that every table should have a clustered index, but sometimes my advice is not followed. Additionally, there can be databases from a 3rd party vendor that have this same design. Depending on what those heap tables are used for, over time it's possible they can become highly fragmented and degrade query performance. A fragmented heap is just as bad as a fragmented index. To resolve this issue, I'd like to cover four ways we can defragment a heap.

To start with, we will need a sample database with a highly fragmented heap table. You can download the FRAG database (SQL2012) from here. Let’s use the sys.dm_db_index_physical_stats DMV to check the fragmentation level.

USE FRAG;
GO

SELECT
    index_id
    ,index_type_desc
    ,index_depth
    ,index_level
    ,avg_fragmentation_in_percent
    ,fragment_count
    ,page_count
    ,record_count
FROM sys.dm_db_index_physical_stats(
    DB_ID('FRAG')
    ,OBJECT_ID('MyTable')
    ,NULL
    ,NULL
    ,'DETAILED');
GO

As you can see, the heap is 93% fragmented, and both non-clustered indexes are 99% fragmented. So now we know what we’re dealing with.

Option 1 is the easiest and most optimal way to remove heap fragmentation; however, it was only introduced in SQL Server 2008, so it's not available in all versions. It's a single command that rebuilds the heap as well as all of its associated nonclustered indexes.

ALTER TABLE dbo.MyTable REBUILD;
GO

Option 2 is almost as quick, but involves a little bit of planning. You will need to select a column to create the clustered index on, keeping in mind this will reorder the entire table by that key. Once the clustered index has been created, immediately drop it.

CREATE CLUSTERED INDEX cluIdx1 ON dbo.MyTable (col1);
GO

DROP INDEX cluIdx1 ON dbo.MyTable;
GO

Option 3 requires manually moving all data to a new temporary table. This option is an offline operation and should be done during off-hours. First you will need to create a new temporary table with the same structure as the heap, and then copy all rows to the new temporary table.

CREATE TABLE dbo.MyTable_Temp (col1 INT, col2 INT);
GO

INSERT dbo.MyTable_Temp
SELECT * FROM dbo.MyTable;
GO

Next, drop the old table, rename the temporary table to the original name, and then create the original non-clustered indexes.

DROP TABLE dbo.MyTable;
GO

EXEC sp_rename 'MyTable_Temp', 'MyTable';
GO

CREATE NONCLUSTERED INDEX idx1 ON dbo.MyTable (col1);
GO

CREATE NONCLUSTERED INDEX idx2 ON dbo.MyTable (col2);
GO

Option 4 is by far the least efficient way to complete this task. Just like option 3, this option is an offline operation and should be done during off-hours. First we need to use the BCP utility to bulk copy out all of the data to a data file. Using BCP will require a format file to define the structure of what we're bulk copying. In this example, I am using an XML format file. More information on format files can be found here.
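The bcp calls might look something like the following; the server name, file paths, and format file name are all placeholders for your own environment:

```shell
# Generate an XML format file describing the table's structure
# (-x = XML format file, -n = native data types, -T = trusted connection)
bcp FRAG.dbo.MyTable format nul -x -f MyTable.xml -n -T -S MyServer

# Bulk copy all of the data out to a data file using the format file
bcp FRAG.dbo.MyTable out MyTable.dat -f MyTable.xml -T -S MyServer

# After truncating (or dropping and recreating) the table,
# bulk copy the data back in
bcp FRAG.dbo.MyTable in MyTable.dat -f MyTable.xml -T -S MyServer
```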

Options 1 and 2 do not require any downtime for the table; however, they will cause blocking during the rebuild. You can use the WITH (ONLINE = ON) option, but that will require enough free space in tempdb for the entire table. Options 3 and 4 will require downtime and will potentially impact any foreign key constraints or other dependent objects. If you're running SQL Server 2008 or higher, I highly recommend option 1.

As you’ve seen, there are multiple ways of dealing with heap fragmentation. However, the best way is to avoid heaps altogether in your database design.

Last week I ran across a blog post by Axel Achten (B|T) that outlined a few reasons why you should not use SELECT * in queries. In the post, Axel used the SQLQueryStress tool by Adam Machanic (B|T) to stress-test a simple query using SELECT * and SELECT col1, col2,... This gave me an idea to use the same SQLQueryStress tool to benchmark a stored procedure that’s prefixed with sp_.

All DBAs know, or should know, you should not prefix stored procedures with sp_. Even Microsoft mentions the sp_ prefix is reserved for system stored procedures in Books Online.

I'm not going to discuss the do's and don'ts of naming conventions. What I want to know is whether there is still a performance impact from using the sp_ prefix.

For our test, we'll use the AdventureWorks2012 database. First, we need to create two new stored procedures that select from the Person.Person table.

USE AdventureWorks2012;
GO

CREATE PROCEDURE dbo.sp_SelectPerson AS SELECT * FROM Person.Person;
GO

CREATE PROCEDURE dbo.SelectPerson AS SELECT * FROM Person.Person;
GO

Next, we’ll clear the procedure cache, and then execute each procedure once to compile it and to ensure all the data pages are in the buffer.

DBCC FREEPROCCACHE;
GO

EXEC dbo.sp_SelectPerson;
GO

EXEC dbo.SelectPerson;
GO

Next, we'll execute each stored procedure 100 times using SQLQueryStress and compare the results.

Total time to execute sp_SelectPerson was 3 minutes 43 seconds, but only 3 minutes 35 seconds to execute SelectPerson. Given this test run was only 100 iterations, 8 seconds is a huge amount of savings.

We can even query sys.dm_exec_procedure_stats to get the average worker time in seconds and average elapsed time in seconds for each procedure.
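A query along these lines will do it; total_worker_time and total_elapsed_time are reported in microseconds, so we divide by 1,000,000 to convert to seconds:

```sql
-- Average worker (CPU) time and elapsed time per execution, in seconds
SELECT
    OBJECT_NAME(ps.object_id, ps.database_id) AS 'ProcName'
    ,ps.execution_count
    ,ps.total_worker_time  / ps.execution_count / 1000000.0 AS 'AvgWorkerTime_s'
    ,ps.total_elapsed_time / ps.execution_count / 1000000.0 AS 'AvgElapsedTime_s'
FROM sys.dm_exec_procedure_stats ps
WHERE OBJECT_NAME(ps.object_id, ps.database_id) IN ('sp_SelectPerson', 'SelectPerson');
GO
```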