Automating Performance with ExecStats

For several years I have made my (free) SQL Server performance tool
ExecStats publicly available
(download at
http://www.qdpma.com/ExecStatsZip.html).
I have recently improved existing features and added new ones (build 2013-09-23 or later)
that could be useful, so I am asking people to give it a try
and please do send feedback.

I started working on ExecStats when SQL Server 2005, for the first time,
made sufficient information available via the
Dynamic Management Views
(DMVs) and functions to do a pretty good job of performance analysis.
The features added to the DMVs over the more recent versions
have further improved the ability to assess SQL Server health.

In SQL Server 2000, we had to use Profiler to start a Trace and collect data for a sufficient
period of time. The information from various DMVs is available at any given point in time,
and depending on the degree to which execution plans are retained in the procedure cache,
could provide a reasonably accurate assessment of the workload on SQL Server.
Of course Profiler and Trace are still useful in the exception cases
(and Microsoft wants to move to
Extended Events going forward).

Two starting points in performance tuning with DMVs are
dm_exec_query_stats,
which keeps execution statistics for entries in
the plan cache, and
dm_index_usage_stats,
which keeps index usage statistics.
These avenues can be pursued independently, but working both together is best.
The dm_exec_query_stats view has sql and plan handles that link to DMFs for the SQL text
(dm_exec_sql_text)
and the execution plan
(dm_exec_text_query_plan).
Inside the XML plan is information such as which indexes are used by each SQL statement
and the access method (seek, scan, lookup, update, etc.).
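The starting point can be sketched along these lines (the TOP count and the ordering column are arbitrary choices for illustration):

```sql
-- Top cached statements by worker time, with the statement text and XML plan
SELECT TOP 20
    qs.execution_count,
    qs.total_worker_time,
    qs.total_logical_reads,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle,
        qs.statement_start_offset, qs.statement_end_offset) qp
ORDER BY qs.total_worker_time DESC;
```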

It should quickly become evident that while SQL is a great language for data access,
it is not well suited to step-by-step processing; procedural programming languages
are designed for that purpose.
This is why ExecStats is a C# program and not a massively foreboding morass of dynamically generated SQL.
ExecStats parses top execution plans to build a cross-reference of
index usage by SQL statement.
Some other tools and scripts may attempt to parse 5 or 10 execution plans.
ExecStats default is 1000, and can parse the entire contents of the plan cache if desired.
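ExecStats does the parsing in C#, but the idea can be sketched in T-SQL with XQuery against the showplan XML (a sketch only; a real parser must handle missing attributes, parallel plans, and much larger caches):

```sql
-- List the index objects referenced by the top cached plans
WITH XMLNAMESPACES (DEFAULT
    'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 100
    qs.plan_handle,
    obj.value('(@Database)[1]', 'sysname') AS database_name,
    obj.value('(@Table)[1]',    'sysname') AS table_name,
    obj.value('(@Index)[1]',    'sysname') AS index_name
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
CROSS APPLY qp.query_plan.nodes('//Object') AS n(obj)
ORDER BY qs.total_worker_time DESC;
```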

Working from the index usage stats alone, it is possible to identify unused indexes.
It is also reasonable to guess that indexes on the same table with the same leading key might be consolidated.
However, this is not certain, and no good assessment can be made for infrequently used indexes.
By building the full index usage to SQL statement cross-reference map via the execution plans,
it is possible to determine where each index is used.
This allows a reliable determination for the minimum set of good indexes.
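The unused-index side of this can be sketched directly from the DMV (keep in mind the counters reset at each instance restart, so the uptime must cover a representative workload):

```sql
-- Nonclustered indexes in the current database with no recorded
-- seeks, scans, or lookups (but possibly still paying an update cost)
SELECT OBJECT_NAME(i.object_id)     AS table_name,
       i.name                       AS index_name,
       COALESCE(us.user_updates, 0) AS user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats us
       ON us.database_id = DB_ID()
      AND us.object_id   = i.object_id
      AND us.index_id    = i.index_id
WHERE i.index_id > 1
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND COALESCE(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0;
```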

Now that the capability to parse execution plans has been established,
the scope can be expanded because there are other sources for execution plans.
One option is to simply get a list of all the stored procedures in the database
via sys.procedures
to generate the estimated execution plans (for NULL or default parameter values).
I did this once for a client, and from the trapped error messages,
there were nearly one hundred procedures that referenced tables which no longer existed.
These were obsolete procedures that no one had identified for removal even though
the underlying tables had been dropped.
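SET SHOWPLAN_XML must be the only statement in its batch, which is one reason this is easier to drive from client code than from T-SQL. Per procedure, the sketch looks like this (dbo.SomeProcedure and its parameter are placeholder names):

```sql
-- Enumerate the procedures to test
SELECT SCHEMA_NAME(p.schema_id) AS schema_name, p.name
FROM sys.procedures p;
GO
-- Then, for each one: request the estimated plan instead of executing.
-- Errors here expose procedures whose referenced tables no longer exist.
SET SHOWPLAN_XML ON;
GO
EXEC dbo.SomeProcedure @Param1 = NULL;  -- placeholder procedure and parameter
GO
SET SHOWPLAN_XML OFF;
GO
```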

Another option is to maintain a list of SQL,
which could be stored procedures with parameter values,
either for generating the estimated execution plan only
or to be executed for the actual execution plan, which has additional information.
In addition, certain stored procedures could be tested multiple times to expose the impact of compile
parameters or the different code paths taken by control-of-flow keywords.

ExecStats automates the data collection for all of this.
Results can be collected and archived on a regular basis, and especially before and after code or other changes.
It is especially important to consider that execution plans can change with both compile parameters
and data distribution statistics.
When there are complaints of poor performance, one could investigate for top resource consuming SQL,
but it is immensely helpful to have previous characteristics for comparison.
This includes execution statistics, the execution plan, compile parameters, indexes,
and even the data distribution statistics last update.

In 2012, I integrated a previously separate program for performance monitoring into ExecStats.
The key point in ExecStats performance monitoring is that important system and storage architecture
details are captured and incorporated into the display.
Too many of the third-party tools were obviously developed on a desktop with 1 processor and 1 disk.
It is clear that such a tool was built by someone who got a list of counters with "best practice" thresholds
but very little real understanding of what any of it actually means,
and certainly not by someone who solves SQL Server performance problems on a day-to-day basis,
or has ever done it at all.

There are two recent additions in 2013-Sep.
One pertains to index fragmentation, and the other to fn_virtualfilestats.
Technically we should use the DMF
dm_db_index_physical_stats,
which replaces DBCC SHOWCONTIG, to assess fragmentation.
However the effort to run dm_db_index_physical_stats on large tables is substantial.
There are several pieces of information that can be used to provide a clue on the state of
fragmentation, but only for nonclustered indexes without included columns.
The DMV dm_db_partition_stats tells us the number of rows and the size of the index,
from which we can calculate the average bytes per row.
Next, DBCC SHOW_STATISTICS tells us the average key size,
which includes both the nonclustered index key portion and the clustered index key portion.
Keeping in mind that the row overhead is 10-12 bytes,
we can compare the value calculated from dm_db_partition_stats
to the key length from statistics plus row overhead to determine whether it is reasonable.
See Paul Randal at SQLskills,
anatomy-of-a-record,
for more on the record overhead, which also applies to index records.

edit
The previous versions of ExecStats display the index row count and size in the Index Usage tab,
and the Average Key Length in the Dist Stats tab.
The newer version also copies this value into the Index Usage tab, in the third column from the right, as AvKyL.
For now, compute 1024 × Used KB / Rows to compare with AvKyL.
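The same calculation can be sketched in T-SQL (a sketch; 1024 × Used KB / Rows is equivalent to used pages × 8192 / rows):

```sql
-- Rough fragmentation check for nonclustered indexes without included
-- columns: actual average bytes per row from the partition stats
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       ps.row_count,
       ps.used_page_count * 8192.0 / NULLIF(ps.row_count, 0)
           AS avg_bytes_per_row
FROM sys.dm_db_partition_stats ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.index_id > 1 AND ps.row_count > 0;
-- Compare avg_bytes_per_row with the Average Key Length reported by
-- DBCC SHOW_STATISTICS('dbo.MyTable', 'IX_MyIndex')  -- placeholder names
-- plus the 10-12 bytes of row overhead; a much larger actual value
-- suggests internal free space, i.e. fragmentation.
```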

The function fn_virtualfilestats provides useful
file IO statistics at the SQL Server file level.
In a complex environment, there are many files for the key databases spread over many volumes
on the storage system (each seen as a physical disk by the Windows operating system).
It is a simple matter to roll up the file IO statistics by database or filegroup.
It was not simple to roll up the file IO statistics on a volume basis
because complex storage systems are typically implemented with mount points.
The new (as of SQL Server 2008 R2) DMF
dm_os_volume_stats
provides the mount point.
So now ExecStats rolls up file IO by filegroup, database and volume.
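The volume rollup can be sketched with the DMF form of the file statistics, sys.dm_io_virtual_file_stats, joined to the volume information (SQL Server 2008 R2 or later):

```sql
-- Roll up file IO statistics by database and OS volume (mount point)
SELECT DB_NAME(vfs.database_id)       AS database_name,
       vs.volume_mount_point,
       SUM(vfs.num_of_reads)          AS reads,
       SUM(vfs.num_of_bytes_read)     AS bytes_read,
       SUM(vfs.num_of_writes)         AS writes,
       SUM(vfs.num_of_bytes_written)  AS bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
CROSS APPLY sys.dm_os_volume_stats(vfs.database_id, vfs.file_id) vs
GROUP BY DB_NAME(vfs.database_id), vs.volume_mount_point
ORDER BY database_name, vs.volume_mount_point;
```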

edit
Below is the previous version of Databases, showing database size information.

The new file IO rollup by database is at the far right, as below.

The new file IO rollup by OS Volume is in the new Volumes tab.

So feel free to give ExecStats a try and please send feedback.
If anyone would like to volunteer to write proper documentation, that would be especially appreciated.

It has been brought to my attention that the performance monitoring mode (Perf Ctr) fails when the SQL Server performance counters are not accessible. I will try to make changes so that it continues to run, displaying only system performance counters plus information from SQL queries to DMVs. For now, the latest build is http://www.qdpma.com/tools/ExecStats_20130925.zip, but the permanent link with redirect should work as intended: http://www.qdpma.com/ExecStatsZip.html