I personally turn Profiler on for a while, capturing "statement completed" events for the database in question. I use a filter of Reads >= 20 and Duration > 1000 to cut down on the amount of data captured.
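For anyone scripting this rather than clicking through the Profiler GUI, the same filters can be expressed with sp_trace_setfilter. This is just a sketch, assuming @TraceID holds the ID returned by sp_trace_create; the column and operator IDs are the documented ones:

```sql
-- Sketch: server-side equivalents of the Profiler filters above.
-- Column IDs: Reads = 16, Duration = 13; logical operator 0 = AND;
-- comparison operator 4 = "greater than or equal".
DECLARE @Reads BIGINT, @Duration BIGINT
SET @Reads = 20
SET @Duration = 1000000  -- in SQL Server 2005, server-side Duration is in
                         -- microseconds, while Profiler displays milliseconds

EXEC sp_trace_setfilter @TraceID, 16, 0, 4, @Reads      -- Reads >= 20
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @Duration   -- Duration >= 1 second
```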

I find it irritating that you cannot (at least in 2000) specify NOT NULL in the filters.

Anyway, I almost always find some evil code that takes over a minute to run and causes hundreds of thousands of disk reads.

It is quick and dirty, but it has not failed me yet.

And for those who are hyper-paranoid about Profiler causing performance problems of its own: today's computers are powerful enough to handle Profiler in short bursts. I would rather solve the code-related problem quickly than worry too much about Profiler-induced overhead.

Hello Gary. I don't think I mentioned this in my article, but one requirement for the trigger to provide information is that the trace table (and hence the trigger) needs to be on the same database server (or instance), since DBCC INPUTBUFFER can only see SPIDs that are local; there is no requirement, however, as to which actual database it resides in. I am not sure if this is what you already tried, but it's my initial guess without knowing exactly what you tried. If you are still having the problem, did you use the same events that the article suggested? Certain events will never be able to show DBCC info.
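For reference, the command the trigger relies on is run per-SPID and only sees sessions on the local instance:

```sql
-- Shows the last batch submitted by the given session (local instance only).
-- WITH NO_INFOMSGS suppresses the "DBCC execution completed" message.
DBCC INPUTBUFFER(@@SPID) WITH NO_INFOMSGS
-- Returns one row with three columns: EventType, Parameters, EventInfo
```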

I have implemented Michael's article successfully. I have a SQL trace running and capturing data into the table IndexCapture.

But when I try to implement the trigger provided by Solomon on table IndexCapture (I changed the table name in the trigger), the trigger gets created successfully, but immediately afterwards an error pops up in SQL Profiler, 'Failed to save trace data to table', and the trace is stopped.

Any ideas why this might be happening? I am working on SQL Server 2005 SP2, not SQL Server 2000.

The table and trigger are in the same database and in the same SQL Server instance.

KB (11/8/2007): But when I try to implement the trigger provided by Solomon on table IndexCapture (I changed the table name in the trigger), the trigger gets created successfully but immediately after that an error pops up in SQL Profiler 'Failed to save trace data to table' and the trace is stopped.

Hello KB. I think I found the problem. In SQL Server 2005, the "EventInfo" field returned by the DBCC INPUTBUFFER command increased in size from 255 to 4000 characters. So the CREATE TABLE #DBCCInfo line should look as follows:
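The code block from the original post did not survive here, but based on the DBCC INPUTBUFFER result set documented in Books Online, the corrected temp table would look something like this (a reconstruction, not the original post's exact code):

```sql
-- Matches the DBCC INPUTBUFFER result set per Books Online.
-- EventInfo grew from NVARCHAR(255) in SQL Server 2000 to NVARCHAR(4000) in 2005.
CREATE TABLE #DBCCInfo
(
    EventType  NVARCHAR(30),
    Parameters INT,
    EventInfo  NVARCHAR(4000)
)
```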

After adding the trigger, if the trace stops for any reason, here is a way to test the trigger:

INSERT INTO dbo.TraceTable (SPID) VALUES (@@SPID)

I tried this and got an error stating that data would be truncated. That led me to look at the temp table definition and then to Books Online to see the values that DBCC INPUTBUFFER was reporting.

ALSO, it appears that I was a bit hasty in posting this the first time, so I am editing now to add this paragraph. It seems that even after increasing the EventInfo field to 4000, it still gave the 'Failed to save trace data to table' error and stopped the trace. After more investigating, I was able to work around the problem: for some reason, adding the trigger to the table while the trace is running causes the error. I am not exactly sure why, but that is definitely the problem. So, the trick to fix it is to follow these steps:

1) Start the trace (this creates the table)
2) Pause the trace
3) Run the script to create the trigger
4) Un-pause the trace
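If you manage traces from T-SQL rather than the Profiler GUI, the pause/resume in steps 2 and 4 maps to sp_trace_setstatus. This is only a sketch, assuming @TraceID holds the trace's ID (note that server-side traces write to a file rather than a table, so the GUI steps above are the ones that match the article's setup):

```sql
-- Sketch: stop a trace, change the trace table's trigger, then restart it.
-- sp_trace_setstatus status values: 0 = stop, 1 = start, 2 = close and delete.
EXEC sp_trace_setstatus @TraceID, 0   -- stop (pause) the trace
-- ... run the CREATE TRIGGER script on the trace table here ...
EXEC sp_trace_setstatus @TraceID, 1   -- restart the trace
```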

Thanks for that. Once we ran Profiler and the trigger on the same box as the database, it worked and started to populate the TextData column.

We are monitoring the Scan:Started event and trying to capture the SQL code that is using the index. The trigger populates the TextData column, but only with cursor operations, e.g. sp_cursor, sp_cursorprepexec, sp_cursorunprepare, etc.

Gary G (11/9/2007): Do you know any way to actually capture the SQL being executed?

Hey again, Gary. Some events simply do not offer that much info, unfortunately. However, I was able to modify the UPDATE statement to include the only other insight into the executed SQL that I am aware of (check out the sys.dm_exec_sql_text(sql_handle) dynamic management function). Just replace your UPDATE statement completely with this one (the differences are the SET clause and the 3 lines between the CROSS JOIN and the WHERE):
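The original code block did not survive in this copy of the thread. As a rough sketch of what such an UPDATE might look like, assuming a trace table dbo.TraceTable with SPID, RowNumber, and TextData columns and the DBCC output sitting in #DBCCInfo (all of these names, and the @RowNumber placeholder, are assumptions based on the article's setup, not the original post's exact code):

```sql
-- Sketch only: concatenates DBCC INPUTBUFFER output with sys.dm_exec_sql_text.
-- Assumed names: dbo.TraceTable (SPID, RowNumber, TextData), #DBCCInfo (EventInfo).
UPDATE tt
SET    tt.TextData = ISNULL(db.EventInfo, N'')
                   + NCHAR(13) + NCHAR(10)
                   + ISNULL(st.[text], N'')
FROM   dbo.TraceTable tt
CROSS JOIN #DBCCInfo db
LEFT JOIN sys.dm_exec_requests er
       ON er.session_id = tt.SPID          -- find the session's current request
OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) st   -- batch text, if any
WHERE  tt.RowNumber = @RowNumber           -- the row just inserted by the trace
```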

Now, keep in mind that often enough the output from DBCC INPUTBUFFER is the same as the "text" field returned by dm_exec_sql_text(). But if it is ever different then this will certainly show it as it will always display both.

Also, keep in mind that the above modification to the original UPDATE only works in SQL Server 2005. If you want it for SQL Server 2000, then that would take a little more work to re-engineer it to use the ::fn_get_sql() function since there is no CROSS APPLY in SQL Server 2000.
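As a rough illustration of the SQL Server 2000 route (a sketch of the function itself, not the re-engineered trigger): without CROSS APPLY, you have to fetch the handle into a variable first.

```sql
-- SQL Server 2000 sketch: get the current statement text for one SPID.
-- fn_get_sql only resolves handles for batches still in the cache/running.
DECLARE @Handle BINARY(20)

SELECT @Handle = sql_handle
FROM   master.dbo.sysprocesses
WHERE  spid = @@SPID

SELECT [text]
FROM   ::fn_get_sql(@Handle)
```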

Good article, Solomon. I will try to implement it on our Siebel server. Siebel with SQL Server is a deadly combination! Performance has been an issue of late. (By the way, this is a new project assigned to me after a company acquisition.)