Eric Stimpson (6/10/2008): You can examine the contents of the trace after you've stopped it by using this select:

Therein lies the rub, I think. A trace used for auditing would need to be running as long as the server is running or you would have to deny access to the database while you examine the log.

Since I am not really familiar with server-side traces, can you output the results to a table like you can with the Profiler GUI? I would assume you can, but then you have the performance hit of the inserts, as G mentioned in one of his posts.

Jack Corbett

Applications Developer

Don't let the good be the enemy of the best. -- Paul Fleming

At best you can say that one job may be more secure than another, but total job security is an illusion. -- Rod at work

Eric Stimpson (6/10/2008): You can examine the contents of the trace after you've stopped it by using this select:

Therein lies the rub, I think. A trace used for auditing would need to be running as long as the server is running or you would have to deny access to the database while you examine the log.

Since I am not really familiar with server-side traces, can you output the results to a table like you can with the Profiler GUI? I would assume you can, but then you have the performance hit of the inserts, as G mentioned in one of his posts.

I was going to reply and correct that statement, but you beat me to it. No, the trace does NOT need to be stopped to examine the active trace file with the fn_trace_gettable function. And yes, the results of the query can be inserted into a table, just like with any other table-valued function.

If the overhead of inserting the trace into a table is that much of a concern, you have options. You can write the trace file to a share and have a different instance grab and import it for example.
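As a quick sketch of reading a trace file into a table with fn_trace_gettable (the path and table name here are placeholders; on SQL 2000 the function has to be invoked with the `::` prefix):

```sql
-- Load a trace file (plus any rollover files, via DEFAULT) into a new table.
-- Path and table name are examples only -- point this at wherever your trace writes.
SELECT *
INTO dbo.TraceAudit
FROM ::fn_trace_gettable(N'C:\Traces\AuditTrace.trc', DEFAULT);
```

If the target table already exists (for example, on the other instance that imports the file from the share), an INSERT INTO ... SELECT against the same function works just as well.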

Also, for those of you who are not familiar or comfortable with setting up custom traces, you can actually use Profiler to set up a trace and then export the definition as a .sql file. This is a pretty good way to start cracking the somewhat arcane syntax of the trace events, etc.
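To give a feel for what such an exported definition looks like, here is a minimal hand-written sketch (the file path is a placeholder, and the event/column numbers shown are just a small example set; a real Profiler export lists many more sp_trace_setevent calls):

```sql
-- Hypothetical minimal server-side trace definition.
DECLARE @TraceID int, @maxfilesize bigint;
SET @maxfilesize = 50;                          -- max file size in MB

-- Option 2 = TRACE_FILE_ROLLOVER; note the path takes no .trc extension
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\AuditTrace', @maxfilesize;

-- Event 10 = RPC:Completed; columns 1 = TextData, 11 = LoginName, 14 = StartTime
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
EXEC sp_trace_setevent @TraceID, 10, 11, @on;
EXEC sp_trace_setevent @TraceID, 10, 14, @on;

EXEC sp_trace_setstatus @TraceID, 1;            -- 1 = start the trace
SELECT @TraceID AS TraceID;                     -- keep this ID so you can stop it later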

/*****************

If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek

I was going to reply and correct that statement, but you beat me to it. No, the trace does NOT need to be stopped to examine the active trace file with the fn_trace_gettable function. And yes, the results of the query can be inserted into a table, just like with any other table-valued function.

In SQL 2000 when using fn_trace_gettable, if you do not stop the trace you cannot read the complete trace. In fact, there will be a sizeable number of records that have not yet been written to disk. Only stopping the trace forces all records to be written to the trace file and allows you to read the entire contents into a table. SQL 2005 does not have this issue. You can read the entire contents of the trace file even while it is running.

So if you're on SQL 2000, all you need to do is start a new trace before you close the previous one, and you will not lose any transactions. Furthermore, your now-closed trace can include the call to sp_trace_setstatus to show that a new trace was started. In a scenario where you need to trace continuously, you can create a job that runs once per day and uses the date in the trace filename. Even in SQL 2005, I would still use this method, as it allows the trace files from previous days to be moved to another server and compressed, archived, etc.
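A daily job step along those lines might look something like this (the path, trace IDs, and sizes are all examples; in practice the old trace ID would come from wherever you stored it, such as a control table):

```sql
-- Hypothetical daily rollover step.
-- Start the new trace FIRST, then stop the old one, so no activity slips through.
DECLARE @NewTraceID int, @OldTraceID int, @maxfilesize bigint, @path nvarchar(245);
SET @OldTraceID  = 2;                           -- example value; look this up in practice
SET @maxfilesize = 50;
SET @path = N'C:\Traces\Audit_' + CONVERT(nvarchar(8), GETDATE(), 112);  -- yyyymmdd

EXEC sp_trace_create @NewTraceID OUTPUT, 2, @path, @maxfilesize;  -- 2 = file rollover
-- ... repeat the sp_trace_setevent calls from the original trace definition ...
EXEC sp_trace_setstatus @NewTraceID, 1;         -- 1 = start the new trace

EXEC sp_trace_setstatus @OldTraceID, 0;         -- 0 = stop the old trace
EXEC sp_trace_setstatus @OldTraceID, 2;         -- 2 = close it and delete its definition
```

Once the old trace is closed, yesterday's dated file is free to be imported, compressed, or shipped off to an archive server.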

Eric, that's a good point about 2000. Traces in SQL 2005 have been significantly improved in several ways, including making them generally less expensive. I just assumed that we were talking about 2005, and I personally haven't worked with a 2000 instance for almost two years, so I just don't think about it much anymore... my mistake.


In my case, it was necessary not only to show that a certain user ran a certain query at a certain time (as the trace would do), but also to show which records resulted and the state of certain important columns at the time, hence the need for active logging from the select procedure. Also, management wanted real-time info and did not want to archive old info. Admittedly this was not a huge database, but effective normalization keeps the history table manageable.

Thomas Keller (6/10/2008): In my case, it was necessary not only to show that a certain user ran a certain query at a certain time (as the trace would do), but also to show which records resulted and the state of certain important columns at the time, hence the need for active logging from the select procedure. Also, management wanted real-time info and did not want to archive old info. Admittedly this was not a huge database, but effective normalization keeps the history table manageable.

I have had databases where we needed to be able to track data changes and be able to present a view of the data as it existed at any point in time. This is best accomplished via triggers and archive/history tables. When combined with a trace that captures who ran what query and when, you could "recreate" what a given user saw at a given point in time without having to store the actual result set returned.
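A minimal sketch of that trigger/history-table pattern, assuming a hypothetical dbo.Customer table (column lists would be adapted per table):

```sql
-- Hypothetical history table: one row per change, stamped with who and when.
CREATE TABLE dbo.CustomerHistory (
    CustomerID   int          NOT NULL,
    CustomerName varchar(100) NULL,
    Action       char(1)      NOT NULL,                 -- 'U' = update, 'D' = delete
    ChangedBy    sysname      NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt    datetime     NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change image of every affected row
    INSERT INTO dbo.CustomerHistory (CustomerID, CustomerName, Action)
    SELECT d.CustomerID, d.CustomerName,
           CASE WHEN EXISTS (SELECT * FROM inserted) THEN 'U' ELSE 'D' END
    FROM deleted AS d;
END
```

Joining the history rows back by key and filtering on ChangedAt is what lets you reconstruct the state of the data as of any given point in time.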


Nice work! I've implemented similar processes and found that they work well. One thing to beware of with this approach: if you make a change to a base table, you need to be sure to update the audit table and trigger as well.

If you plan to be able to "recreate" a result set using audit tables and the underlying query, keep in mind that a join to a non-audited table could make it impossible to recreate the desired result set, even just to determine which rows were returned. You're stuck with either auditing all tables that might be joined to the table you're concerned with, or relying on transaction log backups to restore the database to a particular point in time and rerun the query when a non-audited table is used.